In the real world, TCP/IP tries to push massive bandwidth over unreliable links in unreliable environments, and that tuning takes 99% of the effort. A byte stream over unreliable packets isn't really the optimal abstraction, especially if the next layer parses the byte stream back into delimited packets. But it's what we have.
Of course, if you just assume that packet loss is small and malformed headers don't occur, it's easy enough to write an implementation to the specification and just keep a buffer of the latest packets for resending.
Until you see that this results in unsatisfactory performance: buffer bloat, awkward delays whenever a packet is lost, the cat video not playing properly.
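That naive approach really is easy to sketch. Something like the following "buffer of latest packets" is all you need on the sending side (a minimal sketch; the names, capacity, and API are my own invention, not from any real stack):

```python
from collections import OrderedDict

class ResendBuffer:
    """Keep the last `capacity` sent packets so they can be resent on request."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.packets = OrderedDict()  # seq -> payload, oldest first

    def store(self, seq, payload):
        # Remember this packet; evict the oldest once we're over capacity.
        self.packets[seq] = payload
        while len(self.packets) > self.capacity:
            self.packets.popitem(last=False)

    def resend(self, seq):
        # Return the payload if we still have it, else None (fell out of the buffer).
        return self.packets.get(seq)
```

The failure mode is exactly the one above: a fixed buffer with no flow control or retransmission timing is where bufferbloat and stalls come from.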
Open source implementations have the best chance of evolving in the right direction. Heck, even Microsoft used BSD code for their TCP/IP stack!
There is one particular option that is surprisingly widely used: if you only need a very limited kind of TCP, don't use TCP at all; build your own simple protocol over UDP!
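To show how little "a very limited kind of TCP" can be, here's a stop-and-wait sketch over UDP on loopback: a sequence number, an echo ack, and retry on timeout is the entire protocol. This is my own toy illustration, not any established design:

```python
import socket
import threading

def make_pair():
    # Two UDP sockets on loopback, standing in for the two endpoints.
    a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    a.bind(("127.0.0.1", 0))
    b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    b.bind(("127.0.0.1", 0))
    return a, b

def send_reliable(sock, dest, seq, payload, timeout=0.2, retries=5):
    # Stop-and-wait: send one frame, block for its ack, resend on timeout.
    sock.settimeout(timeout)
    frame = seq.to_bytes(4, "big") + payload
    for _ in range(retries):
        sock.sendto(frame, dest)
        try:
            ack, _ = sock.recvfrom(4)
            if int.from_bytes(ack, "big") == seq:
                return True
        except socket.timeout:
            continue  # lost packet or lost ack: just resend
    return False

def receive_one(sock):
    # Receive a frame and ack it by echoing the sequence number back.
    data, addr = sock.recvfrom(2048)
    seq, payload = data[:4], data[4:]
    sock.sendto(seq, addr)
    return payload
```

Of course, the moment you need ordering, windows, or congestion control, you're reimplementing TCP badly, which is the whole point of the thread.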
But NorthGuy is right that there are special cases where custom solutions are not that hard to do. The risk is in misidentification: what if the use case grows? Well, nothing prevents you from turning back at that point. But the more time you spend on the "simple" implementation, the more attached you become to it. That is the danger of NIH.
But yes, the dangers of NIH, while real, are often exaggerated, while the dangers of code reuse and libraries, equally real, are often belittled. You just need to understand the particular case you're working with really well to make an informed choice.