Your colleagues back then would probably not have used the term "unreliable", but rather "non-deterministic". And that's a fair characterization of Ethernet.
Firstly, I suspect he knows what his colleagues said, and putting words into their mouths to suit your argument ought to be a bit infra dig. Secondly, this is a custom build of a switched Ethernet network inside a product; it isn't a general network. It can be made as deterministic as one wishes, if deterministic is even a necessary property for this particular network application.
I think we should read that as 'Your colleagues back then would probably have used the term "unreliable", meaning "non-deterministic"', in the interest of fairness. That is what the circuit-switching people told us IP people back in those days, so I think it is a fair assumption.
(The text below is not trying to lecture people who know better; I'm just trying not to make too many errors in describing the problem at hand to the audience, as I understand it. And perhaps I'll learn something when I'm told how wrong I am.)
On the second point, determinism is just a matter of how much PDV (what sales people call "jitter") one tolerates in packet delivery, as long as there is no packet loss. (A switch is able to drop packets; if you've done your homework it probably won't, and the drop rate will be about as service-affecting as bit errors on a well-maintained PDH/SDH circuit.) The core question, of course, is where the queues are. The synchronous case will not admit packets/frames at the edge faster than they can be forwarded end-to-end, whereas switched Ethernet will accept line rate regardless of the provisioning level upstream. As long as data rates are compatible with system bandwidth, the Ethernet will be as deterministic as the synchronous net.
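A toy sketch of that admission argument (all numbers are made up, in abstract packets per tick rather than real Gb/s): as long as the offered load at a port never exceeds the line rate, the queue depth, and hence the delay variation, stays bounded; push load past the line rate and the queue grows without bound.

```python
# Toy fluid model of a single switch port queue. Rates are illustrative
# integers in packets per tick, not real link speeds.

def simulate(arrival_rate, line_rate, steps=1000):
    """Return the maximum queue depth seen over the simulation."""
    queue = 0
    max_depth = 0
    for _ in range(steps):
        queue += arrival_rate              # packets admitted this tick
        queue = max(0, queue - line_rate)  # packets drained this tick
        max_depth = max(max_depth, queue)
    return max_depth

# Offered load within capacity: queue drains every tick, PDV is bounded.
print(simulate(arrival_rate=8, line_rate=10))   # -> 0
# Offered load above capacity: queue (and delay) grows without bound.
print(simulate(arrival_rate=12, line_rate=10))  # -> 2000
```

The synchronous network enforces the first case by construction at the edge; switched Ethernet relies on you provisioning for it.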
Where Ethernet (or any hop-by-hop technology with varying speeds) has problems is when line speed decreases. If you're feeding your wiring-closet switch with 10GE, it has data-hungry clients connected at 1GE, and the server is connected to the core switch in the machine room with 10GE (all in all not an unrealistic situation), the quality of the wiring-closet switch is going to be much more critical than that of the core switch, because the downconversion creates a queue of packets waiting to go out on the slow interface. Such queues need fast packet-buffer memory (high-speed SRAM; the TCAM in a switch serves forwarding and ACL lookups, not buffering) so as not to slow down packet forwarding. If the buffers are full, the switch will drop outgoing packets. Such memory is expensive, which means it is a scarce resource and needs careful management. This is one of the things that sets switches apart, and why some switches cost more than others.
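The downconversion scenario can be sketched the same way (again with made-up numbers: ten packets per tick in, one per tick out, standing in for 10GE feeding 1GE): a sustained burst fills the finite egress buffer, after which every excess packet is tail-dropped.

```python
# Sketch of a 10:1 speed downconversion at a switch egress port with a
# finite packet buffer. All rates and sizes are illustrative.

def downconvert(burst_ticks, ingress_rate=10, egress_rate=1, buffer_pkts=64):
    """Return (packets sent, packets dropped) over a sustained burst."""
    queue, drops, sent = 0, 0, 0
    for _ in range(burst_ticks):
        for _ in range(ingress_rate):      # packets arriving this tick
            if queue < buffer_pkts:
                queue += 1                 # buffered in fast packet memory
            else:
                drops += 1                 # buffer full: tail drop
        drained = min(queue, egress_rate)  # onto the slow egress link
        queue -= drained
        sent += drained
    return sent, drops

# Short burst fits in the buffer; a sustained one overflows it.
print(downconvert(burst_ticks=5))   # -> (5, 0)
print(downconvert(burst_ticks=20))  # -> (20, 117)
```

With a bigger (more expensive) buffer the drops start later, which is exactly why buffer sizing separates a good wiring-closet switch from a cheap one.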
From this it is trivial to deduce that a single-speed Ethernet is much easier to make almost deterministic. And if that is the case here, it mostly shows that the original idea would have been feasible; as has been mentioned, this was already proven by Juniper.