The HPC mob are strange; they push computing in all sorts of ways, and eventually conventional computing catches up. They achieve throughput (not speed) by having embarrassingly parallel problem sets, and it will be a long time before desktop computing is like that. A core prerequisite is applications and parts of the operating system being (re)coded in languages which presume multicore parallelism - and that excludes C/C++.
They usually write things in FORTRAN, or perhaps Java these days. When I worked at the HPC centre in Stockholm, we had one computer whose work was limited by the network bandwidth between two universities: the machine sat at one of them, but the user base and the home directories (AFS) they posted their work into were at the other. Their driver scripts were usually in Perl.
Also, they build machines (i.e. clusters) in different ways depending on the workload. A task that can easily be subdivided into independent calculations (SETI@home stuff) goes easy on the interconnect between computers, but things like fluid dynamics or combustion chamber modelling (diesel engines) consist of calculations whose intermediate results influence and are influenced by other parts of the job. The partitioning into parallel tasks is done over a geometrical grid laid across the space being modelled ("you shall compute the pressure/temperature/turbulence/whatever in this mm²"), and thus interdependencies between neighbouring cells are created.
The former type of workload is easily handled by 10G Ethernet (which was 1G Ethernet when I worked there), but the latter needs an interconnect with much lower latency still. Back then InfiniBand was very popular. The last computer they bought is a Cray with a proprietary interconnect, but they're looking for a new one now. As you'd expect, it is not so much the bandwidth as the latency that is the crucial spec. A bad fit for the "C5 Galaxy full of tapes" kind of transport.
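To put rough numbers on why latency is the spec that matters, here is a back-of-envelope sketch using the usual T = latency + size/bandwidth message-cost model. The latency and bandwidth figures below are guesses for illustration, not measurements from those machines:

    /* Back-of-envelope message cost: T = latency + size / bandwidth.
       All figures are illustrative, not measured. */
    #include <stdio.h>

    static double msg_time(double latency_s, double bw_bytes_per_s, double bytes)
    {
        return latency_s + bytes / bw_bytes_per_s;
    }

    int main(void)
    {
        double bytes = 8.0 * 128;  /* a small halo message: 128 doubles */

        /* Rough ballpark figures (assumed): commodity Ethernet vs InfiniBand. */
        double t_eth = msg_time(50e-6, 1.25e8, bytes);  /* ~50 us, 1 Gbit/s  */
        double t_ib  = msg_time(1e-6,  1.25e9, bytes);  /* ~1 us,  10 Gbit/s */

        printf("Ethernet:   %.1f us per message\n", t_eth * 1e6);
        printf("InfiniBand: %.1f us per message\n", t_ib * 1e6);
        /* With messages this small the latency term dominates the transfer
           term, so the interconnect's latency, not its bandwidth, sets the pace. */
        return 0;
    }

For huge bulk transfers (or the C5 full of tapes) the bandwidth term wins; for thousands of tiny boundary messages per second it is nearly all latency.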