A friend who works for an ISP (Orange) told me it's all because of the stupid people who insist their life depends on 4K and Netflix. Huge B/W is required for that, and everybody connects at the same time in the evening... so the ISPs, all of them here, prioritise their B/W to serve the Netflix and television/streaming services, to make sure they don't drop a frame or anything and keep all the TV and series addicts happy... and the people surfing the web get whatever is left of the B/W, i.e. they get squat: way, way, wayyyyyy less than the high speed they paid for and were promised!
This is quite typical for southern Europe. As people, for various reasons, started working from home a couple of years ago, the ISPs in southern/central Europe discovered what careless oversubscription in backbone and distribution will do to your network when people actually insist on using it.
Some very instructive lessons were learned by all involved.
Up here, we mostly survived unscathed, because the network was much better built. I was present at a "crisis" meeting of engineers from all the big ISPs and quite a few of the small ones when usage patterns really started to change. We concluded that our loads were, all in all, manageable. A bit more traffic in the day, some (but not much) cutting of peaks in the new-traditional "hot hours" from 19 to 23 at night, and no real problems. The hardest problem was chipageddon, actually. (Which is on-topic, so, there!)
What matters for the heavy streaming loads is exactly what the distribution network looks like.
The transit/backbone side of things is a much smaller consideration, with the proviso that the ISP is big enough for the streamers (Netflix et al.) to take notice of. If you're big enough to be interesting, the streamers will actually co-locate caching/distribution boxes on your network, taking a lot of load off transit and peering circuits. This started with Inktomi back in the 1990s, who would put a CDN node in any ISP that would have them. Inktomi are gone and forgotten, but I still have several really good nylon shoulder bags with Inktomi logos embroidered on them that lasted longer than the company did. For the really big telcos, the likes of Apple and Google will also co-locate boxes to terminate traffic within the telco/ISP network as far as possible rather than backhauling it to their own data centres.
Where it gets hairy is what the distribution network looks like. ADSL and VDSL networks, when they were built out, had fairly high contention ratios where the ADSL/VDSL modems met the backhaul to the core networks. BT's original consumer ADSL2 network typically had a 40:1 contention ratio at the exchange->backhaul interface, and their older ADSL1 network was 100:1 - 100 customers all sharing a single 2Mbit E1 backhaul.
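Those contention ratios make for grim arithmetic. A quick back-of-envelope sketch (the figures are the ones quoted above; an E1 runs at 2.048 Mbit/s):

```python
# Worst-case per-customer bandwidth under the contention ratios above.
E1_KBIT = 2048  # an E1 circuit is 2.048 Mbit/s

def worst_case_per_customer_kbit(backhaul_kbit, contention_ratio):
    """Bandwidth left per customer if everyone transmits at once."""
    return backhaul_kbit / contention_ratio

# 100:1 on a single E1: roughly 20 kbit/s each if everyone is active
print(worst_case_per_customer_kbit(E1_KBIT, 100))  # ~20.5 kbit/s
# 40:1 ADSL2-era contention: ~51 kbit/s each at full contention
print(worst_case_per_customer_kbit(E1_KBIT, 40))   # ~51.2 kbit/s
```

In practice it was never quite that bad, because not everyone is active at once - which is exactly the statistical-multiplexing bet described below.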
Things have got much better now that fibre to the cabinet is the typical lowest common denominator; with a cabinet having fibre backhaul, it's been easy to upgrade them. One street cabinet will serve perhaps 50-100 end customers, and 10G or 40G backhaul is now common. Add in some statistical multiplexing/diversity and 10G will serve the actual demand from 100 customers quite nicely: they each get a nominal 100Mbit chunk and can burst to 300Mbit without causing trouble. 40G will suffice to serve PON at 1Gbit for a while, but if 1G becomes common, don't be surprised if things slow down for a while, because I suspect that might need some 'fork-lift upgrades' - 40G was the fastest that I think most telcos' cabinet switches of the last generation were capable of.
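The statistical-multiplexing bet can be sketched with a simple on/off model: assume each customer independently bursts to full rate some fraction of the time, and ask how often total demand exceeds the backhaul. The 20% activity figure below is a made-up illustration, not a measured number:

```python
from math import comb

def p_overload(n_customers, p_active, burst_mbit, capacity_mbit):
    """P(total demand > capacity) if each customer independently bursts
    to burst_mbit with probability p_active (binomial on/off model)."""
    max_ok = int(capacity_mbit // burst_mbit)  # bursters that fit at once
    p_ok = sum(comb(n_customers, k)
               * p_active**k * (1 - p_active)**(n_customers - k)
               for k in range(max_ok + 1))
    return 1 - p_ok

# 100 customers, each bursting to 300 Mbit 20% of the time, on 10G backhaul:
# only 33 simultaneous bursters fit, but exceeding that is very unlikely.
print(p_overload(100, 0.2, 300, 10_000))
```

With those (assumed) numbers the overload probability comes out well under 1%, which is why 10G comfortably serves 100 customers who can each burst to 300Mbit.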
Back on the old ADSL distribution systems the aggregation routers at the exchanges were about as dumb as they could be and still do their jobs; any quality-of-service processing was a pipedream. The newer FTTC cabinet-located gear looks more like software-defined networking switches and is even capable of inspecting packets inside L2TP tunnels and doing useful quality-of-service processing: keeping latency down for real-time traffic like video conferencing while still providing useful burst bandwidth at higher latency for streaming and the like. It was also a prerequisite for switching fixed-line telephony from copper to VoIP; no way were they going to risk that without the ability to control the latency of their own telephone traffic.
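That kind of QoS processing can be sketched as a toy strict-priority scheduler. Real gear does something far more sophisticated (weighted queues, policers, shapers), but the principle - real-time traffic always drains first, so its latency stays bounded - is the same:

```python
from collections import deque

class PriorityScheduler:
    """Toy strict-priority scheduler: real-time packets (VoIP, video
    conferencing) always drain before bulk streaming traffic."""

    def __init__(self):
        self.realtime = deque()
        self.bulk = deque()

    def enqueue(self, packet, realtime=False):
        (self.realtime if realtime else self.bulk).append(packet)

    def dequeue(self):
        # Bulk traffic only gets the link when the real-time queue is empty.
        if self.realtime:
            return self.realtime.popleft()
        if self.bulk:
            return self.bulk.popleft()
        return None

s = PriorityScheduler()
s.enqueue("netflix-1")
s.enqueue("voip-1", realtime=True)
s.enqueue("netflix-2")
print([s.dequeue() for _ in range(3)])  # voip-1 jumps the queue
```

The streaming packets still get through - they just wait slightly longer, which a playout buffer absorbs and a phone call cannot.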
It's the older rural and rural-ish stuff, and low-income neighbourhoods, that cause headaches, and I suspect that Vince may fall into this hole. The ISPs don't want to go to the expense of the fibre rollouts that 'modern' broadband needs when a fibre pull to a cabinet may only be serving a handful of dwellings - lots of capital expenditure for a handful of customers. So you get what Vince's situation sounds like: still on copper for 5km or more to reach a point where enough customers' circuits can be concentrated that they deem it economic to pull fibre to.
Of course the real solution to streaming congestion existed a long time ago, native multicast. At the LINX we ran an experimental native multicast peering exchange and we even had an early version of BBC iPlayer running over it. But although it worked well enough we never managed to attract critical mass to it.
With native multicast your local machine tells its upstream router that it wants to subscribe to a particular stream, that router tells the router upstream from it and so on. So if five households on the same street cabinet want to watch the same stream only one copy of that stream goes to the cabinet, and then five copies of that stream are sent onward - four streams worth of bandwidth on the cabinet's backhaul are freed up compared to the current situation of five whole identical streams going all the way through the system. The same savings can be made at each level of router hierarchy.
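The savings are easy to put numbers on. A toy model of one level of cabinet hierarchy plus the core link feeding it (the cabinet viewer counts are made up):

```python
def copies_on_backhaul(viewers_per_cabinet, multicast):
    """Copies of one stream crossing each cabinet's backhaul link,
    and crossing the core link that feeds all the cabinets."""
    per_cabinet = [min(v, 1) if multicast else v for v in viewers_per_cabinet]
    core = min(sum(per_cabinet), 1) if multicast else sum(per_cabinet)
    return per_cabinet, core

cabinets = [5, 3, 8]  # households per cabinet watching the same stream
print(copies_on_backhaul(cabinets, multicast=False))  # ([5, 3, 8], 16)
print(copies_on_backhaul(cabinets, multicast=True))   # ([1, 1, 1], 1)
```

Unicast pushes sixteen identical copies through the core and up to eight down a single cabinet backhaul; multicast replicates only at the last router before the fan-out, so every shared link carries exactly one copy.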
Something very similar is how the streamers concentrate traffic at the ISP if they have a co-located server there. One copy is hauled into the ISP over transit or peering, and multiple copies are forwarded on to the consumers - but without the benefit of multicast at the intermediate stages of the ISP network, so hundreds, thousands or tens of thousands of copies of the same stream may all be being pushed side by side through the ISP's distribution network. If you've ever started streaming something and it has taken 15 seconds to start, that's the traffic-concentrating box waiting for a slot in the hope that another customer who also wants to watch it comes along, so it can haul one stream for the two of you back from wherever it starts from. (There are also technical reasons why, even if it already has the stream, it pays to play it out simultaneously to as many customers as possible - avoiding disk reads, buffer copies etc.)
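That "waiting for a slot" behaviour can be sketched as simple request coalescing - this is a guess at the mechanism from the symptom described, not how any particular streamer actually implements it. Requests for the same title arriving within a short window (15 seconds, the delay quoted above) share one upstream fetch:

```python
def coalesce(requests, window=15):
    """Count upstream fetches needed for (timestamp, title) requests when
    requests for the same title within `window` seconds share one fetch."""
    open_windows = {}  # title -> timestamp when its current window opened
    upstream = 0
    for ts, title in sorted(requests):
        if title not in open_windows or ts - open_windows[title] > window:
            open_windows[title] = ts  # open a fresh window
            upstream += 1             # one new fetch from the origin
    return upstream

reqs = [(0, "show-a"), (5, "show-a"), (30, "show-a"), (2, "show-b")]
print(coalesce(reqs))  # 3 upstream fetches instead of 4
```

The request at t=5 piggybacks on the fetch opened at t=0; the one at t=30 arrives after the window has closed and triggers a fresh fetch.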