Re machine translation: there is a lot of effort involved in creating a suitable translation model.
And this effort does not come for free.
Also, there are three major classes of MT: rule-based, statistical, and neural. They all have their advantages and disadvantages.
My guess is that neither Farnell nor Mouser put a lot of effort into creating the proper custom dictionaries, which are a prerequisite for proper training. They would also need a lot of so-called in-domain data to translate their catalogs appropriately. In-domain meaning: in the language domain of the subject matter, e.g. electronics, electrical engineering, and so on.
"A lot" means: even if you have a proper general-purpose model, you will need to inject your own training data and train it for several generations. The training data needs to be at least 100,000 aligned segments (a segment is a phrase at minimum, better a whole sentence, and it needs to be aligned, i.e. word mapped to word or expression).
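To make "aligned segment" concrete, here is a minimal sketch in Python; the tab-separated format, the file name and the example sentences are my own assumptions for illustration, not anyone's actual corpus:

    # Illustrative only: real parallel corpora are often stored as TMX or as
    # Moses-style parallel text files. This just shows the source<->target pairing.

    # Each segment is one source sentence aligned to one target sentence.
    segments = [
        ("Ceramic capacitor, 100 nF, 50 V, X7R",
         "Condensateur céramique, 100 nF, 50 V, X7R"),
        ("Operating temperature range: -40 °C to +85 °C",
         "Plage de température de fonctionnement : -40 °C à +85 °C"),
    ]

    # Write them as tab-separated source/target pairs, one segment per line.
    with open("en-fr.electronics.tsv", "w", encoding="utf-8") as f:
        for source, target in segments:
            f.write(f"{source}\t{target}\n")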
100 k segments are the bare minimum; we recommend 1 M and above. To give you an example: for English to French we have a segment pool of about 275 M segments, pull a random 10 M from it to create the general-purpose corpus, and train it for at least six generations, injecting fresh segments each time.
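Pulling a random subset out of a pool that size is the easy part; below is a rough sketch using reservoir sampling, so the full pool never has to fit in memory (file names and sizes are placeholders, not our actual setup):

    import random

    POOL_FILE = "en-fr.pool.tsv"   # hypothetical pool of aligned segments, one per line
    SAMPLE_SIZE = 10_000_000       # segments to pull for the general-purpose corpus

    # Reservoir sampling (Algorithm R): one pass over the pool, uniform random
    # sample of fixed size. The reservoir itself (10 M lines) stays in memory,
    # the 275 M-line pool does not.
    reservoir = []
    with open(POOL_FILE, encoding="utf-8") as f:
        for i, line in enumerate(f):
            if i < SAMPLE_SIZE:
                reservoir.append(line)
            else:
                j = random.randrange(i + 1)
                if j < SAMPLE_SIZE:
                    reservoir[j] = line

    with open("en-fr.general.tsv", "w", encoding="utf-8") as out:
        out.writelines(reservoir)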
You would repeat this for specialized language models, and you may want to book your GPU cluster for the next couple of weeks.
Actually, I am looking forward to Ampere to boost this a bit, and I am also looking at running CNNs on Alveo for this specific topic.
Point is: we currently spend a nice sum plus two system engineers on our machine translation system. We are doing a large five-figure number of translations a day, and we are receiving good feedback, up to the point where a country director picks up the phone and calls me directly if it is not running. And we are asking for more support in the form of computational linguists to help us with adaptations.
I doubt that Mouser and Farnell are this committed.
If they are, and this is the outcome, they need to start over.