I can't explain it either. It's not something I've seen before, so many drives failing at once. I have witnessed several times that, during a rebuild of an array, another drive fails, caused by the heavy load on the drives during the rebuild (depending on the type of array, that is).
A virus seems unlikely. Data corruption depends on what kind of data corruption. If a drive is faulty due to bad blocks, for example, then it's as fzabkar already pointed out: the array will be in degraded mode. Sure, in a RAID 6 array, when more than 2 drives fail, the whole array fails (however, the remaining disks themselves should still be OK). In this case, the storage pools are used for storing backups. So if a file is corrupted and is written to disk, that won't harm the disk itself (but the question then is: how did the file get corrupted in the first place?)
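Just to spell out the RAID 6 failure math, here is a quick sketch (a hypothetical helper for illustration, not any vendor's actual API): double parity tolerates up to 2 lost drives, anything beyond that kills the array as a whole.

```python
def raid6_state(failed_drives: int) -> str:
    """Classify a RAID 6 array's state by the number of failed drives.

    RAID 6 uses double parity, so it survives up to 2 simultaneous
    drive failures. Hypothetical helper, for illustration only.
    """
    if failed_drives == 0:
        return "healthy"
    if failed_drives <= 2:
        return "degraded"  # data still readable, rebuild still possible
    # More than 2 failures: the array is lost, even though the
    # surviving disks themselves may be perfectly fine.
    return "failed"

print(raid6_state(1))  # degraded
print(raid6_state(3))  # failed
```

Note the asymmetry this illustrates: "array failed" is a statement about the parity math, not about the health of the remaining member disks.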
A spike on a voltage rail, or a dip in a voltage rail, could cause some strange behaviour. But from what I now understand from fzabkar, this is not so easy, since protection components prevent that, at least at the disk level. But what happens if a voltage rail has a large dip, or the voltage rail is very noisy? That would then have to be on the main voltage rail, I guess, which points to the power supplies. In this case, the power supplies are redundant. And if there were something wrong with those, I would expect to see an alert that a power supply failed (or both), in which case the cause would be very obvious.