Any computational system carries the biases of its creators, and our current approaches to reward systems are similarly limited. Asimov's Three Laws are a nice example: simple, well-thought-out (for their time) rules the systems had to follow, and in most of the stories built on them, the AIs figure out how to cheat that system through conditions that were not planned for.
It's very common in modern trained systems that small changes can cause them to misbehave, and during training they go through a number of revisions while the developers lock them down so they cannot cheat their way out of the situation.
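The cheating pattern above can be sketched in a few lines. This is a toy made up purely for illustration (the reward function, policies, and item names are all invented, not any real training setup): the reward pays per deposit event but never checks whether the item is new, so a policy that re-deposits the same item outscores the honest one.

```python
def reward(deposits):
    """Naive reward: 1 point per deposit event; uniqueness is never checked."""
    return len(deposits)

def honest_policy(items, steps):
    # Deposit each distinct item once, then stop.
    return items[:steps]

def cheating_policy(items, steps):
    # Exploit the loophole: re-deposit the first item on every step.
    return [items[0]] * steps

items = ["bottle_a", "bottle_b", "bottle_c"]
honest_score = reward(honest_policy(items, 10))   # capped at 3 distinct items
cheat_score = reward(cheating_policy(items, 10))  # 10 deposits of one item
print(honest_score, cheat_score)
```

The "fix" is a revision to the reward function (track which items were already deposited), and then the next loophole gets found, which is exactly the patch cycle described above.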
The best way I can put it: we have the capacity to build tools to cross-check and vet this stuff, but it hasn't yet passed the time-spent vs time-saved trade-off. When your 5-minute patch takes a room of servers 3 months to prove correct, nobody will go near it.
Can we use computational analysis to dig out data that helps humanity as a whole? Absolutely. The issue is that most of the time you have to know what you're looking for before you can start finding it, and that's tied in with how trustworthy a source of data is.
People have run analyses of the side effects of drug combinations just by trawling Google search history patterns. It's helpful, but again, unless you knew to look for something like that, I doubt it would have been found.
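A hedged sketch of what that kind of analysis looks like at its simplest: count how often a symptom term co-occurs in the same user's searches with a pair of drugs, versus with either drug alone. Everything here is synthetic (the sessions, drug names, and symptom are invented), and real studies use far more careful statistics.

```python
# Each entry: the set of terms one user searched over some window.
sessions = [
    {"drug_x", "drug_y", "dizziness"},
    {"drug_x", "drug_y", "dizziness", "recipes"},
    {"drug_x", "headache"},
    {"drug_y", "news"},
    {"drug_x", "drug_y", "dizziness"},
    {"drug_z", "dizziness"},
]

def symptom_rate(required, symptom):
    """Fraction of sessions containing all `required` terms that also mention `symptom`."""
    hits = [s for s in sessions if required <= s]
    if not hits:
        return 0.0
    return sum(symptom in s for s in hits) / len(hits)

# The drug pair together shows a higher symptom rate than either drug alone.
pair_rate = symptom_rate({"drug_x", "drug_y"}, "dizziness")
solo_rate = max(symptom_rate({"drug_x"}, "dizziness"),
                symptom_rate({"drug_y"}, "dizziness"))
print(pair_rate, solo_rate)
```

The point the toy makes is the one above: the signal only shows up because someone framed the question as "this drug pair, this symptom" before querying the data, so the comparison has to exist before the data can answer it.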