Human intelligence scares me as well. To me it sounds kind of cynical when people say things like "it's scary when AIs have to decide over life-and-death situations." Just think about the biggest idiots you know and how you'd feel about them being in charge of a life-and-death situation. We can put failsafes in AI; we can't put failsafes in human minds (yet). What I will watch closely is how jurisdictions handle things like autonomous programs and more complex AI, because that's where we need the basic legal foundation. The actual implementation matters as well, but that part is fairly obvious. Besides, a lot of things are already controlled by some kind of automatic system in areas like aircraft, machinery, etc.