I don't know, but it's been baffling to me that Elon Musk, who (correctly) thinks AI is potentially more dangerous than nuclear weapons, has gone on to start the OpenAI project.
When Google Maps tells people to drive off cliffs, Google quietly patches the program. AIs that are more powerful than us may not need to accept our patches, and may actively take action to prevent us from patching them. If an alien species showed up in their UFOs, said that they'd created us but made a mistake and actually we were supposed to eat our children, and asked us to line up so they could insert the functioning child-eating gene in us, we would probably go all Independence Day on them; AIs, given their more goal-directed architecture, would if anything be more willing to fight such changes.
Should AI Be Open?
I. H.G. Wells’ 1914 sci-fi book The World Set Free did a pretty good job predicting nuclear weapons:They did not see it until the atomic bombs burst in their fumbling hands…before the l…