Why OpenAI is a bad, bad idea

Or maybe it's that all the available ideas are bad and it's the best of them?

I don't know, but it's been baffling to me that Elon Musk, who (correctly) thinks AI is potentially more dangerous than nuclear weapons, went on to start the OpenAI project.

When Google Maps tells people to drive off cliffs, Google quietly patches the program. AIs that are more powerful than us may not need to accept our patches, and may actively take action to prevent us from patching them. If an alien species showed up in their UFOs, said that they’d created us but made a mistake and actually we were supposed to eat our children, and asked us to line up so they could insert the functioning child-eating gene in us, we would probably go all Independence Day on them; because of computers’ more goal-directed architecture, they would if anything be more willing to fight such changes.

Should AI Be Open?

  1. I'm the Netscape navigator in book form

  2. I believe that you are being a bit anthropomorphic here. Man, knowing Man, fears the AI will be a digital man. While I agree, there is still some benefit of the doubt… I hope. It's happening, so we should attempt to add a heart to its digital brain… while we still can.

  3. Interesting take with some good points. Personally, I think a containment strategy for information isn't as effective an approach as it used to be. Barriers to entry in the field may also have more negative impact on good actors than on bad ones. That's the hope, at least…
