I don’t quite understand Goertzel’s position on the “big scary idea”. He appears to accept that
“(2) if human-level AI is created, there is a good chance vastly superhuman AI will follow via an “intelligence explosion,” and that (3) an uncontrolled intelligence explosion could destroy everything we value, but a controlled intelligence explosion would benefit humanity enormously if we can achieve it.”
and even goes so far as to say that (3) is “almost obvious”.
Does he believe that he understands the issues well enough that he can be almost certain that his particular model for AI will trigger the “good” kind of intelligence explosion?
Or does he accept that there’s a significant probability this project might “destroy everything we value” but not understand why anyone might be alarmed at this?
Or does he think that someone is going to make a human-level AI anyway, and that his approach has the best chance of creating a good intelligence explosion instead of a bad one?
Or something else which doesn’t constitute a big scary idea?
(btw I’m not entirely sold on this particular way of framing the argument, just trying to understand what Goertzel is actually saying)