Nick Bostrom’s TED talk on Superintelligence is now online
http://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are
Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Nick Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?
I realize this might go into a post in a media thread, rather than its own topic, but it seems big enough, and likely-to-prompt-discussion enough, to have its own thread.
I liked the talk, although it was less polished than TED talks often are. What was missing, I think, was any indication of how to solve the problem. He could come across as just an ivory-tower philosopher speculating about something that might become a problem one day: apart from mentioning at the beginning that he works with mathematicians and IT guys, he gives no impression that this problem is already being actively worked on.
This is my first comment on LessWrong.
I just wrote a post replying to part of Bostrom’s talk, but apparently I need 20 Karma points to post it, so… let it be a long comment instead:
Bostrom should modify his standard reply to the common “We’d just shut off / contain the AI” claim
In Superintelligence author Prof. Nick Bostrom’s most recent TED Talk, What happens when our computers get smarter than we are?, he spends over two minutes replying to the common claim that we could simply shut off an AI, or preemptively contain it in a box to keep it from doing bad things we don’t like, and that there is therefore no need to be too concerned about the possible future development of AI with misconceived or poorly specified goals:
If I recall correctly, Bostrom has replied to this claim in this manner in several of the talks he has given. While what he says is correct, I think that there is a more important point he should also be making when replying to this claim.
The point is that even if containing an AI in a box so that it could not escape and cause damage was somehow feasible, it would still be incredibly important for us to determine how to create AI that shares our interests and values (friendly AI). And we would still have great reason to be concerned about the creation of unfriendly AI. This is because other people, such as terrorists, could still create an unfriendly AI and intentionally release it into the world to wreak havoc and potentially cause an existential catastrophe.
The idea that we need not worry much about figuring out how to make AI friendly, because we could always contain the AI in a box until we knew it was safe to release, is confused — but not primarily because we couldn’t actually contain it in the box. It is confused because the primary reason to figure out quickly how to make a friendly AI is so that we can make a friendly AI before anyone else makes an unfriendly one.
In his TED Talk, Bostrom continues:
Bostrom could have strengthened his argument for the position that there is no way around this difficult problem by stating my point above.
That is, he could have pointed out that even if we somehow developed a reliable way to keep a superintelligent genie locked up in its bottle forever, this still would not allow us to avoid having to solve the difficult problem of creating friendly AI with human values, since there would still be a high risk that other people in the world with not-so-good intentions would eventually develop an unfriendly AI and intentionally release it upon the world, or simply not exercise the caution necessary to keep it contained.
Once the technology to make superintelligent AI is developed, good people will be pressured to create friendly AI and let it take control of the future of the world ASAP. The longer they wait, the greater the risk that not-so-good people will develop AI that isn’t specifically designed to have human values. This is why solving the value alignment problem soon is so important.
I’m not sure your argument proves your claim. I think what you’ve shown is that there exist reasons other than the inability to create perfect boxes to care about the value alignment problem.
We can flip your argument around and apply it to your claim: imagine a world where there was only one team with the ability to make superintelligent AI. I would argue that it would still be extremely unsafe to build an AI and try to box it. But I don’t think this lets me conclude that the lack of boxing ability is the true reason the value alignment problem is so important.
I agree that there are several reasons why solving the value alignment problem is important.
Note that when I said Bostrom should “modify” his reply, I didn’t mean he should make a different point instead of the one he made; I meant he should make another point in addition to it. As I said:
Ah, I see. Fair enough!
I thought it was excellent, and not at all too ivory tower, although he moved through more inferential steps than in the average TED talk.
I thoroughly enjoyed it and think it was really well done. I can’t perfectly judge how accessible it would be to those unfamiliar with x-risk mitigation and AI, but I think it was pretty good in that respect and did a good job of justifying the value alignment problem without seeming threatening.
I like how he made sure to position the people working on the value alignment problem as separate from those actually developing the potentially-awesome-but-potentially-world-ending AI, so the audience has no reason to withhold support for what he’s doing. I just hope the implicit framing of superintelligent AI as an inevitability, not a possibility, isn’t so much of an inferential leap that it takes people out of reality-mode and into fantasy-mode.
I wouldn’t have been able to guess the date this speech was given. The major outline seems 10 years old.
Is that a problem? Reiterating the basics is always a useful thing, and he didn’t have much more time after doing so.
Excellent layout of the real problem: the control mechanism, rather than the creation of AI itself.
I started this thread on r/ethereum, where AI may be recreated first:
http://www.reddit.com/r/ethereum/comments/3430pz/an_ethereum_enabled_possibility_what_happens_when/
Bostrom is a crypto-creationist “philosopher” with farcical arguments in favor of Abrahamic mythology and neo-Luddism. People are giving too much credit to lunatics who promote AI eschatology. Please do not listen to schizophrenics like Bostrom. The whole “academic career” of Bostrom may be summarized as “non-solutions to non-problems”. I have never seen a less useful thinker. He could not be more wrong! I sometimes think that philosophy departments should be shut down, if this is the kind of ignorance they breed.
It’s quite ironic that Bostrom is talking about superintelligence, by the way. How would he imagine what such intelligent entities think?