The problem is that it’s in a movie and smart people are therefore liable not to take it seriously.
Global warming and asteroid impacts are also in movies, specifically in disaster movies which, by genre convention, are scientifically inaccurate and transparently exaggerate the risks they portray for the sake of drama and action sequences.
And yet, smart people haven’t stopped taking these risks seriously.
I think it’s the other way round: AIs going rogue and wreaking havoc are a staple of science fiction. Pretty much every sci-fi franchise featuring AIs that I can think of makes use of that trope sooner or later. Skynet is the prototypical example of the UFAI that MIRI worries about.
So we have a group of sci-fi geeks with little or no actual expertise in AI research or related topics who obsess over a risk that occurs over and over in sci-fi stories. Uhm, I wonder where they got the idea from.
Meanwhile, domain experts, who are generally also sci-fi geeks and übernerds but have a track record of actual achievements, acknowledge that the safety risks may exist, but think that extreme apocalyptic scenarios are improbable and that standard safety engineering principles are probably enough to deal with realistic failure modes, at least at present and foreseeable technological levels.

Which group is more likely to be correct?

I find myself wanting to make two replies.
Yup, you may well be right: maybe the MIRI folks have the fears they do because they’ve watched too many science-fiction movies.
Look at what just happened: a very smart person (I assume you are very smart; I haven’t made any particular effort to check) observed that MIRI’s concern looks like it stepped out of a science-fiction movie, used that observation as part of an argument for dismissing that concern, and did so without any actual analysis of the alleged dangers or the alleged ways of protecting against them. Bonus points for terms like “extreme” and “apocalyptic”, which serve to label something as implausible simply on the grounds that it sounds, well, extreme.
The heuristic you’ve used here isn’t a bad one—which is part of why very smart people use it. And, as I say, it may well be correct in this instance. But it seems to me that your ability to say all those things, and their plausibility, their nod-along-wisely-ness, is pretty much independent of whether, on close examination, MIRI’s concerns turn out to be crazy paranoid sci-fi-geek silliness, or carefully analysed real danger.
Which illustrates the fact that, as I said before,
someone intelligent and lucky might well think of the argument, but then dismiss it because it feels silly on account of resembling “OMG if we build an AI it’ll turn into Skynet and we’ll all die”.
and the fact that the argument could be right despite their doing so.
As I wrote in the first part of my previous comment, the fact that some risk is portrayed in Hollywood movies, in the typical overblown and scientifically inaccurate way Hollywood movies are done, is not enough to drive respectable scientists away.
As for MIRI, well, it’s certainly possible that a group of geeks without relevant domain expertise get an idea from sci-fi that experts don’t take very seriously, start thinking very hard about it, and then come up with some strong arguments for it that had somehow eluded the experts so far. It’s possible, but it’s not likely. But since any reasonable prior can be overcome by evidence (or arguments, in this case), I would change my beliefs if MIRI presented a compelling argument for their case. So far, I’ve seen lots of appeals to emotion (“it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us.”) but no technical arguments: the best they have seems to be a rehashing of Good’s recursive self-improvement argument from 50 years ago (which might have made intuitive sense back then, in the paleolithic era of computer science, but is unsubstantiated and frankly hopelessly naive in the face of modern theoretical and empirical knowledge), coupled with highly optimistic estimates of the actual power that intelligence entails.
Then there is a second question: even assuming that MIRI isn’t tilting at windmills, so that the AI risk is real and experts underestimate it, is MIRI doing any good about it? Keep in mind that MIRI solicits donations (“I would be asking for more people to make as much money as possible if they’re the sorts of people who can make a lot of money and can donate a substantial amount fraction, never mind all the minimal living expenses, to the Singularity Institute[MIRI].”). Does a dollar donated to MIRI decrease the AI risk, increase it, or have a negligible effect? MIRI won’t reveal the details of what they are working on, claiming that if somebody used the results of their research unwisely it could hasten the AI apocalypse, which means that even they think they are playing with fire. And in fact, from what they let out, their general plan is to build a provably “friendly” (safe) super-intelligent AI. The history of engineering is littered with “provably” safe or secure designs that failed miserably, so this doesn’t seem an especially promising approach.
When estimating the utility of MIRI’s work, and therefore the utility of donating money to them, or of having tech companies spend time and effort interacting with them, evaluating their expertise becomes paramount, since we can’t directly evaluate their research, particularly because it is deliberately concealed. The fact that they have no track record of relevant achievements, and that they might well have taken their ideas from sci-fi, is certainly not evidence in favour of their expertise.
For the avoidance of doubt, I am not arguing that MIRI’s fears about unfriendly AI are right (nor that they aren’t); I’m just saying why it’s somewhat credible for them to think that someone clever enough to make an AGI might still not appreciate the dangers.