You seem to be assigning probabilities to this—as though it is a well-defined idea—but what is it supposed to mean?
I know (I don’t), but since I asked Rain to assign probabilities to it, I felt I had to state my own as well. I asked him to do so because I had read some people arguing in favor of making probability estimates, of actually naming a number. But since I haven’t come across much analysis that actually states numbers, I thought I’d ask a donor who contributed the current balance of his bank account.
Well, bypassing the issue of FOOMingness, I am pretty sure that machine intelligence represents an upcoming issue that could go better or worse than average—and which humanity should try to steer in a positive direction—x-risk or no.
My concerns about the SIAI are mostly about their competence. It seems rather easy for me to imagine another organisation in the SIAI’s niche doing a much better job. Are 63 chapters of a Harry Potter fanfic really helping, for instance?
Also, if they think using fear of THE END OF THE WORLD is a good way to stimulate donations, I would be very interested to see information about the effect on society of such marketing. Will it produce a culture of fear? What about the risks of caution?
My general impression is that spreading the DOOM virus around is rarely very constructive. It may well be actively harmful. In financial markets, prophesying market crashes may actually help make them happen, since the whole system works like a big rumour mill—and if a crash is coming, it makes sense to cash in and buy gold—and if everyone does that, the crash happens. A case of self-fulfilling prophecy. The prophet may look smug—but if only they had kept their mouth shut!
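For what it’s worth, the rumour-mill mechanism is easy to make concrete. Below is a toy threshold-cascade sketch in Python (a standard Granovetter-style model, not anything anyone in this thread has proposed; every parameter is invented for illustration): each trader sells once enough others have sold, and a sufficiently loud prophecy tips the market from calm into a crash.

```python
import random

def crash_cascade(seed_sellers, n=10_000, rounds=50):
    """Fraction of traders who end up selling, when each trader sells
    once the selling fraction exceeds their personal panic threshold."""
    random.seed(0)  # reproducible toy run
    thresholds = [random.gauss(0.35, 0.12) for _ in range(n)]
    frac = seed_sellers  # share frightened into selling by the prophecy
    for _ in range(rounds):
        frac = sum(t < frac for t in thresholds) / n
    return frac

print(crash_cascade(0.05))  # quiet scare: the cascade dies out (near 0)
print(crash_cascade(0.30))  # loud prophecy: almost everyone sells (near 1)
```

Same market, same fundamentals, two outcomes—the only difference is how many people the prophet frightened at the start.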
Have the DOOM merchants looked into this kind of thing? Where are their reassurances that prophesying DOOM—and separating passing punters from their cash in the process—is a harmless pastime, with no side effects?
My concerns about the SIAI are mostly about their competence. It seems rather easy for me to imagine another organisation in the SIAI’s niche doing a much better job. Are 63 chapters of a Harry Potter fanfic really helping, for instance?
That isn’t an SIAI thing; that’s Eliezer’s thing. But if you really want to know, anecdotal evidence suggests that HPMOR is helping raise the general sanity waterline. Not only has it made more people interested in LW in general, I can personally attest to it helping modify irrational beliefs that friends of mine have held.
(Also, Tim, I know you are very fond of capitalizing “DOOM” and certain other phrases, but the rest of us find it distracting and disruptive. Could you please consider not doing it here?)
Also, if they think using fear of THE END OF THE WORLD is a good way to stimulate donations, I would be very interested to see information about the effect on society of such marketing. Will it produce a culture of fear? What about the risks of caution?
I’m not sure why you think they think that doomsday predictions are a good way to stimulate donations. They are simply being honest about their goals. Empirically, existential risk is not a great motivator for getting money. Look for example at how much trouble people concerned with asteroid impacts have getting money (although now that the WISE survey is complete, we’re in much better shape in understanding and handling that risk).
My general impression is that spreading the DOOM virus around is rarely very constructive. It may well be actively harmful.
So should people not say what they are honestly thinking?
In financial markets, prophesying market crashes may actually help make them happen, since the whole system works like a big rumour mill—and if a crash is coming, it makes sense to cash in and buy gold—and if everyone does that, the crash happens. A case of self-fulfilling prophecy. The prophet may look smug—but if only they had kept their mouth shut!
Yes, that can happen in markets. What is the analogy here? Is there a situation where simply talking about the risk of unFriendly AI will somehow make unFriendly AI more likely? (And note, improbable decision-theory basilisks don’t count.)
Have the DOOM merchants looked into this kind of thing? Where are their reassurances that prophesying DOOM—and separating passing punters from their cash in the process—is a harmless pastime, with no side effects?
If your standard is that they have to be clear there are no side effects, that’s a pretty high standard. How certain do they need to be? To return to the asteroid example: thanks to the WISE mission, we are now tracking about 95% of all asteroids that could pose an extinction threat if they impacted, and a much higher percentage of those in severely threatening orbits. But whenever we spend money on anything else, we might be missing that small remaining percentage. We’ll feel really stupid if our donations to any cause turn out not to matter because we missed one more asteroid; if a big asteroid hits the Earth tomorrow, we’ll feel really dumb. By the same token, we’ll feel really stupid if tomorrow someone makes an approximation of AIXI devoted to playing WoW that goes foom, and the fact that we have the asteroids charted won’t make any difference. No matter how good an estimate we make, there’s a chance we’ll be wrong. And no matter what happens, there are side effects, if only because we have a finite set of resources: the more we talk about any one issue, the less we focus on the others. And yes, obviously, if fooming turns out not to be an issue, there will have been negative side effects. So where is the line?
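Since “where is the line?” is really an allocation question, here is a back-of-the-envelope sketch of it. Everything in it is invented for illustration: the probabilities, the damages, and the assumption that mitigation success scales linearly with spending are not estimates from anyone in this thread.

```python
def expected_loss(p_event, damage, p_mitigation_works, spend_fraction):
    """Expected loss from one risk, under the toy assumption that the
    chance the mitigation works scales linearly with its budget share."""
    return p_event * damage * (1 - p_mitigation_works * spend_fraction)

# Risk A: rare but catastrophic (asteroid-style). Risk B: likelier,
# smaller, and more tractable. All numbers are made up.
for s in (0.0, 0.25, 0.5, 0.75, 1.0):  # fraction of budget spent on risk A
    total = (expected_loss(1e-4, 1.0, 0.9, s)
             + expected_loss(1e-2, 0.1, 0.3, 1 - s))
    print(f"{s:.2f} of budget on A -> expected loss {total:.6f}")
```

With these made-up inputs, expected loss is lowest with everything spent on the likelier, more tractable risk, even though the other is catastrophic; nudge the inputs and the answer flips. Which is the point: where the line falls is hostage to exactly the probability estimates being argued over above.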
if you really want to know, anecdotal evidence suggests that HPMOR is helping raise the general sanity waterline
I haven’t looked—but it seems to be pretty amazing behaviour to me.
I’m not sure why you think they think that doomsday predictions are a good way to stimulate donations.
Using threats of the apocalypse is an ancient method, used by religions and cults for centuries.
Look for example at how much trouble people concerned with asteroid impacts have getting money
Their smallish p(DOOM) values probably don’t help too much.
My general impression is that spreading the DOOM virus around is rarely very constructive. It may well be actively harmful.
So should people not say what they are honestly thinking?
It is up to the people involved if they want to dabble in harmful self-fulfilling prophecies. Maybe society should reward them less and ignore them more, though. I figure that if we study the DOOM merchants more scientifically, we will have a better understanding of the risks and problems they cause—and of what we should do about them.
Most people already have a pretty high barrier against END OF THE WORLD schemes. It is such an obvious and well-worn routine. However, it appears that not everyone has been immunised.
What is the analogy here? Is there a situation where simply talking about the risk of unFriendly AI will somehow make unFriendly AI more likely?
Ideally, DOOM SOON should sharpen our wits, and make us more vigilant and secure. However, the opposite response seems quite likely: DOOM SOON might make people feel despair, apathy, helplessness, futility and depression. Those things could then go on to cause a variety of problems. Most of them are not to do with intelligent machines—though the one I already mentioned does involve them.
Have the DOOM merchants looked into this kind of thing? Where are their reassurances that prophesying DOOM—and separating passing punters from their cash in the process—is a harmless pastime, with no side effects?
If your standard is that they have to be clear there are no side effects, that’s a pretty high standard.
Sure. Doing more good than harm would be a nice start. I don’t know in detail what the side effects of DOOM-mongering are, so it is hard to judge their scale, beyond the obvious financial losses among those involved. Probably the most visible behaviour of the afflicted individuals is that they start flapping their hands and going on about DOOM, spreading the meme after being infected by it. To what extent this affects their relationships, work, etc. is not entirely clear. I would be interested in finding out, though.