To me, $200,000 for a charity seems to be pretty much the smallest possible amount of money. Can you find any charitable causes that receive less than this?
Basically, you are saying that SIAI DOOM fearmongering is a trick to make money. But really, it fails to satisfy several important criteria:
it is shit at actually making money. I bet you that there are “save the earthworm” charities that make more money.
it is not actually frightening. I am not frightened; quick painless death in 50 years? boo-hoo. Whatever.
it is not optimized for believability. In fact it is almost optimized for anti-believability, “rapture of the nerds”, much public ridicule, etc.
A moment’s googling finds this:
http://www.buglife.org.uk/Resources/Buglife/Buglife%20Annual%20Report%20-%20web.pdf
($863,444)
I leave it to readers to judge whether Tim is flogging a dead horse here.
Not the sort of thing that could, you know, give you nightmares?
The sort of thing that could give you nightmares is more like the stuff that is banned. That is different from the mere “existential risk” message.
Alas, I have to reject your summary of my position. The situation as I see it:
DOOM-based organisations are likely to form with a frequency which depends on the extent to which the world is perceived to be at risk;
They are likely to form from those with the highest estimates of p(DOOM);
Once they exist, they are likely to try and grow, much like all organisations tend to do—wanting attention, time, money and other available resources;
Since they are funded in proportion to the perceived value of p(DOOM), such organisations will naturally promote the notion that p(DOOM) is a large value.
This is all fine. I accept that DOOM-based organisations will exist, will loudly proclaim the coming apocalypse, and will find supporters to help them propagate their DOOM message. They may be ineffectual, cause despair and depression, or help save the world, depending on their competence and on the extent to which their paranoia turns out to be justified.
However, such organisations seem likely to be very bad sources of information for anyone interested in the actual value of p(DOOM). They have obvious vested interests.
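The selection effect behind this argument is easy to make concrete. A toy simulation (a minimal sketch; the population size, the “true” risk level, and the noise model are all invented purely for illustration) shows how the estimates voiced by org founders can sit far above the population average even when nobody is being dishonest:

```python
import random

random.seed(0)

# Invented numbers for illustration: a population of 10,000 people,
# each holding a noisy estimate of p(DOOM) around a "true" value.
TRUE_P_DOOM = 0.05
estimates = [min(max(random.gauss(TRUE_P_DOOM, 0.03), 0.0), 1.0)
             for _ in range(10_000)]

# Per the argument above, DOOM-based organisations form from those
# with the highest estimates -- i.e. the top tail of the distribution.
founders = sorted(estimates, reverse=True)[:20]

print(f"population mean estimate:  {sum(estimates) / len(estimates):.3f}")
print(f"org-founder mean estimate: {sum(founders) / len(founders):.3f}")
# The founders' mean comes out roughly three times the population mean,
# purely from self-selection.
```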
Agreed that x-risk orgs are a biased source of info on P(risk) due to self-selection bias. Of course you have to look at other sources of info, you have to take the outside view on these questions, etc.
Personally I think that we are so ignorant and irrational as a species (humanity) and as a culture that there’s simply no way to get a good, stable probability estimate for big important questions like this, much less to act rationally on the info.
But I think your pooh-pooh’ing of such infantile and amateurish efforts as there are is silly when the reasoning is entirely bogus.
Why don’t you refocus your criticism on the more legitimate weakness of existential-risk efforts: that they are highly likely to be irrelevant (either futile or unnecessary), since, by their own prediction, the relevant risks are highly complex and hard to mitigate, and people in general are highly unlikely either to understand the issues or to cooperate on them.
The most likely route to survival would seem to be that the entire model of the future propounded here is wrong. But in that case we move into the domain of irrelevance.
I hope I am not “pooh-pooh’ing”. There do seem to be a number of points on which I disagree. I feel a bit as though I am up against a propaganda machine—or a reality distortion field. Part of my response is to point out that the other side of the argument has vested interests in promoting a particular world view—and so its views on the topic should be taken with multiple pinches of salt.
I am not sure I understand fully—but I think the short answer is that I don’t agree with that. What risks there are, we can collectively do things about. I appreciate that it isn’t easy to know what to do, and am generally supportive and sympathetic towards efforts to figure that out.
Probably my top recommendation on that front so far is corporate reputation systems. We have these huge, powerful creatures lumbering around on the planet, and governments provide little infrastructure for tracking their bad deeds. Reviews and complaints scattered around the internet are just not good enough. If there’s much chance of corporation-originated intelligent machines, reputation-induced cooperation would help encourage these entities to be good and do good.
If our idea of an ethical corporation is one whose motto is “don’t be evil”, then that seems to be a pretty low standard. We surely want our corporations to aim higher than that.
One important aspect of corporate reputation is what it’s like to work there—and this is important at the department level and below.
Abusive work environments cause a tremendous amount of misery, and there’s no reliable method of finding out whether a job is likely to land you in one.
This problem is made worse if leaving a job makes a potential employee seem less reliable.
A universal reputation system would also need some method of updating and verification. Credit agencies are especially notable for being sloppy about this.
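To make “updating and verification” a bit more concrete, here is a minimal sketch of what such a record store could look like (a hypothetical design, not a description of any existing system): entries are append-only, and corrections reference the entry they supersede instead of silently overwriting it, so a sloppy or disputed claim always leaves an audit trail:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical design sketch -- not any real system's API.

@dataclass
class Entry:
    subject: str                   # e.g. a corporation's registered name
    claim: str                     # the reported good or bad deed
    source: str                    # who reported it
    verified: bool = False         # has anyone checked the claim?
    supersedes: int | None = None  # index of the entry this corrects
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class ReputationLog:
    """Append-only log: corrections reference earlier entries instead of
    overwriting them, so the history of every claim stays auditable."""

    def __init__(self) -> None:
        self._entries: list[Entry] = []

    def add(self, entry: Entry) -> int:
        self._entries.append(entry)
        return len(self._entries) - 1

    def correct(self, index: int, corrected: Entry) -> int:
        # Record the fix as a new entry pointing at the old one.
        corrected.supersedes = index
        return self.add(corrected)

    def current_view(self, subject: str) -> list[Entry]:
        """Latest, uncorrected version of each claim about `subject`."""
        superseded = {e.supersedes for e in self._entries
                      if e.supersedes is not None}
        return [e for i, e in enumerate(self._entries)
                if e.subject == subject and i not in superseded]
```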
What risks there are, we can collectively do things about.
Not necessarily. The risk might be virtually unstoppable, like a huge oil tanker compared to the force of a single person swimming in the water trying to slow it down.
What I mean is that, in my opinion, most of the risks under discussion are not like that. Large meteorites are a bit like that—but they are not very likely to hit us soon.
Ok, I see. Well, that’s just a big factual disagreement then.
The usual Singularity Institute line is that it is worth trying too, I believe. As to what p(success) is, the first thing to do would be to make sure that the parties involved mean the same thing by “success”. Otherwise, comparing values would be rather pointless.
This all reminds me of the Dirac delta function. Its width is infinitesimal but its area is 1. Sure, it’s worth trying in the “Dirac Delta Function” sense.
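To unpack the analogy (my gloss, not the commenter’s): the defining property of the delta function is
\[
  \delta(x) = 0 \;\; \text{for } x \neq 0, \qquad \int_{-\infty}^{\infty} \delta(x)\,dx = 1,
\]
so an expected value \(p \cdot V\) can remain finite, or even large, as \(p(\text{success}) \to 0\), provided the assumed payoff \(V\) grows without bound: “worth trying” only in the sense that an infinitesimal chance is multiplied by an astronomical stake.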
Agreed that there are vested interests potentially biasing reasoning.