It typically has the feature that you, all your relatives, friends and loved ones die—probably enough for most people to seriously want to avoid it. Michael Vassar talks about “eliminating everything that we value in the universe”.
Maybe better super-stimuli could be designed—but there are constraints. Those involved can’t just make up whichever apocalypse they think would be scariest.
Despite that, some positively hell-like scenarios have been floated around recently. We will have to see if natural selection on these “hell” memes results in them becoming more prominent—or whether most people just find them too ridiculous to take seriously.
Yes, you can only look at them through a camera lens, as a reflection in a pool or possibly through a ghost! ;)
I think you’re trying to fit the facts to the hypothesis. A negative singularity is, in my opinion, at least 50 years away. Many people I know will already be dead by then, including me if I die at the same point in life as the average of my family.
And as a matter of fact it is failing to actually get much in the way of donations, compared to donations to the church, which is using hell as a superstimulus, or even compared to campaigns to help puppies (about $10bn in total, as far as I can see).
It is also not well-optimized to be believable.
It doesn’t work. Jehovah’s Witnesses don’t even believe in a hell, yet they are gaining a lot of members each year and donations are on the rise. Donations are not even mandatory; you are just asked to donate if possible. The only incentives they use are positive ones.
People will do anything for their country if it asks them to give their lives. Suicide bombers also do not blow themselves up because of negative incentives but because they are promised help and money for their families. Some also believe that they will enter paradise. Negative incentives make many people reluctant. There is much less crime in the EU than in the U.S., which has the death penalty. Here you get out of jail after at most ~20 years, and there is almost no violence in jails either.
I take it that you would place (t(positive singularity) | positive singularity) a significant distance further still?
This got a wry smile out of me. :)
(t(positive singularity) | positive singularity)
I’m going to say 75 years for that. But really, this is becoming very much total guesswork.
I do know that an AGI -ve singularity won’t happen in the next two decades, and I think one can bet that it won’t happen for another few decades after that either.
It’s still interesting to hear your thoughts. My hunch is that the -ve --> +ve step is much harder than the ‘singularity’ step, so I would expect the time estimates to reflect that somewhat. But there are all sorts of complications there, and my guesswork is even more guess-like than yours!
If you find anyone who is willing to take you up on a bet of that form given any time estimate and any odds then please introduce them to me! ;)
Many plausible ways to S^+ involve something odd or unexpected happening. Whole brain emulation (WBE) might make computational political structures possible, i.e. political structures based inside a computer full of WBEs. This might change the way humans cooperate.
Suffice it to say that FAI doesn’t have to come via the expected route of someone inventing AGI and then waiting until they invent “friendliness theory” for it.
Church and cute puppies are likely worse causes, yes. I listed animal charities in my “Bad causes” video.
I don’t have their budget at my fingertips—but SIAI has raked in around 200,000 dollars a year for the last few years. Not enormous—but not trivial. Anyway, my concern is not really with the cash, but with the memes. This is a field adjacent to one I am interested in: machine intelligence. I am sure there will be a festival of fear-mongering marketing in this area as time passes, with each organisation trying to convince consumers that its products will be safer than those of its rivals. “3-laws-safe” slogans will be printed. I note that Google’s recent Chrome ad was full of data-destruction imagery—and ended with the slogan “be safe”.
Some of this is potentially good. However, some of it isn’t—and is more reminiscent of the Daisy ad.
To me, $200,000 for a charity seems to be pretty much the smallest possible amount of money. Can you find any charitable causes that receive less than this?
Basically, you are saying that SIAI DOOM fearmongering is a trick to make money. But really, it fails to satisfy several important criteria:
it is shit at actually making money. I bet you that there are “save the earthworm” charities that make more money.
it is not actually frightening. I am not frightened; quick painless death in 50 years? boo-hoo. Whatever.
it is not optimized for believability. In fact it is almost optimized for anti-believability, “rapture of the nerds”, much public ridicule, etc.
A moment’s googling finds this:
http://www.buglife.org.uk/Resources/Buglife/Buglife%20Annual%20Report%20-%20web.pdf
($863,444)
I leave it to readers to judge whether Tim is flogging a dead horse here.
Not the sort of thing that could, you know, give you nightmares?
The sort of thing that could give you nightmares is more like the stuff that is banned. This is different from the mere “existential risk” message.
Alas, I have to reject your summary of my position. The situation as I see it:
DOOM-based organisations are likely to form with a frequency which depends on the extent to which the world is perceived to be at risk;
They are likely to form from those with the highest estimates of p(DOOM);
Once they exist, they are likely to try to grow, much like all organisations tend to do—wanting attention, time, money and other available resources;
Since they are funded in proportion to the perceived value of p(DOOM), such organisations will naturally promote the notion that p(DOOM) is a large value (a toy sketch of this loop follows below).
This is all fine. I accept that DOOM-based organisations will exist, will loudly proclaim the coming apocalypse, and will find supporters to help them propagate their DOOM message. They may be ineffectual, cause despair and depression, or help save the world—depending on their competence, and on the extent to which their paranoia turns out to be justified.
However, such organisations seem likely to be very bad sources of information for anyone interested in the actual value of p(DOOM). They have obvious vested interests.
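To illustrate the incentive loop just described, here is a toy sketch in Python. Every number in it is invented for illustration; none of this is drawn from any real organisation’s figures:

```python
# Toy model of the dynamic above: funding tracks perceived p(DOOM),
# and promotion nudges perception upward regardless of the true risk.
# All numbers are invented for illustration.
perceived_p_doom = 0.01              # public's initial risk estimate
nudge_per_year = 0.002               # perception shift from a year of promotion
funding_per_unit_belief = 1_000_000  # dollars raised per unit of perceived risk

for year in range(10):
    funding = perceived_p_doom * funding_per_unit_belief
    print(f"year {year}: perceived p(DOOM) = {perceived_p_doom:.3f}, "
          f"funding = ${funding:,.0f}")
    # The organisation's promotion raises the *perceived* risk,
    # independently of the actual value of p(DOOM).
    perceived_p_doom = min(1.0, perceived_p_doom + nudge_per_year)
```

Nothing in the sketch says the perceived value is wrong; the point is only that the feedback runs one way, which is why such organisations make poor sources of information about p(DOOM) itself.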
Agreed that x-risk orgs are a biased source of info on P(risk) due to self-selection bias. Of course you have to look at other sources of info, you have to take the outside view on these questions, etc.
Personally I think that we are so ignorant and irrational as a species (humanity) and as a culture that there’s simply no way to get a good, stable probability estimate for big important questions like this, much less to act rationally on the info.
But I think your pooh-pooh’ing of such infantile and amateurish efforts as there are is silly when the reasoning behind it is entirely bogus.
Why don’t you refocus your criticism on the more legitimate weakness of existential-risk efforts: that they are highly likely to be irrelevant (either futile or unnecessary), since, by their own prediction, the relevant risks are highly complex and hard to mitigate, and people in general are highly unlikely either to understand the issues or to cooperate on them.
The most likely route to survival would seem to be that the entire model of the future propounded here is wrong. But in that case we move into the domain of irrelevance.
I hope I am not “pooh-pooh’ing”. There do seem to be a number of points on which I disagree. I feel a bit as though I am up against a propaganda machine—or a reality distortion field. Part of my response is to point out that the other side of the argument has vested interests in promoting a particular world view—and so its views on the topic should be taken with multiple pinches of salt.
I am not sure I understand fully—but I think the short answer is because I don’t agree with that. What risks there are, we can collectively do things about. I appreciate that it isn’t easy to know what to do, and am generally supportive and sympathetic towards efforts to figure that out.
Probably my top recommendation on that front so far is corporate reputation systems. We have these huge, powerful creatures lumbering around on the planet, and governments provide little infrastructure for tracking their bad deeds. Reviews and complaints scattered around the internet are just not good enough. If there’s much chance of corporation-originated intelligent machines, reputation-induced cooperation would help encourage these entities to be good and do good.
If our idea of an ethical corporation is one whose motto is “don’t be evil”, then that seems to be a pretty low standard. We surely want our corporations to aim higher than that.
One important aspect of corporate reputation is what it’s like to work there—and this matters at the department level and below.
Abusive work environments cause a tremendous amount of misery, and there’s no reliable method of finding out whether a job is likely to land you in one.
This problem is made worse if leaving a job makes a potential employee seem less reliable.
Another aspect of a universal reputation system is that there needs to be some method of updating and verification. Credit agencies are especially notable for being sloppy.
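To make “updating and verification” slightly more concrete, here is a minimal sketch of what a verifiable reputation record might look like. Every name in it (Report, ReputationLedger, and so on) is hypothetical, and a real system would need genuine evidence-checking rather than a boolean flag:

```python
# Minimal sketch of a verifiable reputation ledger (hypothetical names).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Report:
    subject: str        # e.g. a corporation's registered name
    claim: str          # the complaint or review text
    source: str         # who filed it, needed for later verification
    verified: bool = False
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ReputationLedger:
    """Collects reports and counts only the verified ones toward a score."""

    def __init__(self):
        self.reports = []

    def file(self, report):
        self.reports.append(report)

    def verify(self, report):
        # A real system would check evidence and the filer's identity here;
        # this sketch just flips the flag.
        report.verified = True

    def score(self, subject):
        # Crude score: the count of verified adverse reports.
        return sum(1 for r in self.reports
                   if r.subject == subject and r.verified)
```

The point of the sketch is only that updates are attributed and time-stamped, so sloppy entries (the credit-agency failure mode) can at least be traced and corrected.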
Not necessarily. The risk might be virtually unstoppable, like a huge oil tanker compared to the force of a single person swimming in the water trying to slow it down.
What I mean is that, in my opinion, most of the risks under discussion are not like that. Large meteorites are a bit like that—but they are not very likely to hit us soon.
Ok, I see. Well, that’s just a big factual disagreement then.
The usual Singularity Institute line is that it is worth trying too, I believe. As to what p(success) is, the first thing to do would be to make sure that the parties involved mean the same thing by “success”. Otherwise, comparing values would be rather pointless.
This all reminds me of the Dirac delta function. Its width is infinitesimal but its area is 1. Sure, it’s worth trying in the “Dirac delta function” sense.
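For concreteness, a standard way to write that property (this is just the textbook limit-of-Gaussians construction, added here for illustration):

\[
\delta(x) \;=\; \lim_{\varepsilon \to 0^+} \frac{1}{\varepsilon\sqrt{\pi}}\, e^{-x^2/\varepsilon^2},
\qquad
\int_{-\infty}^{\infty} \delta(x)\, dx \;=\; 1 .
\]

The spike gets arbitrarily narrow while the integral stays fixed at 1; the analogy is a p(success) that shrinks toward zero while the stakes grow, so the expected value need not shrink with it.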
Agreed that there are vested interests potentially biasing reasoning.
I think your disapproval of animal charities is based on circular logic, or at least an unproven premise.
You seem to be saying that animal causes are unworthy recipients of human effort because animals aren’t humans. However, people care about animals because of the emotional effects of animals. They care about people because of the emotional effects of people. I don’t think it’s proven that people only like animals because the animals are super-stimuli.
I could be mistaken, but I think that a more abstract utilitarian approach grounds out in some sort of increased enjoyment of life, or else it’s an effort to assume a universe’s-eye view of what’s ultimately valuable. I’m inclined to trust the former more.
What’s your line of argument for supporting charities that help people?
I usually value humans much more than I value animals. Given a choice between saving a human or N non-human animals, N would normally have to be very large before I would even think twice about it. Similar values are enshrined in law in most countries.
To the extent that the law accurately represents the values of the people it governs, charities are not necessary. Values enshrined in law are by necessity irrelevant.
(Noting by way of pre-emption that I do not require that laws should fully represent the values of the people.)
I do not agree. If the law says that killing a human is much worse than killing a dog, that is probably a reflection of the views of citizens on the topic.
And yet this is not contrary to my point. Charity operates, and only needs to operate, in areas where laws do not already provide a solution. If there were a law specifying that dying kids get trips to Disneyland and visits from popstars, then there wouldn’t be a “Make-A-Wish Foundation”.
You said the law was “irrelevant”—but there’s a sense in which we can see consensus human values about animals by looking at what the law dictates as punishment for their maltreatment. That is what I was talking about. It seems to me that the law has something to say about the issue of the value of animals relative to humans.
For the most part, animals are given relatively few rights under the law. There are exceptions for some rare ones. Animals are routinely massacred in huge numbers by humans—including some smart mammals like pigs and dolphins. That is a broad reflection of how relatively valuable humans are considered to be.
And once it’s enshrined in law, it no longer matters whether citizens think killing a human is worse or better than killing a dog. I think that is what wedrifid was noting.
You may be interested in Alan Dawrst’s essays on animal suffering and animal suffering prevention.
I believe the numbers are actually higher than $200,000. SIAI’s 2008 budget was about $500,000. 2006 was about $400,000 and 2007 was about $300,000 (as listed further in the linked thread). I haven’t researched to see if gross revenue numbers or revenue from donations are available. Curiously, Guidestar does not seem to have 2009 numbers for SIAI, or at least I couldn’t find those numbers; I just e-mailed a couple people at SIAI asking about that.
That being said, even $500,000, while not trivial, seems to me a pretty small budget.
Sorry, yes, my bad. $200,000 is what they spent on their own salaries.