Ok, holy crap. I am going to call this the Really Scary Idea. I had not thought there could be people out there who would actually value being first with the AGI over decreasing the risk of existential disaster, but it is entirely plausible. Thank you for highlighting this for me, I really am grateful. If a little concerned.
Mind projection fallacy, perhaps? I thought the human race was more important than being the guy who invented AGI, so everyone naturally thinks that?
To reply to my own quote, then:
Well, the worst case scenario is that you die in childbirth and take the entire human race with you. That is not something I am comfortable with, regardless of whether you are.
It doesn’t matter what you are comfortable with, if the developer doesn’t have a term in their utility function for your comfort level. Even I have thought similar thoughts with regards to Luddites and such; drag them kicking and screaming into the future if we have to, etc.
And… mutual understanding in one!
I think the best way to think about it, since it helps keep the scope manageable and crystallize the relevant factors, is that it’s not “being first with the AGI” but “defining the future” (the first is the instrumental value, the second is the terminal value). That’s essentially what all existential risk management is about: defining the future, hopefully one that does not include the vanishing of us or our descendants.
But how you want to define the future (i.e., the most political terminal value you can have) is not written on the universe. So the mind projection fallacy does seem to apply.
The thing that I find odd, though I can’t find the source at the moment (I thought it was Goertzel’s article, but I didn’t find it by a quick skim; it may be in the comments somewhere), is that the SIAI seems to have had the Really Scary Idea first (we want Friendly AI, so we want to be the first to make it, since we can’t trust other people) and then progressed to the Scary Idea (hmm, we can’t trust ourselves to make a Friendly AI). I wonder if the originators of the Scary Idea forgot the Really Scary Idea or never feared it in the first place?
Making a superintelligence you don’t want before you make the superintelligence you do want has the same consequences as someone else building a superintelligence you don’t want before you build the superintelligence you do want.
You might argue that the unwanted superintelligence you build would be less bad than the one someone else would build, but we don’t care very much about the difference between tiling the universe with paperclips and tiling the universe with molecular smiley faces.
I’m sorry, but I extracted no novel information from this reply. I’m aware that FAI is a non-trivial problem, and I think work done on making AI more likely to be FAI has value.
But that doesn’t mean that believing the Scary Idea, or discussing the Scary Idea without also discussing the Really Scary Idea, decreases the existential risk involved. The estimates involved have almost no dependence on evidence, so it comes down to a comparison of priors, which does not seem sufficient to support a strong recommendation.
It may help if you view my objections as pointing out that the Scary Idea is privileging a hypothesis, not that the Scary Idea is something we should ignore.
No. Expecting a superintelligence to optimize for our specific values would be privileging a hypothesis. The “Scary Idea” is saying that most likely something else will happen.
I may have to start writing only thousand-word replies, in the hope that I can communicate more clearly in that format.
There are two aspects, as I understand it, to the question of how much work should be put into FAI. The first I would word like this: “the more thought we put into whether or not an AGI will be friendly, the more likely the AGI will be friendly.” The second I would word like this: “the more thought we put into making our AGI, the less likely our AGI will be the AGI,” i.e., the first one built. Both are wrapped up in the Scary Idea: the first part is the Idea as normally stated, the second part is its unstated consequence. The value of believing the Scary Idea is the benefit of the first minus the cost of the second.
My understanding is that we have no good estimate of the magnitude of either aspect. This isn’t astronomy, where we have a good idea of how many asteroids are out there and a pretty good idea of how they move through space. So declaring, without evidence, that the first aspect is stronger strikes me as related to privileging the hypothesis.
(I should note that I expect, without evidence, the problem of FAI to be simpler than the problem of AGI, and thus don’t think the Scary Idea has any policy implications besides “someone should work on FAI.” The risk that AGI gets solved before FAI means more people should work on FAI, not that fewer people should work on AGI.)
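To make the trade-off described above concrete, here is a minimal sketch of the “benefit of the first aspect minus the cost of the second” calculation. Every probability and payoff in it is a placeholder assumption chosen purely for illustration; nothing in this thread pins these values down, which is exactly the point being made.

```python
# Minimal sketch of the trade-off described above.
# All probabilities and payoffs are illustrative placeholders, not estimates
# drawn from the discussion; the point is that the sign of the result flips
# depending on numbers we have no good way to measure.

def value_of_prioritizing_friendliness(
    p_friendly_gain,    # increase in P(our AGI is Friendly) from the extra thought
    p_first_loss,       # decrease in P(our AGI is the first AGI) from the extra delay
    v_friendly_first,   # value of a Friendly AGI arriving first
    v_unfriendly_first, # value (negative) of an unFriendly AGI arriving first
):
    benefit = p_friendly_gain * v_friendly_first
    cost = p_first_loss * (v_friendly_first - v_unfriendly_first)
    return benefit - cost

# Two equally defensible-looking sets of guesses give opposite recommendations:
print(value_of_prioritizing_friendliness(0.20, 0.05, 1.0, -1.0))  # 0.10: prioritize Friendliness work
print(value_of_prioritizing_friendliness(0.05, 0.20, 1.0, -1.0))  # -0.35: prioritize being first
```

Under one set of made-up numbers the Scary Idea’s recommendation wins; under another, the Really Scary Idea’s does. That is the sense in which the comparison rests on priors rather than evidence.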
Expecting a superintelligence to optimize for our specific values would be privileging a hypothesis. The “Scary Idea” is saying that most likely something else will happen.
That is not exactly what Goertzel meant by “Scary Idea”. He wrote:
Roughly, the Scary Idea posits that: If I or anybody else actively trying to build advanced AGI succeeds, we’re highly likely to cause an involuntary end to the human race.
It seems to me that there may be a lot of wiggle room between failing to “optimize for our specific values” and causing “an involuntary end to the human race”. The human race is not automatically so fragile that it can only survive under the care of a god constructed in our own image.
Yes, what I described was not what Goertzel called the “Scary Idea”, but, in context, it describes the aspect of it that we were discussing.