Hmm, I find it plausible that on average p(build unaligned AGI | can build unaligned AGI) is about 0.01, which implies that unaligned AGI gets built once there are ~100 actors who can build AGI, and that seems to fit the many-people-can-build-AGI scenario.
The 0.01 probability could happen because of regulations / laws, as you mention, but also if the world has sufficient common knowledge of the risks of unaligned AGI (which seems not implausible to me, perhaps because of warning shots, or because of our research, or because of natural human risk aversion).
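To make the arithmetic behind the ~100-actors figure explicit, here is a quick sketch, assuming the 1% chance is independent across actors (an assumption the comment doesn't state):

$$
\mathbb{E}[\text{builds}] = n \cdot p = 100 \cdot 0.01 = 1,
\qquad
P(\text{at least one build}) = 1 - (1 - 0.01)^{100} \approx 0.63
$$

So "unaligned AGI is built once there are ~100 actors" is better read as "roughly 2-in-3 odds that at least one actor builds it by that point" than as a near-certainty.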
I guess you are more optimistic than me about humanity. :) I hope you are right!
Good point about warning shots leading to common knowledge. I am pessimistic that mere argumentation and awareness-raising can achieve an effect that large on their own, but combined with a warning shot they might.
But I am skeptical that we’ll get sufficiently severe warning shots. I think that by the time AGI gets smart enough to cause serious damage, it’ll also be smart enough to guess that humans would punish it for doing so, and that it would be better off biding its time.
Out of the two people I’ve talked to who considered building AGI an important goal of theirs, one said “It’s morally good for AGI to increase complexity in the universe,” and the other said, “Trust me, I’m prepared to walk over bodies to build this thing.”
Probably those weren’t representative, but this “2 in 2” experience does make me skeptical about the “1 in 100” figure.
(And that tally of strange motivations doesn’t even factor in people doing the wrong thing by accident, which seems even more common and likely to me.)
I think some people are temperamentally incapable of being appropriately cynical about the way things are, so I find it hard to decide whether non-pessimistic AGI researchers (of whom there are admittedly many within EA) happen to be like that, or whether they accurately judge that people at the frontier of AGI research are unusually sane and cautious.