This post doesn’t come close to refuting Intelligence Explosion: Evidence and Import.
I think the world is already full of probably unfriendly supra-human intelligences...
That’s true, but intelligence as defined in this context is not merely optimization power, but efficient cross-domain optimization power. There are many reasons why the intelligence of AI+ greatly dwarfs that of human organizations; see Section 3.1 of the linked paper.
This sounds similar to a position of Robin Hanson addressed in Footnote 25 of the linked paper.
It’s not that I think the logic of this argument is incorrect so much as I think there is another related problem that we should be worrying about more.
The Singularity Institute is completely aware that there are other existential risks to humanity; its purpose is to deal with one of them. If you’re looking for a more general organization to support, I’d suggest Oxford’s Future of Humanity Institute.
I’m going to assert my position on them here without much argument because I think they are fairly sensible, but if any reader disagrees I will try to defend them in the comments.
There are many reasons why the intelligence of AI+ greatly dwarfs that of human organizations; see Section 3.1 of the linked paper.
Since an organization’s optimization power includes optimization power gained from information technology, I think that the “AI Advantages” in Section 3.1 mostly apply just as well to organizations. Do you see an exception?
This sounds similar to a position of Robin Hanson addressed in Footnote 25 of the linked paper.
Ah, thanks for that. I think I see your point: rogue AI could kill everybody, whereas a dominant organization would still preserve some people and so is less ‘interesting’.
Two responses:
First, a dominant organization seems like the perfect vehicle for a rogue AI, since it would already have all resources centralized and ready for AI hijacking. So studying the present dynamics between superintelligent organizations is important for predicting a hard-takeoff machine superintelligence.
Second, while I once again risk getting political at this point, I’d argue that an overriding concern for the total existence of humanity only makes sense if one doesn’t have any skin in the game of any of the other power dynamics going on. I believe there are ethical reasons for being concerned with some of these other games. That is well beyond the scope of this post.
The Singularity Institute is completely aware that there are other existential risks to humanity; its purpose is to deal with one of them.
That’s clear.
This sounds awfully suspicious. Are you sure you don’t have the bottom line precomputed?
Honestly, I don’t follow the line of reasoning in the post you’ve linked to. Could you summarize in your own terms?
My reason for not providing arguments up front is that I think excessive verbiage impairs readability. I would rather present justifications relevant to my interlocutor’s objections than try to anticipate everything in advance. Indeed, I can’t anticipate every objection, since this audience has more information available than I do.
However, since I have faith that we are all in the same game of legitimate truth-seeking, I’m willing to pursue dialectical argumentation until it converges.
How long did it take you to come up with this line of reasoning?
I guess over 27 years. But I stand on the shoulders of giants.
Thanks for the quick reply.
I agree that certain “organizations” can be very, very dangerous. That’s one reason why we want to create AI... because we can use it to beat these organizations (as well as fix or greatly reduce many other problems in society).
I hold that Unfriendly AI+ will be more dangerous, but if these “organizations” are as dangerous as you say, you are correct that we should put some focus on them as well. If you have a better plan to stop them than creating Friendly AI, I’d be interested to hear it. The thing you might be missing is that AI is a positive factor in global risk as well; see Yudkowsky’s relevant paper.