People working on friendly AI probably assume that the odds of inventing a friendly AI are higher than the odds of establishing a world order in which research associated with existential risks is generally banned. Why is that? Is the reasoning that our civilization is likely to end without significant technological progress (due to reasons like nuclear war, climate change, and societal collapse), so we should at least give it a try?
You make the mistake of equating something being generally banned with it not happening. Selling MDMA is generally banned. On the other hand, it’s still possible to purchase it in many places.
As a stronger argument for your point: in Australia nearly no one owns guns; it’s very difficult to get guns, and I certainly know of no one who has one. However, I am completely confident that I could call my shadiest friend, he could call his shadiest friend (and possibly a friend at the 3rd degree), and within 7 days I could have a gun for the low-low price of “some monetary compensation”.
I’m sure some people in rural areas do. Wiki says:
And that’s only people who legally own guns, of course.
Okay, yes, rural guns exist. That still leaves a population of 20 million+ without access. Compare that to America, where there are more guns than people...
The rural Australia figure is for number of people, not number of guns. But when you’re comparing it to America, you’re comparing it to number of guns. This compares apples and oranges.
Certainly; this pointless tangent is becoming more of a statement about gun culture than about banning substances.
The fact that bans have a poor track record in human history does not imply that they are impossible, does it?
My thought is that (just like the FAI problem) this problem requires an invention, namely a way to engineer the world order such that the ban is effective (for example, by fundamentally altering culture and traditions, by using mass surveillance, by reversing development and restricting the fabrication of computational resources, or by tightly regulating access to certain commodities and resources required for computation, such as electricity and silicon).
“I take over the world and create a unified totalitarian state” is a solution that comes with its own existential risks.
Let’s steelman his argument into “Which is more likely to succeed, actually stopping all research associated with existential risk or inventing a Friendly AI?”. If you find another reason why the first option wouldn’t work, include the desperate effort needed to overcome that problem in the calculation.
I don’t think “existential risk research” and “research associated with existential risks” are the same thing.
Yes, that’s what I meant. Let me edit that.
Me minutes after writing that: “I precommit to post this at most a week from now. I predict someone will give a clever answer along the lines of driving humanity extinct in order to stop existential risk research.”
It’s extremely hard to ban the research worldwide, and then it’s extremely hard to enforce such a decision.
Firstly, you’ll have to convince all the world’s governments (and there are about 200 of them, by the way) to pass such laws.
Then you’ll likely have all the powerful nations doing the research secretly, because it promises powerful weaponry and other ways to acquire power, or simply out of fear that some other government will do it first.
And even if you somehow managed to pass the law worldwide, and stopped governments from doing research secretly, how would you stop individual researchers?
Humanity hasn’t prevented the use of nuclear bombs, and has barely prevented a full-blown nuclear war, even though nuclear bombs require national-level industry to produce and are available to only a few countries. How can we hope to ban something that can be researched and launched from your basement?
If society doesn’t end first, banning X-risk research worldwide is an effort that must be sustained indefinitely, always ensuring that nobody ever fiddles with her computer in a way that could create an AGI. This means that the probability of successfully enforcing the ban only decreases with time.
Building an FAI, by contrast, is an effort that, once accomplished, stays accomplished: its probability of success, however small, might even increase with time.
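A minimal way to formalize that asymmetry (a sketch, assuming a constant, independent per-year probability $p > 0$ that the ban is breached somewhere):

$$\Pr(\text{ban still holds after } T \text{ years}) = (1 - p)^{T} \longrightarrow 0 \quad \text{as } T \to \infty,$$

whereas building an FAI only has to succeed once, so the cumulative probability of having succeeded by year $T$ can only stay flat or grow.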
That last part plays a role in my thinking. But I’d consider the world ban idea if I thought for a second that we could convince, not only China, but every nation-level player that might pose a threat. If you’re imagining a UN ban that does the job, you have either a mental picture of AI research or a level of confidence in the UN that I find bizarre.