If a simple philosophical argument can cut the expected odds of AI doom by an order of magnitude, we might not change our current plans, but it suggests that we have a lot of confusion on the topic that further research might alleviate.
And more generally, “the world where we almost certainly get killed by ASI” and “the world where we have an 80% chance of getting killed by ASI” are different worlds, and, setting aside motives to lie for propaganda purposes, if we actually live in the latter we should not say we live in the former.
It’s the first: there’s a lot of uncertainty. I don’t think anyone is lying deliberately, although everyone’s beliefs tend to follow what they think will produce good outcomes. This is called motivated reasoning.
I don’t think this changes the situation much, except to make coordination harder. Rushing full speed ahead while we don’t even know the dangers is pretty dumb, but some people genuinely believe the dangers are small, so they’re going to rush ahead. There aren’t strong arguments or a strong consensus for the danger being extremely high, even though the opinions of the most thorough thinkers put the risk in the alarmingly high, 50%-plus range.
Add to this disagreement the fact that most people are neither longtermist nor utilitarian; they’d like a chance to get rich and live forever even if it risks humanity’s future.