Most common anti-safety arguments I see in the wild, not steel-manned but also not straw-manned:
There’s no evidence that a malign superintelligence currently exists, therefore the claim can be dismissed without evidence
We’re faking being worried because if we truly were, we would use violence
Yudkowsky is calling for violence
Claiming that something as momentous as the end of the world could happen might influence people to commit violence, therefore warning about the end of the world is bad
Doomers can’t provide the exact steps a superintelligence would take to eliminate humanity
When the time comes, we’ll just figure it out
There were other new technologies that people warned would cause bad outcomes
We didn’t know whether nuclear experimentation would end the world, but we went ahead with it anyway and the world didn’t end (omitting that careful effort was first put into ensuring this risk was minuscule)
My personal favorite: AI doom would happen in the future, and anything happening in the future is unfalsifiable, therefore it is not a scientific claim and should not be taken seriously.
Currently, doomers seem to have a lot of trouble explaining the motivation. The “how” steps are a lot easier.
This post was meant as a summary of common rebuttals. I haven’t actually heard much questioning of motivation, as instrumental convergence seems fairly intuitive. The more common question is how an AI could physically achieve the destruction.