Yes, that would qualify. Even if it turns out that the AI was actually treacherous and kills us all later, that would still fall under a class of world-models he disclaimed.
Likewise, there was a disagreement about the speed of world GDP growth before transformative AI.
We might know that Eliezer is right, for a very short time before we all die.
As far as I understand his model, there might be a few AGIs that work like this. But if you have nine AGIs like this, and then someone creates an AGI that optimizes for world domination, that AGI is likely to take over.
A few good AGIs that don't make pivotal moves don't end the risk.