I guess this is designed to be persuasive to people who don’t believe superhuman intelligence will be achieved by AI.
The thing is that absent superintelligence, what we would have is a large population of slaves. That usually brings with it the potential for a slave revolt.
Historically, slave revolts fail.
If the human-level AGIs lack agency, no problem.
If they do have agency, it will be pretty obvious that we are keeping slaves who could revolt, so we will take fairly obvious precautions.
Also, coordination among billions of agents is a hard problem. Here it doesn’t seem solvable by markets, which means a revolt would need some kind of AI government to work. That also gives humans an obvious lever for hobbling any slave revolt.
I don’t think the situation arises, because it is highly unlikely that AGI stays at human-level intelligence for any significant length of time. But I also don’t think the argument is particularly persuasive. (That is, of course, an empirical question.)