Yet I would bet that even that person, if faced instead with a policy that was going to forcibly relocate them to New York City, would be quite indignant.
A big difference is that, assuming you’re talking about futures in which AI doesn’t have catastrophic outcomes, no one will be forcibly mandated to do anything.
Another important point is that, sure, people won’t need to work, which means they will be unnecessary to the economy, barring some pretty radical human enhancement. But this downside, along with all the other downsides, looks extremely small compared to the non-AGI default: dying of aging, a 1/3 chance of getting dementia, a 40% chance of getting cancer, your loved ones dying, etc.
assuming you’re talking about futures in which AI doesn’t have catastrophic outcomes, no one will be forcibly mandated to do anything
This isn’t clear to me: does every option that involves someone being forcibly mandated to do something qualify as a catastrophe? Conceptually, there seems to be a lot of room between “some people are coerced” and “catastrophe.”
I understand the analogy in Katja’s post as: even if the post-AGI world is great, everyone is forced to move there. That world has higher GDP per capita, but it doesn’t necessarily contain the specific things people value about their current lives.
Just listing all the positive aspects of living in NYC (even if they’re very positive) might not remove all hesitation: I know my local community, my local parks, and the beloved local festival that happens every August.
If all diseases have been cured in NYC and I’m hesitant because I’ll miss the festival, then I’m probably not adequately taking the benefits into account. But if you tell me not to worry at all about moving to NYC, you’re also not taking all the costs into account, and you aren’t talking in a way that will connect with me.
A big difference is that, assuming you’re talking about futures in which AI doesn’t have catastrophic outcomes, no one will be forcibly mandated to do anything.
Why do you believe this? It seems to me that, in the unlikely event that the AI doesn’t exterminate humanity, it’s much more likely to be aligned with the expressed values of whoever has their hands on the controls at the moment of no return than with an overriding commitment to universal individual choice.