those sound like secondhand positions to me. not like those people were the originators of the reasoning. I think a pause is likely to guarantee we die though. we need to actually resist all bad directions, which a pause just lets some people ignore. pauses could never be enforced well enough to save us without an ai that could already save us.
If AGI is sufficiently nontrivial, a delay of a few years might be feasible and give time to decrease doom. If AGI requires enormous datacenters, outlawing production of GPUs and treating existing ones like illegal firearms might lead to indefinite delay (though it probably takes at least an observed disaster for that to enter the Overton window).
I see your argument moving the conclusion (pause becoming worse than no-pause) when expecting imminent AGI that only needs modest compute and doesn’t guarantee doom. Then the only pause that works is an AI-enforced one, and with nobody working on that, a pause can’t outweigh the cost of burning part of the alignment-progress timeline.
How does a pause let us ignore bad directions?
a pause lets some people ignore the pause and move in bad directions. we need to be able, as a civilization, to resist the tugs that pull society into getting sucked into AIs. the AIs of today will take longer to kill us all, but they’ll still degrade our soul-data; compare the YouTube recommender. authoritarian cultures that want to destroy humanity’s soul might agree not to make bigger ai, but today’s big ai is plenty. it’s not like not pausing is massively better; nothing but drastically speeding up safety and liberty-generating alignment could save us.
Irrevocable soul-data degradation from modern AI within decades doesn’t seem likely though, if AI doesn’t develop further. AI that develops further in undesirable ways seems a much larger and faster threat to liberty/self-determination, even if it doesn’t kill everyone. And if not making bigger AI for a while is feasible, it gives time to figure out how to make better AI. If that time goes unused, that’s no improvement, but all else equal, the option to improve is better than its absence.
every death and every forgotten memory is soul degradation. we won’t be able to just reconstruct everyone exactly.
Those are the tradeoffs from hell we have to confront; there are arguments on either side of the decision. I was responding to your “AIs of today will take longer to kill us all, but they’ll still degrade our soul-data”, a claim about AIs of today (as opposed to AIs of tomorrow), not about human mortality. If AIs of tomorrow would eat humanity’s soul outright, its degradation from mortality and forgetting is the lesser evil that persists while we get better at doing something about AIs of tomorrow. (There is also growth while humanity lives, hope of finding a way forward, not only ongoing damage.)
A pause opposes development of AIs that could prevent rogue AGIs. If rogue AGIs are likely despite a pause, and it’s feasible to develop AIs sufficiently aligned to prevent rogue AGIs, then a pause increases doom. The premises of the argument are contentious, but within its scope the argument seems valid.