a pause lets some people ignore the pause and move in bad directions. we need, as a civilization, to be able to resist the tugs that pull society into getting sucked into AIs. the AIs of today will take longer to kill us all, but they’ll still degrade our soul-data; compare the YouTube recommender. authoritarian cultures that want to destroy humanity’s soul might agree not to make bigger AI, but today’s big AI is plenty. it’s not like not pausing is massively better, either; nothing but drastically speeding up safety and liberty-generating alignment could save us.
Irrevocable soul-data degradation from modern AI within decades doesn’t seem likely, though, if AI doesn’t develop further. AI that develops further in undesirable ways seems a much larger and faster threat to liberty and self-determination, even if it doesn’t kill everyone. And if not making bigger AI for a while is feasible, it buys time to figure out how to make better AI. If that time goes unused, that’s no improvement, but all else equal, the option to improve is better than its absence.
every death and every forgotten memory is soul degradation. we won’t be able to just reconstruct everyone exactly.
Those are the tradeoffs from hell we have to confront; there are arguments on either side of the decision. I was responding to your “AIs of today will take longer to kill us all, but they’ll still degrade our soul-data”, a claim about AIs of today (as opposed to AIs of tomorrow), not about human mortality. If the AIs of tomorrow eat humanity’s soul outright, its degradation from mortality and forgetting is the lesser evil, one that persists while we get better at doing something about the AIs of tomorrow. (There is also growth while humanity lives, and hope of finding a way forward, not only ongoing damage.)