We know economic growth cannot be sustained long term (do the math). Even if we thought longer term (rather than the current “race to AI” thinking), we would still need to decide when to stop pursuing economic growth. Since we have to stop growth somewhere, maybe stopping it before AI is a good idea.
We have to cope with humanity as it is, at least until there’s an AI overlord. Ignoring the other counterarguments for a moment, does voluntarily stopping economic expansion and halting technological progress sound like something humanity is capable of doing?
I agree: humanity is probably not capable of stopping economic growth. And personally I’m glad of that, because I would choose UFAI over humanity in its current form (partly because, in this form, humanity is likely to destroy itself before long).
The point I was making is that if you disvalue UFAI as strongly as EY does, then avoiding UFAI may be worth forgoing all AI.