i’ve written before about how aligned-AI utopia can very much conserve much of what we value now, including putting in effort to achieve things that are meaningful to ourselves or other real humans. on top of alleviating all (unconsented) suffering and (unconsented) scarcity and all the other “basics”, of course.
and without aligned-AI utopia we pretty much die-for-sure. there aren’t really attractors in-between those two.
That’s my guess too, but I’m not highly confident in the [no attractors between those two] part.
It seems conceivable to have a not-quite-perfect alignment solution with a not-quite-perfect self-correction mechanism that ends up orbiting utopia, but neither getting there, nor being flung off into oblivion.
It’s not obvious that this is an unstable, knife-edge configuration. It seems possible for correction/improvement to be easier at a greater distance from utopia (whether that correction/improvement is driven by our own agency or by other systems).
If stable orbits exist, it’s not obvious that they’d be configurations we’d endorse (or that the things we’d become would endorse them).
okay, thinking about it more, i think the reason i believe this is a slack-vs-moloch situation.
if we get a lesser utopia, do we have enough slack to build up a better utopia, even slowly? if not, do we have enough slack to survive-at-all?
i feel like “we have exactly enough slack to live but not improve our condition” is a pretty unlikely state of affairs; most likely, either we don’t have enough slack to survive (and we die, though maybe slowly) or we have more than enough to survive (and we improve our condition, though maybe slowly, all the way to the greater-utopia-we-didn’t-start-with).