Because a benevolent ASI would make everything okay.
(If worrying about those is something you'd find fun, you could choose to experience contexts where you still would, like complex game/fantasy worlds.)
To be more precise: extrapolated over time, for any undesired selection process or other problem of that kind, either the problem is large enough that it gets exacerbated until it eats everything (and then that's just extinction, but slower), or it's not large enough to win out, in which case aligned superintelligence(s) + coordinated human action are enough to stamp it out in the long run, which means such problems won't be an issue for almost all of the future.
A problem that is just large enough that coordination can't stamp it out, but not large enough to eat everything, would sit in a very fragile equilibrium, and I think that's pretty unlikely.