Basic Q: has anyone written much down about what sorts of endgame strategies you’d see just-before-ASI, from the perspective of “it’s about to go well, and we want to maximize the benefits of it”?
For example: if we saw OpenPhil suddenly make a massive push to just mitigate mortality at the cost of literally every other development goal they have, I might suspect that they suspect we’re about to all be immortal under ASI, and they’re trying to get as many people as possible to that future…
My guess is that we wouldn’t actually know with high confidence before (and likely even for some time after) the things-will-definitely-be-fine point.
E.g. 3 months after safe ASI people might still be publishing their alignment takes.
Oh, to be clear, I’m not sure this is actually at all likely; I was just curious whether anyone had explored the possibility conditional on it being likely.
Endgame strategies from whom?
A lot of powerful people would focus on being the ones to control it when it happens, so that they’d control the future rather than be subject to someone else’s control of it. OpenPhil is about the only org that would think first of the public benefit rather than the dangers of other humans controlling it. And it’s not a terribly powerful org, particularly relative to governments.
I was being intentionally broad here. For the purposes of this particular post, I’m probably less interested in the “who controls the future” swerves and more interested in “what else would interested, agentic actors do” questions.
It is not at all clear to me that OpenPhil is the only org that feels this way. I can think of several non-EA-ish charities that, if they genuinely 100% believed “none of the people you care for will die of the evils you fight if you can just keep them alive for the next 90 days,” would plausibly do some interestingly agentic stuff.