We still face a hard problem, since misuse of AI, for example using it to secure permanent control over the world, would be extremely tempting. Under this assumption, outcomes where not everyone dies but which are just as bad or worse become much more likely than they would be under its negation. I think the answer to avoiding awful futures looks similar: we agree globally to slow down before the technology could plausibly pose a large risk, which probably means right around yesterday. Except that instead of just using the extra time to do scientific research, we also make the appropriate changes to our societies and governments.