[Question] What is the minimum amount of time travel and resources needed to secure the future?
What is the MINIMUM amount of backwards time travel (only one step backwards, after which the remainder of the person's life is lived forward) and the MINIMUM amount of resources (let's say USD in year XXXX) a person would need to be sure that any problems associated with existential risks will be handled adequately (i.e., human flourishing)? Any specific person can be sent back; we can assume they're completely motivated to the task and have the necessary domain knowledge. Convincing everyone you're from the future does not count.
Alternatively, if this seems insufficient, what specific extra knowledge would need to be brought back that we may not know right now?
The rationale for this post is to get a better idea of what a successful AI governance or alignment plan would have looked like.
I asked a similar question about WWI rather than x-risk. It seems that we are pretty bad at these kinds of questions.
I think we are pretty bad at these kinds of questions. Counterfactuals are confusing regardless, but very complex states of the universe and the actions of tens of thousands of significant people (and billions more of unknown influence) are truly impossible to model well enough to know what “could have happened” even means.
One butterfly flapping its wings just right instead of the way it actually did.
For future x-risks, we don’t know yet.
For past x-risks, we survived, and the world is a vast, intricate, strongly interacting non-linear system, so any perturbation of the past (even one that sure looks like it would have made the Cuban Missile Crisis a less close-run thing, say) will have snowballing side-effects propagating in all directions that we simply can't predict, and is thus a bad idea. Or there may even be a side-effect we can predict: maybe the Cuban Missile Crisis being close-run was a necessary shock to the system, making people on both sides of the Cold War more careful about brinkmanship and keener on détente? So far we've survived, so (if you had a time machine) don't reroll the dice.
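To make the "snowballing side-effects" point concrete, here is a toy sketch (my illustration, not anything from the original discussion) using the logistic map, a textbook chaotic system, as a stand-in for "the world": a perturbation of one part in ten billion in the initial state grows until the two trajectories bear no resemblance to each other.

```python
# Toy illustration of sensitive dependence on initial conditions:
# the logistic map x_{n+1} = r * x_n * (1 - x_n) is chaotic at r = 4,
# so a tiny perturbation in the starting state snowballs until the
# perturbed trajectory is completely decorrelated from the original.

def logistic_map(x0: float, r: float = 4.0, steps: int = 60) -> list[float]:
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

original = logistic_map(0.4)
perturbed = logistic_map(0.4 + 1e-10)  # one "butterfly flap"

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: |difference| = {abs(original[n] - perturbed[n]):.3e}")
# The gap grows from ~1e-10 at step 0 to order 1 within a few dozen steps.
```

The real world is unimaginably higher-dimensional than this one-variable map, which is the point: if even here a 1e-10 nudge dominates the outcome within dozens of iterations, predicting the downstream effects of a deliberate change to history is hopeless.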
I know, this doesn’t help with your goal. My point is, we don’t know yet.
I think it could, at the very least, be useful to go back just 5-20 years to share alignment progress and the story of how the future played out with LLMs.