Hmm, maybe I misunderstood your point. I thought you were talking about using simulations to anthropically capture AIs. As in, creating more observer moments where AIs take over less competent civilizations but are actually in a simulation run by us.
If you’re happy to replace “simulation” with “prediction in a way that doesn’t create observer moments” and think the argument goes through either way then I think I agree.
I agree that paying out to less competent civilizations if we find out we’re competent and avoid takeover might be what you should do (as part of a post-hoc insurance deal via UDT or as part of a commitment or whatever). As in, this would help avoid getting killed if you ended up being a less competent civilization.
The smaller thing won’t work exactly for getting us bailed out. I think infinite ethics should be resolvable, and will likely be resolved with something roughly like a notion of reality-fluid, which implies that you just have to pay more for higher-measure places. (Of course, people might disagree about the measure, etc.)
I’m happy to replace “simulation” with “prediction in a way that doesn’t create observer moments” if we assume we are dealing with UDT agents (which I’m unsure about) and that it’s possible to run accurate predictions of complex agents’ decisions without creating observer moments (which I’m also unsure about). I think running simulations, on some meaning of “simulation”, is not really more expensive than getting accurate predictions, and the cost of running the sims is likely small compared to the size of the payment anyway. So I like talking about running sims, in case we get an AI that takes sims more seriously than prediction-based acausal trade, but I try to make sure that all my proposals also make sense from the perspective of a UDT agent, with predictions in place of simulations. (The exception is the “Can we get more than this?” proposal, which relies on the AI not being UDT. I agree it’s likely to fail for various reasons, but I decided it was still worth including in the post, in case we get an AI for which it actually works, which I don’t find all that unlikely.)