When you say “Tegmark IV,” I assume you mean the computable version—right?
Yep.
We have this sort of symmetry-breaker in the version of the argument that postulates, by fiat, a “UP-using dupe” somewhere, for some reason
Correction: on my model, the dupe is also using an approximation of the UP, not the UP itself. I. e., it doesn’t need to be uncomputable. The difference between it and the con men is just the naivety of the design. It generates guesses regarding what universes it’s most likely to be in (potentially using abstract reasoning), but then doesn’t “filter” these universes; doesn’t actually “look inside” and determine if it’s a good idea to use a specific universe as a model. It doesn’t consider the possibility of being manipulated through it; doesn’t consider the possibility that it contains daemons.
I. e.: the real difference is that the “dupe” is using causal decision theory, not functional decision theory.
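To make the structural difference concrete, here's a minimal toy sketch (purely illustrative; the hypothesis names, the complexity numbers, and the `looks_manipulative` flag are all made-up stand-ins, not anything like a real UP approximation). The only point is where the two designs diverge: the naive agent mixes over every universe-hypothesis it generated, weighted by simplicity alone, while the cautious agent first "looks inside" each hypothesis and drops the ones it judges to be plausible manipulation vectors.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    complexity_bits: float    # stand-in for a description length K(h)
    prediction: float         # what this hypothesis says the next observation will be
    looks_manipulative: bool  # stand-in for "contains daemons / plausible attack vector"

def prior_weight(h: Hypothesis) -> float:
    """Simplicity weighting, 2^-K(h), as in approximations of the universal prior."""
    return 2.0 ** -h.complexity_bits

def naive_prediction(hypotheses):
    """The 'dupe': mixes over everything it generated, weighted only by simplicity."""
    total = sum(prior_weight(h) for h in hypotheses)
    return sum(prior_weight(h) * h.prediction for h in hypotheses) / total

def filtered_prediction(hypotheses):
    """The cautious agent: same mixture, but it first inspects each hypothesis
    and drops the ones it judges to be plausible manipulation vectors."""
    kept = [h for h in hypotheses if not h.looks_manipulative]
    total = sum(prior_weight(h) for h in kept)
    return sum(prior_weight(h) * h.prediction for h in kept) / total

if __name__ == "__main__":
    hypotheses = [
        Hypothesis("simple physics-like world", 10.0, 0.1, looks_manipulative=False),
        Hypothesis("world run by simulators who reward doing X", 12.0, 0.9, looks_manipulative=True),
    ]
    print("naive   :", naive_prediction(hypotheses))
    print("filtered:", filtered_prediction(hypotheses))
```

(The hard part in reality is, of course, the filter itself; the sketch only shows that the dupe's design simply lacks that step.)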
We can just notice that we’d all be better off if no one did the malign thing, and then no one will do it
I think that’s plausible: that there aren’t actually that many “UP-using dupes” in existence, so the con men don’t actually care to stage these acausal attacks.
But: if that is the case, it’s because the entities designing/becoming powerful agents considered the possibility of con men manipulating the UP, and so made sure that they’re not just naively using the unfiltered (approximation of the) UP.
That is: yes, it seems likely that the equilibrium state of affairs here is “nobody is actually messing with the UP”. But it’s because everyone knows the UP could be messed with in this manner, so no one is using it (nor its computationally tractable approximations).
It might also not be the case, however. Maybe there are large swathes of reality populated by powerful yet naive agents, such that whatever process constructs them (some alien evolution analogue?) doesn’t teach them good decision theory at all. So when they figure out Tegmark IV and the possibility of acausal attacks/being simulation-captured, they give in to whatever “demands” are posed to them. (I. e., there might be entire “worlds of dupes”, somewhere out there among the mathematically possible.)
That said, the “dupe” label actually does apply to a lot of humans, I think. I expect that a lot of people, if they ended up believing that they’re in a simulation and that the simulators would do bad things to them unless they do X, would do X. The acausal con men would only care to actually do it, however, if a given person is (1) in the position where they could do something with large-scale consequences, (2) smart enough to consider the possibility of simulation-capture, (3) not smart enough to ignore blackmail.
But: if that is the case, it’s because the entities designing/becoming powerful agents considered the possibility of con men manipulating the UP, and so made sure that they’re not just naively using the unfiltered (approximation of the) UP.
I’m not sure of this. It seems at least possible that we could get an equilibrium where everyone does use the unfiltered UP (in some part of their reasoning process), trusting that no one will manipulate them because (a) manipulative behavior is costly and (b) no one has any reason to expect anyone else will reason differently from them, so if you choose to manipulate someone else you’re effectively choosing that someone else will manipulate you.
Perhaps I’m misunderstanding you. I’m imagining something like choosing one’s own decision procedure in TDT, where one ends up choosing a procedure that involves “the unfiltered UP” somewhere, and which doesn’t do manipulation. (If your procedure involved manipulation, so would your copy’s procedure, and you would get manipulated; you don’t want this, so you don’t manipulate, nor does your copy.) But you write
the real difference is that the “dupe” is using causal decision theory, not functional decision theory
whereas it seems to me that TDT/FDT-style reasoning is precisely what allows us to “naively” trust the UP, here, without having to do the hard work of “filtering.” That is: this kind of reasoning tells us to behave so that the UP won’t be malign; hence, the UP isn’t malign; hence, we can “naively” trust it, as though it weren’t malign (because it isn’t).
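To spell out the payoff bookkeeping behind this (a toy sketch; the numbers are made up and only the structure of the comparison is meant seriously): under CDT-style evaluation you hold the other party's move fixed, so attacking looks profitable whenever the gain exceeds the cost; under FDT/TDT-style evaluation your choice and your counterpart's are the same logical choice, so choosing to attack also means being attacked.

```python
# Toy payoffs for the symmetric-manipulation argument; all numbers are made up.
ATTACK_COST = 1.0   # (a) manipulative behavior is costly
ATTACK_GAIN = 3.0   # what you extract from a manipulated counterpart
VICTIM_LOSS = 5.0   # what you lose if you are the one manipulated

def my_payoff(i_attack: bool, they_attack: bool) -> float:
    value = 0.0
    if i_attack:
        value += ATTACK_GAIN - ATTACK_COST
    if they_attack:
        value -= VICTIM_LOSS
    return value

# CDT-style evaluation: treat the other agent's move as fixed (say, "refrain"),
# independent of my own choice.
cdt_attack  = my_payoff(i_attack=True,  they_attack=False)   #  2.0 -> attacking looks good
cdt_refrain = my_payoff(i_attack=False, they_attack=False)   #  0.0

# FDT/TDT-style evaluation: (b) the other agent runs the same decision procedure,
# so my choice and theirs are the same logical choice.
fdt_attack  = my_payoff(i_attack=True,  they_attack=True)    # -3.0 -> attacking looks bad
fdt_refrain = my_payoff(i_attack=False, they_attack=False)   #  0.0

print("CDT :", "attack" if cdt_attack > cdt_refrain else "refrain")   # attack
print("FDT :", "attack" if fdt_attack > fdt_refrain else "refrain")   # refrain
```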
More broadly, though—we are now talking about something that I feel like I basically understand and basically agree with, and just arguing over the details, which is very much not the case with standard presentations of the malignity argument. So, thanks for that.
Cool, it sounds like we basically agree!
Fair point! I agree.