Reason alone is simply insufficient to determine what your values are (though it weeds out inconsistencies and thus narrows the set of possible contenders).
I was already well aware of that, but spending a lot of time thinking about Very Big Worlds (e.g. Tegmark’s multiverses, even if no more than one of them is real) made even my admittedly axiomatic consequentialism start to seem inconsistent (and, worse, inconsequential): if every possible observer is having every possible experience, and any causal influence I exert on other beings is canceled out by other copies of them having opposite experiences, then it would seem that the only thing I can really do is optimize my own experiences for my own sake.
I’m not yet confident enough in any of this to say that I’ve “taken the red pill”, but since, to be honest, that originally felt like something I really, really didn’t want to believe, I’ve been trying pretty hard to leave a line of retreat about it, and the result was basically this. Even if I were convinced that every possible experience were being experienced, I would still care about people within my sphere of causal influence: my current self is not part of most realities and cannot affect them, but it may as well have a positive effect on the realities it is part of. And if I’m to continue acting like a consequentialist, then I will have to value beings that already exist, but not intrinsically value the creation of new beings, and not act like utility is a single universally-distributed quantity, in order to avoid certain absurd results. Pretty much how I already felt.
And even if I’m really only doing this because it feels good to me… well, then I’d still do it.
Consequentialism is certainly threatened by big worlds. The fix of trying to help only those within your sphere of influence is more like a sort of deontological “desire to be a consequentialist even though it’s impossible” that just won’t go away. It is an ugly hack that ought not to work.
One concrete problem is that we might be able to acausally influence other parts of the multiverse.
Could you elaborate on that?
We might, for example, influence other causally disconnected places by threatening them with punishment simulations. Or they us.
How? And how would we know if our threats were effective?
Details, details. I don’t know whether it is feasible, but the point is that this idea of saving consequentialism by defining a limited sphere of consequence and hoping that it is finite is brittle: facts on the ground could overtake it.
Ah, I see.
Having a ‘limited sphere of consequence’ is actually one of the core ideas of deontology (though of course they don’t put it quite like that).
Speaking for myself, although it does seem like an ugly hack, I can’t see any other way of escaping the paranoia of “Pascal’s Mugging”.
Well, one way is to have a bounded utility function. Then Pascal’s Mugging is not a problem.
Certainly, but how is a bounded utility function anything other than a way of sneaking in a ‘delimited sphere of consequence’, except that perhaps the ‘sphere’ fades out gradually, like a Gaussian rather than a uniform distribution?
To be clear, we should disentangle the agent’s own utility function from what the agent thinks is ethical. If the agent is prepared to throw ethics to the wind then it’s impervious to Pascal’s Mugging. If the agent is a consequentialist who sees ethics as optimization of “the universe’s utility function” then Pascal’s Mugging becomes a problem, but yes, taking the universe to have a bounded utility function solves the problem. But now let’s see what follows from this. Either:
(1) We have to ‘weight’ people ‘close to us’ much more highly than people far away when calculating which of our actions are ‘right’. So in effect, we end up being deontologists who say we have special obligations towards friends and family that we don’t have towards strangers. (Delimited sphere of consequence.)
(2) If we still try to account for all people equally regardless of their proximity to us, and still have a bounded utility function, then upon learning that the universe is Vast (with, say, Graham’s number of people in it) we infer that the universe is ‘morally insensitive’ to the deaths of huge numbers of people, whoever they are. Suppose we escape Pascal’s Mugging by deciding that, in such a vast universe, a 1/N chance of M people dying is something we can live with (for some M >> N >> 1). Then if we knew for sure that the universe was Vast, we ought to be able to ‘live with’ a certainty of M/N people dying. And if we’re denying that it makes a moral difference how close these people are to us, then these M/N people may as well include, say, the citizens of one of Earth’s continents. So if a mad tyrant gives you perfect assurance that they will nuke South America unless you give them your Mars bar (and perfect assurance that they won’t if you do), then apparently you should refuse to hand it over, on pain of inconsistency with your response to Pascal’s Mugging. (A toy calculation after this list makes the numbers concrete.)
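Here is that toy calculation; the population size, the saturating shape of the utility function, and every number in it are my own illustrative choices, not anything fixed by the argument:

```python
import math

# Toy bounded disutility over deaths: it saturates as the death toll approaches
# the size of a (hypothetical) Vast population, and is nearly linear far below
# that scale.
POP = 1e40      # stand-in for a Vast population (Graham's number won't fit in a float)
BOUND = 1.0     # disutility is bounded in [-BOUND, 0]

def utility(deaths):
    # expm1 keeps tiny values accurate: exp(-d/POP) - 1 is about -d/POP for d << POP
    return BOUND * math.expm1(-deaths / POP)

N = 1e6         # one-in-a-million chance
M = 1e12        # a trillion deaths threatened

eu_mugging = (1 / N) * utility(M) + (1 - 1 / N) * utility(0)   # the gamble we decided to live with
eu_certain = utility(M / N)                                    # a certainty of M/N deaths

print(eu_mugging, eu_certain)   # both come out around -1e-34
```

On these toy numbers the two prospects are effectively indistinguishable to the bounded agent, which is exactly the ‘moral insensitivity’ described above.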
To answer (2): your utility function can have more than one reason to value people not dying. For example, you could have one component of utility for the total number of people alive, and another for the fraction of people who lead good lives. Since having their lives terminated decreases their quality of life, killing those people would make a difference to the average quality of life across the multiverse, if the multiverse is finite.
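For concreteness, one toy shape such a utility function could take (the saturating headcount term, the weights, and all the names are illustrative assumptions on my part, not a worked-out proposal):

```python
def utility(total_alive, good_lives, w_headcount=1.0, w_quality=1.0, scale=1e10):
    # One component rewards sheer headcount, saturating so that it stays bounded...
    headcount_term = w_headcount * total_alive / (total_alive + scale)
    # ...and one tracks the fraction of existing people who lead good lives, so
    # deaths that cut short good lives drag the average down even when the
    # totals are astronomical.
    quality_term = w_quality * (good_lives / total_alive) if total_alive else 0.0
    return headcount_term + quality_term
```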
If the multiverse is infinite, then something like “caring about people close to you” is required for consequentialism to work.
Actually I think I’ll take that back. It depends on exactly how things play out.
Still not sure how that makes sense. The only thing I can think of that could work is us simulating another reality and having someone in that reality happen to say “Hey, whoever’s simulating this reality, you’d better do x or we’ll simulate your reality and torture all of you!”, followed by us believing them, not realizing that it doesn’t work that way. If the Level IV Multiverse hypothesis is correct, then the elements of this multiverse are unsupervised universes; there’s no way for people in different realities to threaten each other if they mutually understand that. If you’re simulating a universe, and you set up the software such that you can make changes in it, then every time you make a change, you’re just switching to simulating a different structure. You can push the “torture” button, and you’ll see your simulated people getting tortured, but that version of the reality would have existed (in the same subjunctive way as all the others) anyway, and the original non-torture reality also goes on subjunctively existing.
You don’t grok UDT control. You can control the behavior of fixed programs, programs that completely determine their own behavior.
Take a “universal log program”, for example: it enumerates all programs, for each program enumerates all computational steps, on all inputs, and writes all that down on an output tape. This program is very simple, you can easily give a formal specification for it. It doesn’t take any inputs, it just computes the output tape. And yet, the output of this program is controlled by what the mathematician ate for breakfast, because the structure of that decision is described by one of the programs logged by the universal log program.
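In rough code, the dovetailing structure of such a log program looks something like this; `toy_step` is a placeholder interpreter I made up, where a genuine version would step a universal machine instead:

```python
def toy_step(program, state):
    # One step of a made-up machine family (purely a placeholder for a real interpreter).
    return (state * program + 1) % 1000

def universal_log(stages):
    log = []       # the "output tape"
    states = {}    # (program index, input) -> current configuration
    for k in range(1, stages + 1):
        # Stage k: advance every program with index <= k, on every input <= k,
        # by one more step, appending each new configuration to the log.
        for program in range(1, k + 1):
            for inp in range(1, k + 1):
                state = toy_step(program, states.get((program, inp), inp))
                states[(program, inp)] = state
                log.append((program, inp, state))
    return log

print(universal_log(3)[:5])   # the program takes no input; it just keeps extending one log
```

Nothing in the schedule refers to any agent, yet whatever the enumerated programs compute ends up written onto the tape.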
Take another look at the UDT post, keeping in mind that the world-programs completely determine what the world is: they don’t take the agent as a parameter, and world-histories are alternative behaviors of those fixed programs.
OK, so you’re saying that A, a human in ‘the real world’, acausally (or ambiently if you prefer) controls part of the output tape of this program P that simulates all other programs.
I think I understand what you mean by this: Even though the real world and this program P are causally disconnected, the ‘output log’ of each depends on the ‘Platonic’ result of a common computation—in this case the computation where A’s brain selects a choice of breakfast. Or in other words, some of the uncertainty we have about both the real world and P derives from the logical uncertainty about the result of that ‘Platonic’ computation.
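A toy way to picture that (entirely schematic, with made-up names):

```python
def breakfast_choice():
    # A fixed, deterministic computation, standing in for A's brain deciding on breakfast.
    return "oatmeal"

def real_world_history():
    # "The real world": the decision shows up directly in its history.
    return ["A ate " + breakfast_choice()]

def program_P_output():
    # P never communicates with the real world, but in the course of logging
    # every program it also runs the same breakfast computation.
    return ["...", "breakfast_choice() -> " + breakfast_choice(), "..."]

# Until you have worked out what breakfast_choice() returns, you are uncertain
# about a line of *both* outputs; settling that one logical fact settles both.
print(real_world_history(), program_P_output())
```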
Now if you identify “yourself” with the abstract computation then you can say that “you” are controlling both the world and P. But then aren’t you an ‘inhabitant’ of P just as much as you’re an inhabitant of the world? On the other hand, if you specifically identify “yourself” with a particular chunk of “the real world” then it seems a bit misleading to say that “you” ambiently control P, given that “you” are yourself ambiently controlled by the abstract computation which is controlling P.
Perhaps this is only a ‘semantic quibble’ but in any case I can’t see how ambient control gets us any nearer to being able to say that we can threaten ‘parallel worlds’ causally disjoint from “the real world”, or receive responses or threats in return.
Sure, you can read it this way, but keep in mind that P is very simple, doesn’t have you as an explicit “part”, and you’d need to work hard to find the way in which you control its output (to find a dependence). This dependence doesn’t have to be found in order to compute P; it is something external, the way you interpret P.
I agree (maybe, in the opposite direction) that causal control can be seen as an instance of the same principle, and so the sense in which you control “your own” world is no different from the sense in which you control the causally unconnected worlds. The difference is syntactic: the representation of “your own world” specifies you as a part explicitly, while to “find yourself” in a “causally unconnected world”, you need to do a fair bit of inference.
Note that since the program P is so simple, the results of abstract analysis of its behavior can be used to make decisions, by anyone. These decisions will be controlled by whoever wants them controlled, and logical uncertainty often won’t let you rule out the possibility that a given program X controls a conclusion Y made about the universal log program P. This is one way to establish mutual dependence between most “causally unconnected” worlds: have them analyze P.
When a world program isn’t presented as explicitly depending on an agent (as it is in the case of causal control), you can have logical uncertainty about whether a given agent controls a given world, which makes it necessary to consider the possibility of more agents potentially controlling more worlds.
You can still change the measure of different continuations of a given universe.