I’m still tentatively convinced that existence is what mathematical possibility feels like from the inside, and that creating an identical non-interacting copy of oneself is (morally and metaphysically) identical to doing nothing. Considering that, plus the difficulty* of estimating which of a potentially infinite number of worlds we’re in, including many in which the structure of your brain is instantiated but everything you observe is hallucinated or “scripted” (similar to Boltzmann brains), I’m beginning to worry that a fully fact-based consequentialism would degenerate into emotivism, or at least that it must incorporate a significant emotivist component in determining who and what is terminally valued.
* E. T. Jaynes says we can’t do inference in infinite sets except those that are defined as well-behaved limits of finite sets, but if we’re living in an infinite set, then there has to be some right answer, and some best method of approximating it. I have no idea what that method is.
So. My moral intuition says that creating an identical non-interacting copy of me, with no need for or possibility of it serving as a backup, is valued at 0. As for consequentialism… if this were valued even slightly, I’d get one of those quantum random number generator dongles, have it generate my desktop wallpaper every few seconds (thereby constantly creating zillions of new slightly-different versions of my brain in their own Everett branches), and start raking in utilons. Considering that this seems not just emotionally neutral but useless to me, my consequentialism seems to agree with my emotivist intuition.
If this is in some sense true, then we have an infinite ethics problem of awesome magnitude.
Though to be honest, I am having trouble seeing what the difference is between this statement being true and being false.
My argument for that is essentially structured as a dissolution of “existence”, an answer to the question “Why do I think I exist?” instead of “Why do I exist?”. Whatever facts are related to one’s feeling of existence — all the neurological processes that lead to one’s lips moving and saying “I think therefore I am”, and the physical processes underlying all of that — would still be true as subjunctive facts about a hypothetical mathematical structure. A brain doesn’t have some special existence-detector that goes off if it’s in the “real” universe; rather, everything that causes us to think we exist would be just as true about a subjunctive.
This seems like a genuinely satisfying dissolution to me — “Why does anything exist?” honestly doesn’t feel intractably mysterious to me anymore — but even ignoring that argument and starting only with Occam’s Razor, the Level IV Multiverse is much more probable than this particular universe. Even so, specific rational evidence for it would be nice; I’m still working on figuring out what would qualify as such.
There may be some. First, it would anthropically explain why this universe’s laws and constants appear to be well-suited to complex structures, including observers. There doesn’t have to be some single The Universe that just happens to be fine-tuned for us; instead, tautologically, we only find ourselves in universes in which we can exist. Similarly, according to Tegmark, physical geometries with three non-compactified spatial dimensions and one time dimension are uniquely well-suited to observers, so we find ourselves in a structure with those qualities.
Anyway, yeah, I think there are some good reasons to believe (or at least investigate) it, plus some things that still confuse me (which I’ve mentioned elsewhere in this thread and in the last section of my post about it), including the aforementioned “infinite ethics problem of awesome magnitude”.
This seems to lead to madness, unless you have some kind of measure over possible worlds. Without a measure, you become incapable of making any decisions, because the past ceases to be predictive of the future (all possible continuations exist, and each action has all possible consequences).
Measure doesn’t help if each action has all possible consequences: you’d just end up with the consequences of all actions having the same measure! Measure helps with managing (reasoning about) infinite collections of consequences, but there still must be non-trivial and “mathematically crisp” dependence between actions and consequences.
No, it could help, because the measure could be attached to world-histories, so there is a measure for “(drop ball) leads to (ball falls downwards)”, which is effectively the kind of thing our laws of physics do for us.
There is also a set of world-histories satisfying (drop ball) which is distinct from the set of world-histories satisfying NOT(drop ball). Of course, by throwing this piece of the world model out the window and allowing only measures to compensate for its absence, you do make measures indispensable. The problem with what you were saying is the connotation that measure is somehow the magical world-modeling juice, which it isn’t. (That is, I don’t necessarily disagree, but I don’t want this particular solution of using measure to be seen as directly answering the question of predictability, since it can be taken as a curiosity-stopping mysterious answer by someone insufficiently careful.)
I don’t see what the problem is with using measures over world histories as a solution to the problem of predictability.
If certain histories have relatively very high measure, then you can use that fact to derive useful predictions about the future from a knowledge of the present.
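As a toy illustration of that point (the histories and numbers below are invented for the example, not anything from the thread): if the measure is attached to whole world-histories, prediction is just conditioning on the part of the history you have already observed.

```python
# Toy sketch: a measure over whole world-histories, used for prediction.
# The histories and numbers below are invented purely for illustration.

# Each world-history is a (present, future) pair with an unnormalised measure.
histories = {
    ("drop ball", "ball falls"):           0.98,
    ("drop ball", "ball floats away"):     0.01,
    ("drop ball", "ball turns into dove"): 0.01,
    ("hold ball", "ball stays put"):       0.99,
    ("hold ball", "ball vanishes"):        0.01,
}

def predict(present):
    """Distribution over futures, conditional on the observed present."""
    relevant = {future: m for (p, future), m in histories.items() if p == present}
    total = sum(relevant.values())
    return {future: m / total for future, m in relevant.items()}

print(predict("drop ball"))  # "ball falls" carries ~0.98 of the conditional measure
```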
It’s not a generally valid solution (there are solutions that don’t use measures), though it’s a great solution for most purposes. It’s just that using measures is not a necessary condition for consequentialist decision-making, and I found that thinking in terms of measures is misleading for the purposes of understanding the nature of control.
You said:

Without a measure, you become incapable of making any decisions, because the past ceases to be predictive of the future
Ah, I see, sufficient but not necessary.
But smaller ensembles could also explain this, such as chaotic inflation and the string landscape.
I guess the difference that is relevant here is that if it is false, then a “real” person generates subjective experience, but a possible person (or a possible person execution-history) doesn’t.
“Infinite ethics” is surely a non-problem for individuals—since an individual agent can only act locally. Things that are far away are outside the agent’s light cone.
This is an all-possible-worlds-exist philosophy. There are an infinite number of worlds where there are entities which are subjectively identical to you and cognitively similar enough that they will make the same decision you make, for the same reasons. When you make a choice, all those duplicates make the same choice, and there are consequences in an infinity of worlds. So there’s a fuzzy neoplatonic idea according to which you identify yourself with the whole equivalence class of subjective duplicates to which you belong.
But I believe there’s an illusion here and for every individual, the situation described actually reduces to an individual making a decision and not knowing which possible world they’re in. There is no sense in which the decision by any one individual actually causes decisions in other worlds. I postulate that there is no decision-theoretic advantage or moral imperative to indulging the neoplatonic perspective, and if you try to extract practical implications from it, you won’t be able to improve on the uncertain-single-world approach.
Re: “There are an infinite number of worlds”
By hypothesis. There is no evidence for any infinities in nature. Agents need not bother with infinity when making decisions or deciding what the right thing to do is. As, I think, you go on to say.
I agree. I was paraphrasing what ata and Roko were talking about. I think it’s a hypothesis worth considering. There may be a level of enlightenment beyond which one sees that the hypothesis is definitely true, definitely false, definitely undecidable, or definitely irrelevant to decision-making, but I don’t know any of that yet.
I think, again, that we don’t actually know any of that yet. Epistemically, there would appear to be infinitely many possibilities. It may be that a rational agent does need to acknowledge and deal with this fact somehow. For example, maximizing utility in this situation may require infinite sums or integrals of some form (the expected utility of an action being the sum, across all possible worlds, of its expected utility in each such world times the world’s a priori probability). Experience with halting probabilities suggests that such sums may be uncomputable, even supposing you can rationally decide on a model of possibility space and on a prior, and the best you can do may be some finite approximation. But ideally one would want to show that such finite methods really do approximate the unattainable infinite, and in this sense the agent would need to “bother with infinity”, in order to justify the rationality of its procedures.
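To make the sum being described explicit (my notation; the comment itself leaves it in words):

$$EU(a) \;=\; \sum_{w \in \mathcal{W}} P(w)\, U_w(a), \qquad EU_k(a) \;=\; \sum_{i=1}^{k} P(w_i)\, U_{w_i}(a),$$

where $\mathcal{W}$ is the (possibly infinite) set of possible worlds, $P(w)$ is the prior probability of world $w$, and $U_w(a)$ is the expected utility of action $a$ within $w$. The worry above is whether one can ever show that the finite approximation $EU_k$ converges to $EU$ as $k \to \infty$, given that the full sum may be uncomputable.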
As for evidence of infinities within this world, observationally we can only see a finite distance in space and time, but if the rationally preferred model of the world contains infinities, then there is such evidence. I see this as primarily a quantum gravity question and so it’s in the process of being answered (by the ongoing, mostly deductive examination of the various available models). If it turns out, let us say, that gravity and quantum mechanics imply string theory, and string theory implies eternal inflation, then you would have a temporal infinity implied by the finite physical evidence.
There’s no temporal infinity without spatial infinity (instead you typically get eternal return). There’s incredibly weak evidence for spatial infinity—since we can only see the nearest 13 billion light years—and that’s practically nothing compared to infinity.
The situation is that we don’t know with much certainty whether the world is finite or infinite. However, if an ethical system suggests people behave very differently here and now depending on the outcome of such abstract metaphysics, I think that ethical system is probably screwed.
That is something the MP’s preceding sentence seems to indicate.
If you are feeling this, then you are waking up to moral antirealism. Reason alone is simply insufficient to determine what your values are (though it weeds out inconsistencies and thus narrows the set of possible contenders). Looks like you’ve taken the red pill.
I was already well aware of that, but spending a lot of time thinking about Very Big Worlds (e.g. Tegmark’s multiverses, even if no more than one of them is real) made even my already admittedly axiomatic consequentialism start seeming inconsistent (and, worse, inconsequential) — that if every possible observer is having every possible experience, and any causal influence I exert on other beings is canceled out by other copies of them having opposite experiences, then it would seem that the only thing I can really do is optimize my own experiences for my own sake.
I’m not yet confident enough in any of this to say that I’ve “taken the red pill”, but since, to be honest, that originally felt like something I really really didn’t want to believe, I’ve been trying pretty hard to leave a line of retreat about it, and the result was basically this. Even if I were convinced that every possible experience were being experienced, I would still care about people within my sphere of causal influence — my current self is not part of most realities and cannot affect them, but it may as well have a positive effect on the realities it is part of. And if I’m to continue acting like a consequentialist, then I will have to value beings that already exist, but not intrinsically value the creation of new beings, and not act like utility is a single universally-distributed quantity, in order to avoid certain absurd results. Pretty much how I already felt.
And even if I’m really only doing this because it feels good to me… well, then I’d still do it.
Consequentialism is certainly threatened by big worlds. The fix of trying to help only those within your sphere of influence is more like a sort of deontological “desire to be a consequentialist even though it’s impossible” that just won’t go away. It is an ugly hack that ought not to work.
One concrete problem is that we might be able to acausally influence other parts of the multiverse.
Could you elaborate on that?
We might, for example, influence other causally disconnected places by threatening them with punishment simulations. Or they us.
How? And how would we know if our threats were effective?
Details, details. I don’t know whether it is feasible, but the point is that this idea of saving consequentialism by defining a limited sphere of consequence and hoping that it is finite is brittle: facts on the ground could overtake it.
Ah, I see.
Having a ‘limited sphere of consequence’ is actually one of the core ideas of deontology (though of course they don’t put it quite like that).
Speaking for myself, although it does seem like an ugly hack, I can’t see any other way of escaping the paranoia of “Pascal’s Mugging”.
Well, one way is to have a bounded utility function. Then Pascal Mugging is not a problem.
Certainly, but how is a bounded utility function anything other than a way of sneaking in a ‘delimited sphere of consequence’, except that perhaps the ‘sphere’ fades out gradually, like a Gaussian rather than a uniform distribution?
To be clear, we should disentangle the agent’s own utility function from what the agent thinks is ethical. If the agent is prepared to throw ethics to the wind then it’s impervious to Pascal’s Mugging. If the agent is a consequentialist who sees ethics as optimization of “the universe’s utility function” then Pascal’s Mugging becomes a problem, but yes, taking the universe to have a bounded utility function solves the problem. But now let’s see what follows from this. Either:
1. We have to ‘weight’ people ‘close to us’ much more highly than people far away when calculating which of our actions are ‘right’. So in effect, we end up being deontologists who say we have special obligations towards friends and family that we don’t have towards strangers. (Delimited sphere of consequence.)
2. If we still try to account for all people equally regardless of their proximity to us, and still have a bounded utility function, then upon learning that the universe is Vast (with, say, Graham’s number of people in it) we infer that the universe is ‘morally insensitive’ to the deaths of huge numbers of people, whoever they are: Suppose we escape Pascal’s Mugging by deciding that, in such a vast universe, a 1/N chance of M people dying is something we can live with (for some M >> N >> 1). Then if we knew for sure that the universe was Vast, we ought to be able to ‘live with’ a certainty of M/N people dying. And if we’re denying that it makes a moral difference how close these people are to us, then these M/N people may as well include, say, the citizens of one of Earth’s continents. So then, if a mad tyrant gives you perfect assurance that they will nuke South America unless you give them your Mars bar (and perfect assurance that they won’t if you do), then apparently you should refuse to hand it over (on pain of inconsistency with your response to Pascal’s Mugging).
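To make (2) concrete, here is a toy numerical sketch (the saturating utility function and all the numbers are my own illustrative choices, not anything proposed in the thread): a bounded utility over the universe’s total population barely registers the mugger’s threat, but for the same reason it barely registers the certain loss of a continent.

```python
# Toy illustration of (2): a bounded utility function blocks Pascal's Mugging
# but is also nearly indifferent to a certain loss of ~400 million people,
# once the universe is assumed to be Vast. All numbers are made up.

from fractions import Fraction as F  # exact arithmetic; floats would lose precision

U_MAX = F(1)          # utility is bounded in [0, U_MAX]
K = F(10) ** 9        # half-saturation constant (arbitrary choice)

def utility(population):
    """Bounded utility: approaches U_MAX as population grows without limit."""
    return U_MAX * population / (population + K)

VAST = F(10) ** 40            # stand-in for a Vast universe's population
CONTINENT = F(4) * 10 ** 8    # roughly one continent's worth of people
M = F(10) ** 30               # people the mugger threatens to kill
N = F(10) ** 12               # the threat is assigned probability 1/N

# Expected utility loss if we take the mugger's threat literally:
mugging_loss = (F(1) / N) * (utility(VAST) - utility(VAST - M))

# Utility loss from a *certain* loss of one continent:
tyrant_loss = utility(VAST) - utility(VAST - CONTINENT)

print(float(mugging_loss))  # ~1e-53: the mugger is safely ignored
print(float(tyrant_loss))   # ~4e-63: the "moral insensitivity" of point (2)
```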
To answer (2), your utility function can have more than one reason to value people not dying. For example, You could have one component of utility for the total number of people alive, and another for the fraction of people who lead good lives. Since having their lives terminated decreases the quality of life, killing those people would make a difference to the average quality of life across the multiverse, if the multiverse is finite.
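One way to write that suggestion down (my formalisation, not the commenter’s):

$$U \;=\; f(N_{\text{alive}}) \;+\; g\!\left(\frac{N_{\text{good}}}{N_{\text{alive}}}\right),$$

with $f$ and $g$ both bounded and increasing. The first term cares about how many people are alive, the second about the fraction of them leading good lives; a killing lowers the second term even in regimes where the first has effectively saturated.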
If the multiverse is infinite, then something like “caring about people close to you” is required for consequentialism to work.
Actually I think I’ll take that back. It depends on exactly how things play out.
Still not sure how that makes sense. The only thing I can think of that could work is us simulating another reality and having someone in that reality happen to say “Hey, whoever’s simulating this reality, you’d better do x or we’ll simulate your reality and torture all of you!”, followed by us believing them, not realizing that it doesn’t work that way. If the Level IV Multiverse hypothesis is correct, then the elements of this multiverse are unsupervised universes; there’s no way for people in different realities to threaten each other if they mutually understand that. If you’re simulating a universe, and you set up the software such that you can make changes in it, then every time you make a change, you’re just switching to simulating a different structure. You can push the “torture” button, and you’ll see your simulated people getting tortured, but that version of the reality would have existed (in the same subjunctive way as all the others) anyway, and the original non-torture reality also goes on subjunctively existing.
You don’t grok UDT control. You can control the behavior of fixed programs, programs that completely determine their own behavior.
Take a “universal log program”, for example: it enumerates all programs, for each program enumerates all computational steps, on all inputs, and writes all that down on an output tape. This program is very simple, you can easily give a formal specification for it. It doesn’t take any inputs, it just computes the output tape. And yet, the output of this program is controlled by what the mathematician ate for breakfast, because the structure of that decision is described by one of the programs logged by the universal log program.
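A minimal dovetailing sketch of the construction being described (the toy “programs” and every name below are my own stand-ins; an actual universal log program would enumerate every program of some fixed formal language rather than this hand-written family):

```python
# Dovetailing sketch of a "universal log program": interleave the execution of
# an unbounded family of programs and write every observed step to a tape.
# Toy stand-in only: real programs would come from enumerating all programs.

from itertools import count

def make_program(i):
    """Toy 'program' number i, represented as a generator of its steps."""
    def run():
        state = i
        for step in count():
            state = (state * state + 1) % 1_000_003  # arbitrary update rule
            yield (i, step, state)                   # (program, step, state)
    return run()

def universal_log(diagonals=5):
    """At diagonal d, introduce program d and advance programs 0..d by one step,
    appending each observed step to the output tape."""
    tape, programs = [], {}
    for d in range(diagonals):
        programs[d] = make_program(d)
        for k in range(d + 1):
            tape.append(next(programs[k]))
    return tape

for entry in universal_log():
    print(entry)
```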
Take another look at the UDT post, keeping in mind that the world-programs completely determine what the world is, they don’t take the agent as a parameter, and world-histories are alternative behaviors for those fixed programs.
OK, so you’re saying that A, a human in ‘the real world’, acausally (or ambiently if you prefer) controls part of the output tape of this program P that simulates all other programs.
I think I understand what you mean by this: Even though the real world and this program P are causally disconnected, the ‘output log’ of each depends on the ‘Platonic’ result of a common computation—in this case the computation where A’s brain selects a choice of breakfast. Or in other words, some of the uncertainty we have about both the real world and P derives from the logical uncertainty about the result of that ‘Platonic’ computation.
Now if you identify “yourself” with the abstract computation then you can say that “you” are controlling both the world and P. But then aren’t you an ‘inhabitant’ of P just as much as you’re an inhabitant of the world? On the other hand, if you specifically identify “yourself” with a particular chunk of “the real world” then it seems a bit misleading to say that “you” ambiently control P, given that “you” are yourself ambiently controlled by the abstract computation which is controlling P.
Perhaps this is only a ‘semantic quibble’ but in any case I can’t see how ambient control gets us any nearer to being able to say that we can threaten ‘parallel worlds’ causally disjoint from “the real world”, or receive responses or threats in return.
Sure, you can read it this way, but keep in mind that P is very simple, doesn’t have you as an explicit “part”, and you’d need to work hard to find the way in which you control its output (find a dependence). This dependence doesn’t have to be found in order to compute P; it is something external, the way you interpret P.
I agree (maybe, in the opposite direction) that causal control can be seen as an instance of the same principle, and so the sense in which you control “your own” world is no different from the sense in which you control the causally unconnected worlds. The difference is syntactic: a representation of “your own world” specifies you explicitly as a part, while to “find yourself” in a “causally unconnected world”, you need to do a fair bit of inference.
Note that since the program P is so simple, the results of abstract analysis of its behavior can be used to make decisions, by anyone. These decisions will be controlled by whoever wants them controlled, and logical uncertainty often won’t allow you to rule out the possibility that a given program X controls a conclusion Y made about the universal log program P. This is one way to establish mutual dependence between most “causally unconnected” worlds: have them analyze P.
When a world program isn’t presented as explicitly depending on an agent (as in causal control), you can have logical uncertainty about whether a given agent controls a given world, which makes it necessary to consider the possibility of more agents potentially controlling more worlds.
You can still change the measure of different continuations of a given universe.