For The People Who Are Still Alive
Max Tegmark observed that we have three independent reasons to believe we live in a Big World: a universe which is large relative to the space of possibilities. For example, on current physics, the universe appears to be spatially infinite (though I’m not clear on how strongly this is implied by the standard model).
If the universe is spatially infinite, then, on average, we should expect that no more than 10^10^29 meters away is an exact duplicate of you. If you’re looking for an exact duplicate of a Hubble volume—an object the size of our observable universe—then on average you should still only need to look 10^10^115 light-years away. (These numbers are based on a highly conservative counting of “physically possible” states, e.g. packing the whole Hubble volume with potential protons at the maximum density given by the Pauli exclusion principle, and then allowing each proton to be present or absent.)
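To see where the double exponential comes from, here is a minimal sketch of the counting, taking the ~10^115 figure for proton-sized slots per Hubble volume as an assumption from the text rather than deriving it:

```python
import math

# Assumption from the conservative counting above: roughly 10^115
# proton-sized slots in a Hubble volume, each either occupied or empty,
# for about 2^(10^115) coarse-grained states.
slots = 10 ** 115  # exact Python int; 2**slots itself would be astronomically large

# log10 of the number of states 2^slots, computed without overflow:
log10_states = slots * math.log10(2)  # ~3.01e114

# So the state count is ~10^(3e114), which on a double-log scale rounds to
# the 10^10^115 quoted above; an exact duplicate Hubble volume is then
# expected within roughly that many volumes.
print(f"states ~ 10^{log10_states:.3g}")
```

The point of the log arithmetic is just that exponentiating a merely-large number (10^115 binary degrees of freedom) produces a double-exponentially large state space, and an exact repeat is expected once per that many volumes.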
The most popular cosmological theories also call for an “inflationary” scenario in which many different universes would be eternally budding off, our own universe being only one bud. And finally there are the alternative decoherent branches of the grand quantum distribution, aka “many worlds”, whose presence is unambiguously implied by the simplest mathematics that fits our quantum experiments.
Ever since I realized that physics seems to tell us straight out that we live in a Big World, I’ve become much less focused on creating lots of people, and much more focused on ensuring the welfare of people who are already alive.
If your decision to not create a person means that person will never exist at all, then you might, indeed, be moved to create them, for their sakes. But if you’re just deciding whether or not to create a new person here, in your own Hubble volume and Everett branch, then it may make sense to have relatively lower populations within each causal volume, living higher qualities of life. It’s not like anyone will actually fail to be born on account of that decision—they’ll just be born predominantly into regions with higher standards of living.
Am I sure that this statement, that I have just emitted, actually makes sense?
Not really. It dabbles in the dark arts of anthropics, and the Dark Arts don’t get much murkier than that. Or to say it without the chaotic inversion: I am stupid with respect to anthropics.
But to apply the test of simplifiability—it seems, in some raw intuitive sense, that if the universe is large enough for everyone to exist somewhere, then we should mainly be worried about giving babies nice futures rather than trying to “ensure they get born”.
Imagine taking a survey of the whole universe. Every plausible baby gets a little checkmark in the “exists” box—everyone is born somewhere. In fact, the total population count for each baby is something-or-other, some large number that may or may not be “infinite” -
(I should mention at this point that I am an infinite set atheist, and my main hope for being able to maintain this in the face of a spatially infinite universe is to suggest that identical Hubble volumes add in the same way as any other identical configuration of particles. So in this case the universe would be exponentially large, the size of the branched decoherent distribution, but the spatial infinity would just fold into that very large but finite object. And I could still be an infinite set atheist. I am not a physicist so my fond hope may be ruled out for some reason of which I am not aware.)
- so the first question, anthropically speaking, is whether multiple realizations of the exact same physical process count as more than one person. Let’s say you’ve got an upload running on a computer. If you look inside the computer and realize that it contains triply redundant processors running in exact synchrony, is that three people or one person? How about if the processor is a flat sheet—if that sheet is twice as thick, is there twice as much person inside it? If we split the sheet and put it back together again without desynchronizing it, have we created a person and killed them?
I suppose the answer could be yes; I have confessed myself stupid about anthropics.
Still: I, as I sit here, am frantically branching into exponentially vast numbers of quantum worlds. I’ve come to terms with that. It all adds up to normality, after all.
But I don’t see myself as having a little utility counter that frantically increases at an exponential rate, just from my sitting here and splitting. The thought of splitting at a faster rate does not much appeal to me, even if such a thing could be arranged.
What I do want for myself is for the largest possible proportion of my future selves to lead eudaimonic existences, that is, to be happy. This is the “probability” of a good outcome in my expected utility maximization. I’m not concerned with having more of me—really, there are plenty of me already—but I do want most of me to be having fun.
I’m not sure whether or not there exists an imperative for moral civilizations to try to create lots of happy people so as to ensure that most babies born will be happy. But suppose that you started off with 1 baby existing in unhappy regions for every 999 babies existing in happy regions. Would it make sense for the happy regions to create ten times as many babies leading one-tenth the quality of life, so that the universe was “99.99% sorta happy and 0.01% unhappy” instead of “99.9% really happy and 0.1% unhappy”? On the face of it, I’d have to answer “No.” (Though it depends on how unhappy the unhappy regions are; and if we start off with the universe mostly unhappy, well, that’s a pretty unpleasant possibility...)
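The percentages in this hypothetical are just bookkeeping; here is a quick sketch using the made-up numbers from the paragraph above:

```python
# Hypothetical split from the text: 999 happy babies per 1 unhappy.
happy, unhappy = 999, 1
frac_unhappy_before = unhappy / (happy + unhappy)       # 0.001 -> "0.1% unhappy"

# Happy regions create ten times as many babies at one-tenth the quality:
happy_after = happy * 10
frac_unhappy_after = unhappy / (happy_after + unhappy)  # ~0.0001 -> "~0.01% unhappy"

print(f"before: {frac_unhappy_before:.2%} unhappy; after: {frac_unhappy_after:.3%} unhappy")
```

The unhappy fraction falls tenfold only because the happy population is diluted to a tenth of the quality—which is exactly the trade being questioned.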
But on the whole, it looks to me like if we decide to implement a policy of routinely killing off citizens to replace them with happier babies, or if we lower standards of living to create more people, then we aren’t giving the “gift of existence” to babies who wouldn’t otherwise have it. We’re just setting up the universe to contain the same babies, born predominantly into regions where they lead short lifespans not containing much happiness.
Once someone has been born into your Hubble volume and your Everett branch, you can’t undo that; it becomes the responsibility of your region of existence to give them a happy future. You can’t hand them back by killing them. That just makes their average lifespan shorter.
It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular.
And that’s why, when there is research to be done, I do it not just for all the future babies who will be born—but, yes, for the people who already exist in our local region, who are already our responsibility.
For the good of all of us, except the ones who are dead.
I’m completely not getting this. If all possible mind-histories are instantiated at least once, and their being instantiated at least once is all that matters, then how does anything we do matter?
If you became convinced that people had not just little checkmarks but little continuous dials representing their degree of existence (as measured by algorithmic complexity), how would that change your goals?
Also “standard model” doesn’t mean what you think it means and “unpleasant possibility” isn’t an argument.
The most important adaptation an ideology can make to improve its inclusive fitness for consumption by the human brain is to:

1. refrain from making falsifiable claims
2. convince its followers to aggressively expand

1 is accomplished by making the ideology rest on a priori claims. Everything that rests on top of that claim can be perfectly logical given the premise. Since most people don’t examine their beliefs axiomatically, few will question the premise as long as they are provided the bare minimum of comfort. 2 is accomplished by activating the “morally righteous” centers of the brain. We’re not aggressively expanding, we’re bringing democracy/communism/god/whatever to the heathens.
Having a high standard of living seems incompatible with natural selection. Just as sadness and pain lead to greater inclusive fitness in an individual, devoting more resources to expansion increases the inclusive fitness of any social system. Those who don’t expand are swallowed by those who do. It only takes one aggressively expansionist civilization per Hubble volume to wipe out all other forms of civilization.
The data you point to only seem to suggest the universe is large; how do they also suggest it “is large relative to the space of physical possibilities”? The likelihood ratio seems pretty close as far as I can see.
With steven, I don’t see how, on your account, any of your actions can in fact affect the “proportion of my future selves to lead eudaimonic existences”. If people in your past couldn’t affect the total chance of your existing, how is it that you can affect the total chance of any particular future you existing? And how can there be a differing relative chance if the total chances all stay constant?
Thanks for the Portal reference. That was great.
Steven, I call the little continuous dials the “amount of reality-fluid” to remind myself of how confused I am.
“Unpleasant possibility” isn’t an argument but I didn’t feel like going into the rather complex issues involved (probability of UnFriendly AI running ancestor simulations, how many of them, versus probability of Friendly AI, versus probability of hitting the Unhappy Valley with a near-miss FAI or a meddling-dabbler AGI trained on smiling faces, versus probability of inhuman aliens creating minds that we care about, plus going into the issues of QTI).
Nazgul, you can act swiftly to capture all resources in your immediate vicinity regardless of whether you plan to share them out among few or many individuals.
Robin, spatial infinity would definitely be large relative to the volume of physical possibilities (infinite versus finite). With many-worlds and a mangling cutoff… then not every physical possibility would be realized, but I would expect most possible babies would be. All the babies worth making could be duplicated many times over among the Everett branches of all moral civilizations, even if any given branch kept their populations low and living standards high. Does it look different to you?
Most of the concepts here are ethical. Whether some contraption has the same personal identity as you do, and whether it’s good to have that contraption copied/destroyed, is a moral question, in a case when the unnatural concept of what’s right gets extended to very strange situations. Whether we cut this question in terms of personal identity or patterns of elementary particles is a matter of cognitive algorithm used to determine the decision. It doesn’t matter whether an upload is called “the same person” as its biological preimage, it only matters whether it’s a good decision to make this change. In our environment, the only analogous decision is to whether you leave a person alive, maybe trading a life for something greater. When trying to extend a concept of personal identity itself, people make a mistake of trying to extend its instrumental value along with it, but this value breaks along with the concept when context becomes sufficiently unnatural.
Our morality evolved without taking into account the fine properties of the physical world, and so at least moral decisions drawn in a context so close to our evolutionary environment that all the classical hallucinations still hold shouldn’t require those properties of the physical world to be taken into account. The decision between more average people and fewer, happier people shouldn’t be justified in terms of many-worlds; it should be justified just as well in terms of our cognitive architecture, without breaking out from the framework of classical hallucinations.
Eliezer, our data only show that the universe looks pretty flat, not that it is exactly flat. And it could be finite and exactly flat with a non-trivial topology. On whether all babies are duplicated in MWI, it seems to depend on exactly what part of the local physical state is required to be the same.
Vladimir, many of these anthropic-sounding questions can also translate directly into “What should I expect to see happen to me, in situations where there are a billion X-potentially-mes and one Y-potentially-me?” If X is a kind of me, I should almost certainly expect to see X; if not, I should expect to see Y. I cannot quite manage to bring myself to dispense with the question “What should I expect to see happen next?” or, even worse, “Why am I seeing something so orderly rather than chaotic?” For example, saying “I only care about people in orderly situations” does not cut it as an explanation—it doesn’t seem like a question that I could answer with a utility function.
I have not been able to dissolve “the amount of reality-fluid” without also dissolving my belief that most people-weight is in ordered universes and that most of my futures are in ordered universes, without which I have no explanation for why I find myself in an ordered universe and no expectation of a future that is ordered as well.
In particular, I have not been able to dissolve reality-fluid into my utility function without concluding that, by virtue of caring only about copies of me who win the lottery, I could expect to win the lottery and actually see that as a result.
Robin, the disjunctive support in favor of a Big World is strong enough that I’m willing to call it pretty much a done deal at this point—the strongest pillar being MWI. With regards to MWI, I would suggest that the number of decoherent regions of the configuration space would be vastly larger than the space of possibilities for neurons firing or not firing.
Eliezer, I don’t think your reality fluid is the same thing as my continuous dials, which were intended as an alternative to your binary check marks. I think we can use algorithmic complexity theory to answer the question “to what degree is a structure (e.g. a mind-history) implemented in the universe” and then just make sure valuable structures are implemented to a high degree and disvaluable structures are implemented to a low degree. The reason most minds should expect to see ordered universes is because it’s much easier to specify an ordered universe and then locate a mind within it, than it is to specify a mind from scratch. If this commits me to believing funny stuff like people with arrows pointing at them are more alive than people not with arrows pointing at them, I’m inclined to say “so be it”.
and where I just said “universe” I meant a 4D thing, with the dials each referring to a 4D structure and time never entering into the picture.
I was going to make about the same objection steven makes—if you take this stuff (MWI, anthropic principle, large universes) seriously as a guide to practical, everyday ethical decision-making, it seems to lead inexorably to nihilism—no decision you make matters very much. That doesn’t sound at all desirable, so my instinct is to suspect that there is something wrong either with the physics ideas, or (more likely) with the way they are being applied. But maybe not! Maybe nihilism is valid, but then why are we bothering to be rational or to do anything at all?
Scott Aaronson’s objections might carry more weight.
mtraven, why are we “bothering to be rational or to do anything at all” (rather than being nihilists) if nihilism seems likely to be valid? Well, as long as there is a chance, say, only a .0000000000000001 chance, that nihilism is invalid, there is nothing to lose and possibly something to gain from assuming that nihilism is invalid. This refutes nihilism completely as a serious alternative.
I think basically the same is true about Yudkowsky’s fear that there are infinitely many copies of each person. Even if there is only a .0000000000000001 chance that there are only finitely many copies of each of us, we should assume that that is the case, since that is the only type of scenario where there can be anything to gain or lose, and thus the only possible type of scenario that might be a good idea to assume to be the case.
That is, given the assumption that one cannot affect infinite amounts by adding, no matter how much one adds. To this, I am an agnostic, if not an atheist. For example, adding an infinite amount A to an infinite amount A can, I think, make 2A rather than 1A. Ask yourself which you would prefer: 1) Being happy one day per year and suffer the rest of the time of each year, for an infinite number of years, or 2) The other way around? Would you really not care which of these two would happen?
You would. Note that this is the case even when you realize that a year is only finitely more than a day, meaning that each of alternatives 1 and 2 would give you infinitely much happiness and infinitely much suffering. This strongly suggests that adding an infinite amount A to an infinite amount A produces more than A. Then why wouldn’t adding a finite amount B to an infinite amount A also produce more than A? I would actually suggest that, even given classical utilitarianism, my life would not be worthless just because there is infinitely much happiness and infinitely much suffering in the world with or without me. Each person’s finite amount of happiness must be of some value regardless of the existence of infinite amounts of happiness elsewhere. I find this plausible because, were it not for precisely the individual finite beings with their finite amounts of happiness each, there would be no infinite sums of happiness in the universe. If the happiness of every single one of the universe’s infinitely many, each finitely happy, beings were worthless, the infinite sum of their happiness would have to be worthless too. And that an infinite sum of happiness would be worthless is simply too ridiculous a thought to be taken seriously—given that anything at all is to be regarded as valuable, an assumption I concluded valid in the beginning of this post.
“It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular.”
This doesn’t make sense to me.
A superintelligence could create a semi-random plausible human brain emulation de novo, and whatever this emulation was, it would be the continuation of some set of human lives.
A superintelligence could conduct simulations to explore the likely distribution of minds across the multiverse, as well as the degree to which emulations continuing their lives (in desirable fashions) would serve its altruistic goals. Vast numbers of copies could then be run accordingly, and the costs of exploratory simulation would be negligible by comparison, so there would be little advantage to continuing the lives of beings within our causal region as opposed to entities discovered in exploratory simulation.
If we’re only concerned about proportions within ‘extended-beings,’ then there’s more bang for the buck in running emulations of rare and exotic beings (fewer emulations are required to change their proportions). The mere fact that we find current people to exist suggests anthropically that they are relatively common (and thus that it’s expensive to change their proportions), so current local people would actually be neglected almost entirely by your sort of Big World average utilitarian.
Carl, that assumes QTI, i.e., no subjective conditional probability ever contains a Death event. Things do get strange then.
Eliezer: I’m not sure you’d really get many interference effects between indistinguishable Hubble volumes.

What I mean is, you’d need some event that has in its causal history stuff from two “equivalent” Hubble volumes, right?

Otherwise, well, how would any nontrivial interference effects related to the indistinguishability between multiple Hubble volumes manifest? Configuration space isn’t over the Hubble volumes but over the entirety of the universe, right?
I still see no adequate answer to the question of how you can change P(A|B) if you can’t change P(A) or P(B). If every possible mind exists somewhere, and if all that matters about a mind is that it exists somewhere, then no actions make any difference to what matters.
The idea is that you can’t change whether a mind exists but you can, possibly, change how much of it exists, or perhaps, how much of different futures it has. By multiply instantiating it? I guess so. It doesn’t seem to make much sense, but if I don’t presume something like this, I have to weight Boltzmann brains the same as myself.
I’m not trying to rest this argument on the details of the anthropics. Something more along the lines of—in a Big World, I don’t have to worry as much about creating diversity or giving possibilities a chance to exist, relative to how much I worry about average quality of life for sentients. If we create a comfortable number of diverse people with high standards of living in our own Everett branch, we can rely on other diverse people being realized elsewhere.
I have confessed my own confusion about anthropics; I do not at present have any non-paradoxical visualization of this problem in hand. Still—in a Big World, it sounds a little more okay to have fewer people locally with a higher quality of life; do you see the intuitive appeal?
We’re not talking about “few people” in any absolute sense; there’s six billion of us already. But say that, as we spread across galaxies, that number goes up to six quadrillion (10^15) instead of six nonillion (10^30), and everyone has 10^15 times the standard of living, or however that scales.
When the vast majority of orders of magnitude in the diversity of realized possibilities, 10^something orders of magnitude, come from quantum branching, isn’t it okay to just take fifteen orders of magnitude for the standard of living improvement?
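A rough sketch of this trade, under the crude and purely illustrative assumption that standard of living scales linearly with one’s per-capita share of a fixed resource pool:

```python
import math

# Fixed resource pool, enough for 6e30 people at baseline standard of living
# (an illustrative assumption, not a claim about actual cosmic resources).
resources_log10 = math.log10(6) + 30

# Future A: six quadrillion people, splitting the same pool.
pop_a_log10 = math.log10(6) + 15
qol_a_log10 = resources_log10 - pop_a_log10  # 15 orders of magnitude above baseline

# Future B: 6e30 people at baseline.
pop_b_log10 = math.log10(6) + 30
qol_b_log10 = resources_log10 - pop_b_log10  # baseline (10^0)

# Under a purely linear "total = population x QOL" accounting, the futures
# tie exactly; the argument in the text is that a Big World breaks the tie
# in favor of the higher standard of living.
assert math.isclose(pop_a_log10 + qol_a_log10, pop_b_log10 + qol_b_log10)
print(f"QOL gap between futures: 10^{qol_a_log10 - qol_b_log10:.0f}")
```

The linear accounting is exactly where total utilitarianism and the text’s position diverge: the arithmetic alone can’t break the tie, so the Big World consideration is doing the work.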
I don’t buy the idea of Everett branching for at least this reason:

Let’s say that, in an experiment, a parallel universe is created with probability 1/2. In some universes this experiment will be continued and parallel universes will be created; in some it will not.

Question: Is the parallel universe of the parallel universe our parallel universe? Sometimes not.

So we have the parallel and the semi-parallel worlds. And so on.
Eliezer, it seems you are just expressing the usual intuition against the “repugnant conclusion”: that as long as the universe has a lot more creatures than are on Earth now, having even more creatures can’t be very important relative to each one’s quality of life.
But in technical terms if you can talk about how much of a mind exists, and can promote more of one kind of mind relative to another, then you can talk about how much they all exist, and can want to promote more minds existing to a larger degree.
Well, this is morality we’re talking about, right? So in that case we should ask ourselves what we want.
Let’s say that there are already 10^10^20 people out there, and you’re suddenly blessed with a thousand times the resources. Would you rather have 10^(10^20 + 3) people in existence, or raise the standard of living by a factor of a thousand?
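For scale, a small sketch of how invisible a thousandfold increase is against a double-exponential population like the hypothetical one above:

```python
# Existing population: 10^(10^20).  A thousandfold increase takes the
# exponent from 10^20 to 10^20 + 3.
log10_pop = 10 ** 20            # exponent of the existing population (exact int)
log10_pop_after = log10_pop + 3

# Fractional change in the exponent -- i.e., in log-population:
relative_gain = (log10_pop_after - log10_pop) / log10_pop
print(f"relative change in log-population: {relative_gain:.0e}")  # 3e-20
```

Even measured on a log scale, the extra thousand people change the population figure by three parts in 10^20, while the same resources could multiply everyone’s standard of living a thousandfold.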
To look at it another way, let’s say that you recently glanced up out of the corner of your eye and saw a dust speck. I have a thousand units of resource. Would you prefer that I simulate a thousand different versions of Robin who saw the dust speck in slightly different locations in a 10 x 10 x 10 grid, or would you rather have a thousand times as much money?
For me, the value of creating new existences is linked to their diversity; as you create more people, you run out of diversity, and so it becomes more important to create the best people rather than to create new people.
Suppose that Earth were the only planet, the only branch, and the only region in all of existence. Then we might want to have mathematicians share all possible developments with each other, in order to prevent them from duplicating each other’s work and let them prove as many new theorems as possible; because if someone here doesn’t prove a theorem, no one will ever know that theorem ever.
But if there are zillions of Earths, then mathematicians may want not to peek at spoilers, saving the joy of discovering especially fun theorems for themselves—they will concentrate on individually experiencing the highest-quality theorems, rather than trying to cover as much space as possible as a group.
“So in that case we should ask ourselves what we want.”
Eliezer,
The standard problem is that people have incoherent preferences over various population scenarios. They prefer to substantially increase the population in exchange for a small change in QOL, but they reject the result of many such tradeoffs in sequence. Critical-level views, or ones that weight both QOL and total independently, all fail to resolve this.
Carl is right; this is a minefield in terms of misleading intuitions. Also, there is already a substantial philosophy literature dealing with it; best to start with what they’ve learned.
Eliezer:
I currently think a subjective point of view should be assumed only for a single decision, all the semantics preconfigured in the utility maximizer that makes the decision. No continuity of experience enters this picture; if the agent operates continuously, it’s just a sequence of utility maximizer configurations, which are to be determined from each of the decision points to hold the best beliefs, and generally any kind of cognitive features (if it’s a sequence, then certain kinds of cognitive rituals become efficient). So, there is no future “me”; future “me” is a decision point that needs to be determined according to preferences of the current decision, and it might be that there is no future “me” planned at all.

This reduces expectation to both utility and probability, as you both have uncertain knowledge about your future version, and value associated with its possible states. So, you don’t plan to see something chaotic because you don’t predict something chaotic to happen. You predict the future to be ordered, and you are configured to know the environment to be ordered. An Occam’s razor-like prior is expected to converge on a true distribution, whatever that is, and so, being a general predictor, you weight possibilities this way. You can’t actually see that result; you may only expect your future state to see that result.

If there is a point in preparing to win/lose the lottery, and you only care about winning (that is, in case you don’t win, anything you’ve done won’t matter), you’ll make preparations for the winning option regardless of your chances; that is, you’ll act as if you expect to win. If you include your thoughts, probability distribution and utility, in the domain of decisions, you might as well reconfigure yourself to believe that you’ll most certainly win.
Not a realistically plausible situation, and it changes the semantics of truth in representation, and is hence counterintuitive, but it delivers the same win.

I’m familiar with Parfit’s Repugnant Conclusion, and was actually planning to do a post on it at some point or another, because I took one look and said “Isn’t that just scope insensitivity?” But I also automatically translated the problem into Small World terms so that new people were actually being brought into existence; and, in retrospect, even then, visualized it in terms of a number of people small enough that they could have reasonably unique experiences (that is, not a thousand copies of Robin Hanson looking at a dust speck in slightly different places).
With those provisos in place, the Repugnant Conclusion is straightforwardly “repugnant” only because of scope insensitivity. By specification, each new birth is something to celebrate rather than to regret—it can’t be an existence just marginally good enough to avoid mercy-killing after being born, with the disutility of the death taken into account. It has to be an existence containing enough joys to outweigh any sorrows, so that we celebrate its birth. If each new birth is something to celebrate, then the “repugnance” of the Repugnant Conclusion is just because we’re tossing the thousandfold multiplier of a thousand celebrations out the window.
But if there are diminishing moral returns on diversity, or if people already exist and we’re allocating reality-fluid among them, then you can “shut up and calculate” and find that you shouldn’t create new low-quality people; and then the Repugnant Conclusion fails because each additional birth is not something to celebrate.
By saying “we should ask ourselves what we want”, I didn’t mean to imply that we could trust the answers. But I don’t think that my own answer leads to a preference reversal (in the absence of anthropic paradoxes, where I don’t know what to expect to see either). If I’ve missed the reversal, by all means point it out.
I’m just incredibly skeptical of attempts to do moral reasoning by invoking exotic metaphysical considerations such as anthropics, even if one is confident that ultimately one will have to do so. Human rationality has enough trouble dealing with science. It’s nice that we seem to be able to do better than that, but THIS MUCH better? REALLY? I think that there are terribly strong biases towards deciding that “it all adds up to normality” involved here, even when it’s not clear what ‘normality’ means. When one doesn’t decide that, it seems that the tendency is to decide that it all adds up to some cliche, which seems VERY unlikely. I’m also not at all sure how certain we should be of a big universe, but personally I don’t feel very confident of it. I’d say it’s the way to bet, but not at what odds it remains the way to bet. I rarely find myself in practical situations where my actions would be different if I had some particular metaphysical belief rather than another, though it does come up and have some influence on e.g. my thoughts on vegetarianism.
Good lives versus many lifeforms? Yes please.
I confessed myself confused! Really, I did! But even being confused, I’ve got to update as best I can. In a sufficiently large universe, I care more about better lives and less about creating more people. Is that really so complicated?
You might be interested in the last section of Motion Mountain, the free online physics textbook. It presents absolute limits for various measures of the universe, derived from quantum mechanics and general relativity. It appears that we live in a finite universe, though all of this stuff is pretty speculative.
I find it suspicious that people’s preferences over population, lifespan, standard of living, and diversity seem to be “kinked” near their familiar world. A world with 1% of the population, standard of living, lifespan, or diversity of their own world seems to most a terrible travesty, almost a horror, while a world with 100 times as much of one of these factors seems to them at most a small gain, hardly worth mentioning. I suspect a serious status quo bias.
Couldn’t this argument cut the other way? Maybe the only reason we think a small population with an average utility of 100 is worse than a billion people with an average utility of 99 is that we’re “kinked” to a world inhabited by billions.
Personally, when I read “The City and the Stars,” which takes place on a very sparsely populated future Earth, I agreed with the author that it was a bad thing that the local population was less ambitious and curious than the humans of the past. But I did not think it was a horrible travesty that there were so few people. I assume that for the duration of my reading I empathized with the inhabitants, and hence found their current population levels desirable. I’ve noticed the same thing when reading other books set in sparsely populated settings. I wish the inhabitants were better off, but don’t think there need to be more of them.
A typical argument against “quality”-focused population ethics is that it favors much smaller populations with higher qualities of life than we currently have, while an argument against “quantity”-focused population ethics is that it favors much larger populations with lower qualities of life than we currently have. Both of these seem counterintuitive, but which intuition should be kept and which rejected? Considering that our moral intuitions developed in small hunter-gatherer bands, I wouldn’t be surprised if the quality-focused population ethics is actually the correct one.
… huh. I started to disagree with you, and found all the examples I came up with didn’t actually seem that bad—up to and including a lone loner roaming an empty universe.
On the other hand, they do seem a bit … dull? Lacking the sort of explosive variety I picture in the Good Future.
I agree. I think the reason sparsely populated scenarios seem repugnant to us isn’t that we want to maximize total utility and they have a lower total utility level. Rather, it’s that we value things like diversity, friendship, love, and interpersonal entanglements, and we find the idea of a future where these things do not exist to be repugnant.
One argument hardcore total utilitarians use to claim people have inconsistent preferences about population ethics is that when ranking the following populations:
A) Ten billion people with ten thousand utility each, for a total utility of 100 trillion.
B) 200 trillion people with one utility each, for a total utility of 200 trillion.
C) One utility monster with 50 trillion utility.
People consider A to be better than both B and C. “Aha!” cry the total utilitarians. “So in one scenario utility is too heavily concentrated, and in another it isn’t concentrated enough! Intransitive preferences! Status quo bias!”
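The three totals in the scenario above can be checked with a few lines of arithmetic. (A minimal sketch; the population sizes and per-person utility figures are the ones from the comment above, in arbitrary units.)

```python
# Each population: (number of people, utility per person), in arbitrary units.
populations = {
    "A": (10_000_000_000, 10_000),       # ten billion people, 10,000 utility each
    "B": (200_000_000_000_000, 1),       # 200 trillion people, 1 utility each
    "C": (1, 50_000_000_000_000),        # one utility monster with 50 trillion utility
}

# Total utility is just count * per-person utility.
totals = {name: count * per for name, (count, per) in populations.items()}
print(totals)  # A: 100 trillion, B: 200 trillion, C: 50 trillion
```

So by raw totals B > A > C, which is exactly why the total utilitarian finds the common ranking A > B and A > C puzzling.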
What the hardcore total utilitarians fail to realize is that the reason people find C repugnant isn’t because utility is heavily concentrated, it’s that in order to have such high utility when it is the lone being in the universe, the utility monster must place no value at all on diversity, friendship, love, and interpersonal entanglements, and so forth. C isn’t repugnant because utility is too concentrated, or because of status quo bias, it’s repugnant because the lone inhabitant of C lacks a large portion of the gifts we give to tomorrow.
To test this theory I decided to compare populations A, B, and C again, with the stipulation that the multitudes inhabiting A and B were all hermits who never saw each other, and instead of diverse individuals they were repeated genetic duplicates of the same person. Sure enough, I found all three populations repugnant. But I might have found C to be a little less repugnant than A and B.
It’s possible I’m more of a loner than you, so I find the idea of hermits less repugnant.
On the other hand, clones tend to really mess up my intuitions regardless of the hypothetical. I’m pretty sure they should be penalized for lacking diversity, but as for the actual amount …
EDIT: also, be careful you’re not imagining these hermits not doing anything fun. Agents getting utility from things we don’t value is a surefire way to suck the worth out of a number.
Maybe I was using too strong a word when I said I found it “repugnant.”
I took your advice and tried to imagine the hermits doing things I like doing when I am alone. That was hard at first, since most of the things I like doing alone still require some other person at some point (reading a book requires an author, for instance). But imagining a hermit studying nature, interacting with plants and animals (the animals obviously have to be bugs and other nonsapient, nonsentient animals to preserve the purity of the scenario, but that’s fine with me), doing science experiments, etc., doesn’t seem repugnant at all.
But I still prefer, or am indifferent to, one utility monster hermit vs. many normal hermits, especially if the hermits are all clones living in very similar environments.
I’m not sure how much I value diversity that isn’t appreciated. I think I’d prefer a diverse group of hermits to a nondiverse group, but the fact that the hermits never meet and are unable to appreciate each other’s diversity seems to make it less valuable to me, the same way a painting locked in a room where no one will ever see it is less valuable. That may come back to my belief that value usually needs both an objective and a subjective component. On the other hand, I might value diversity terminally as well: as I said, the fact that no one appreciated the hermits’ diversity made it less valuable to me, but not valueless.
Robin,
Some brute preferences and values may be inculcated by connected social processes. Social psychology seems to point to flexible moral learning among young people (e.g. developing strong moral feelings about ritual purity as one’s culture defines it, through early exposure to adults reacting in the prescribed ways). Sexual psychology seems to show similar effects: there is a dizzying variety of learned sexual fetishes, and they tend to be culturally laden and connected to the experiences of today, but that doesn’t make them wrong. Moral education dedicated to upholding the status quo may create real preferences for that status quo (in addition to the bias you mention, not in place of it): a ‘moral miracle’ but not a physical one:
http://lesswrong.com/lw/sa/the_gift_we_give_to_tomorrow/
Be honest, how many of you finished the Portal Song at the end of this post?
Robin, I think I’m being consistent in caring about lifespan, standard of living, and diversity while not caring about population. (Diversity will look like concern for population but it will run into diminishing returns; still, if our Earth were the only civilization, then indeed there would be lots of experiences as-yet unrealized and the diversity motive would be strong. In other words, I’d consistently want a hundred times as much diversity as what we see in the immediate world around us.)
Suppose that instead of talking about people, we were just talking about music or theorems.
It seems to me that a lot of what I have to say on this subject carries right over—that if there’s very little music or math already, then we are concerned about creating more of it so that experiences don’t go unrealized. But if the space is already well-covered up to the granularity at which we care about diversity (which is less than tiny variations in note length) then we are more interested in hearing the best music, and less interested in hearing new music.
Not sure global diversity, as opposed to local diversity or just sheer quantity of experience, is the only reason I prefer there to be more (happy) people.
Since I probably don’t care about abstract existence of music, but about experiencing music, this is correct for music for the wrong reasons, namely limited attention bandwidth. Analogy seduces, but doesn’t seem to carry over...
in a Big World, I don’t have to worry as much about creating diversity or giving possibilities a chance to exist, relative to how much I worry about average quality of life for sentients.
Can’t say fairer than that.
Eliezer, given the proportion of your selves that get run over every day, have you stopped crossing the road? Leaving the house?
Or do you just make sure that you improve the standard of living for everyone in your Hubble Sphere by a certain number of utilons and call it a good day on average?
Eliezer, you know perfectly well that the theory you are suggesting here leads to circular preferences. On another occasion when this came up, I started to indicate the path that would show this, and you did not respond. If circular preferences are justified on the grounds that you are confused, then you are justifying those who said that dust specks are preferable to torture.
it seems in some raw intuitive sense, that if the universe is large enough for everyone to exist somewhere, then we should mainly be worried about giving babies nice futures rather than trying to “ensure they get born”.
That’s an interesting intuition, but one that I don’t share. I concur with Steven and Vladimir. The whole point of the classical-utilitarian “Each to count for one and none for more than one” principle is that the identity of the collection of atoms experiencing an emotion is irrelevant. What matters is increasing the number of configurations of atoms in states producing conscious happiness and reducing those producing conscious suffering—hence regular total utilitarianism. (Of course, figuring out what it means to “increase” and “reduce” things that occur infinitely many times is another matter.)
I’m finding Eliezer’s view attractive, but it does have a few counterintuitive consequences of its own. If we somehow encountered shocking new evidence that MWI, &c. is false and that we live in a small world, would weird people suddenly become much more important? Did Eliezer think (or should he have thought) that weird people are more important before coming to believe in a big world?
I think many value the quality of life of their friends and loved ones more than they value hypothetical far-future abstractions. This has to do with evolution’s impact on psychology—and doesn’t have much to do with how big the universe is.
Eliezer, whenever you start thinking about people who are completely causally unconnected with us as morally relevant, alarm bells should go off.
What’s worse, though, is that if your opinion on this is driven by a desire to justify not agreeing with the “repugnant conclusion”, it may signify problems with your morality that could annihilate humanity if you give your morality to an AI. The repugnant conclusion requires valuing the bringing into existence of hypothetical people with total utility x as much as increasing the utility of existing people by x, or as much as not annihilating people with utility x. Give that morality to a fast-takeoff AI and it will quickly replace all humans with entities with greater capacity for utility. If the AI is programmed to believe the problem with the “repugnant conclusion” is what you claim, it will instead create randomized (for high uniqueness) minds with high capacity for utility, still annihilating humans.
Eliezer, also consider this: suppose I am a mad scientist trying to decide between making one copy of Eliezer and torturing it for 50 years, or on the other hand, making 1000 copies of Eliezer and torturing them all for 50 years.
The second possibility is much, much worse for you personally: in the first, you would subjectively have a 50% chance of being tortured, but in the second, a 99.9% subjective chance. So creating copies of bad experiences multiplies the badness, even without diversity. But this implies that copies of good experiences should also multiply: if I make a million copies of Eliezer having billions of units of utility, this would be much better than making only one, which would give you only a 50% chance of experiencing it.
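The percentages above follow from counting the original and each copy equally: with one original plus n tortured copies, the subjective chance of waking up as a tortured instance is n / (n + 1). (A minimal sketch of that counting assumption, not anything from the original thread.)

```python
from fractions import Fraction

def p_tortured(n_copies: int) -> Fraction:
    """Subjective chance of being a tortured copy, given one untouched
    original plus n_copies tortured copies, all weighted equally."""
    return Fraction(n_copies, n_copies + 1)

print(p_tortured(1))     # 1/2 -- the 50% in the first scenario
print(p_tortured(1000))  # 1000/1001, about 99.9% -- the second scenario
```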
You shouldn’t waste your time figuring out how to act in an expanding multiverse, as opposed to a simple, single, unitary world. The problem of how to act and live even in the latter case is tough enough. Conditioning your choices on the former perspective is trying to think like a god, when you’re in fact an animal.
I don’t like that reasoning. If you create an interesting person here, in our Hubble volume, their interestingness can reflect back on you. The other “copies” 10^(10^50) or so light years away will never have anything to do with you.
I noticed you changed units between the average distance of another you and the average distance of another identical universe. That seems rather pointless. A lightyear is only 16 orders of magnitude larger than a meter, and is lost in rounding compared to 10^115 orders of magnitude.
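The commenter’s point checks out: a light year is about 9.46 × 10^15 metres, so switching units only shifts the outer exponent by roughly 16, which vanishes next to an exponent like 10^115. (A quick sanity-check sketch; the light-year constant is standard, everything else is illustrative.)

```python
import math

LIGHTYEAR_M = 9.461e15  # metres per light year (approximate)

# Converting 10^(10^115) light years to metres gives 10^(10^115 + 16) metres;
# the shift of ~16 is lost in rounding against 10^115.
shift = math.log10(LIGHTYEAR_M)
print(round(shift))  # about 16 orders of magnitude
```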
You mentioned a portion of people. I don’t think there’s any reason to believe that the universe is this big but still finite, and if it is infinite, there’s no way to measure a fraction of people. There are infinitely many people whose lives are worth living and infinitely many whose lives are not. If you add it all together, the result depends on what order you add them in. Dividing is similarly nonsensical. You can’t change the proportion of people who are happy, because there is no such proportion.
“But on the whole, it looks to me like if we decide to implement a policy of routinely killing off citizens to replace them with happier babies … We’re just setting up the universe to contain the same babies, born predominantly into regions where they lead short lifespans not containing much happiness.”
That would mean more happiness. Also, I don’t see the problem with short lifespans. My instinct is that you think consciousness ending is bad, but that happens every time you go to sleep, and I don’t see you complaining about that.
Are you attracted to quantum suicide to win the lottery then? (Put to one side for a moment the consequences for your friends, etc who would have to deal with your passing away)
How does quantum suicide increase the proportion of one’s future selves who are happy?
You could, for example, play the lottery and correlate your survival with winning…
As long as you don’t count the future selves who die in the other worlds in the denominator. It’s not clear to me that they shouldn’t count. Using that logic, though, you could just commit painless suicide anytime you’re slightly unhappy, and your only surviving selves would never be unhappy!
And what’s wrong with this idea?
Evolution gave us a strong instinct to not die, but evolution also gave us the false impression that our progression through time resembled a line rather than a tree, and that there’s only one planet earth. Knowing now that you are (the algorithm of) a tree, perhaps it is worth rethinking the dying=bad idea? Death, if used selectively, could mean a very happy (if less dense) tree.
If we live in a big world, this logic becomes very compelling. Who cares about killing 99% of yourself if you’re infinite anyway, and the upside is that you end up with an infinite amount of happiness rather than an infinite sad/happy mixture?
I can’t tell if you’re playing devil’s advocate or not… Surely you’ve heard of the categorical imperative and can predict the radical decrease in the happiness density of the universe if that were the reasoning employed by all sapient beings.
To be precise, the argument would run that the universe will end up being dominated by beings that care more about their measure, and so there is a categorical imperative for happier beings to care more about their measure.
I’m not following. If all sapient beings applied this reasoning, only the most happy would decide not to die, and the happiness density would increase.
Wrote this and hit reload, but Kaj beat me to it.
I’m thinking most intelligences would kill themselves a lot in this scenario, leading to a very empty universe for any particular one of them. The relevant density is “super-happy entities per cubic parsec”, not “super-happy entities per total entities”.
Consider, right now, if all members of some religion killed themselves unless their miracles started coming true. From the perspective of almost all the measure of non-members of the religion, it would look like a simple suicide cult.
Or imagine the LHC really could create a black hole and destroy the earth. Everyone votes on a low-probability positive event, and we trigger the LHC if it doesn’t happen. From the perspective of the measure of almost all the aliens in the universe (if they exist), our sun has a black hole orbiting at 93 million miles.
If this sort of process were constantly happening among all intelligent species on all planets, we’d be in an empty universe (well, one with a lot of little black holes anyway). The probability of running into other intelligent life “post anthropic principle” would be their practically non-existent measure times our practically non-existent measure.
Something I’ve actually wondered about is whether the first replicating molecule with the evolutionary potential to generate intelligent life was radically unlikely (requiring a feat of quantum chicanery), and that’s why the universe appears empty to us. I don’t know of anyone who published this first, but I assume someone beat me to it because it often seems to me that all thinkable thoughts have generally been generated by someone else decades or centuries ago :-P
Huh.
That’s the most interesting explanation for the Fermi paradox in a while. (Not exactly plausible, mind you, but an interesting idea nevertheless.)
I’ve read something like this here.
Sure, if everyone realized what a great idea quantum suicide was. But I think you can rest assured that that’s not going to happen. Assuming, that is, that it is actually a good idea…
Also I don’t govern my action with the categorical imperative. It works in some cases, but in general it is awful.
You have to assume that everyone will join in on this scheme, if you’re trying to argue in favor of it. If only a limited subset of people kill themselves when they’re unhappy, then that leaves a huge number of people mourning the (to them) meaningless death of their loved ones. You’d have to not only kill yourself, but also make sure that anyone who was hurt by your death died as well.
I was assuming that you were unconcerned with the sadness/mourning of those around you, or were prepared to make that tradeoff for some reason. (For example, egoism, or perhaps lack of friends/relations, or extreme need for the money)
Huh. Copenhagen interpretation of quantum mechanics isn’t pretty, but I’m not ready to die for it.
Do you have any pointer on why you believe so firmly in an infinite universe? Reading books on physics (from mainstream authors like Stephen Hawking or Christian Magnan, or from less conventional books like Julian Barbour’s The End of Time), I got the impression that the current consensus is that the universe is expanding but currently finite. There may be no limit to its size if, as it now seems, the expansion rate keeps growing, but right now it has a finite size.
And from a purely theoretical point of view, infinity doesn’t seem very coherent to me. Infinity doesn’t, well, exist; it’s only the limit of a finite process. Saying “the universe is infinite” doesn’t mean much. Your reasoning seems like it is, to quote your own words, “assuming an infinity that has not been obtained as the limit of a finite calculation”, which is an illegal operation in maths.
Try this or this or this. Popular physics books are really bad about these things.
… But there’s no sense crying over every mistake, you just keep on trying till you run out of negentropy.
May I suggest ‘But there’s no sense crying over every inaccuracy / you just keep on trying till you use up your negentropy’? Rhymes and balances the syllable count.
Brb, writing rationalist hymn.
:P
I’m worried this is just an elaborate justification to not have as many children as possible. But I’m not convinced that I’m obligated to help all other ‘beings’, of any class or category, instead of merely not harming (most of) them.
I don’t think “infinite space” is enough to have infinite copies of me. You’d also need infinite matter, no?
[putting aside “many worlds” for a moment]