Multiverse-Wide Preference Utilitarianism
Summary
Some preference utilitarians care about satisfaction of preferences even when the organism with the preference doesn’t know that it has been satisfied. These preference utilitarians should care to some degree about the preferences that people in other branches of our multiverse have regarding our own world, as well as the preferences of aliens regarding our world. In general, this suggests that we should give relatively more weight to tastes and values that we expect to be more universal among civilizations across the multiverse. This consideration is strongest in the case of aesthetic preferences about inanimate objects and is weaker for preferences about organisms that themselves have experiences.
Introduction
Classical utilitarianism aims to maximize the balance of happiness over suffering for all organisms. Preference utilitarianism focuses on the fulfillment vs. frustration of preferences, rather than on hedonic experiences alone. So, for example, if someone has a preference for his house to go to his granddaughter after his death, then it would frustrate his preference if it instead went to his grandson, even though he wouldn’t be around to experience negative emotions due to his preference being thwarted.
Non-hedonic preferences
In practice, most of people’s preferences concern their own hedonic wellbeing. Some also concern the wellbeing of their children and friends, although often these preferences are manifested through direct happiness or suffering in oneself (e.g., being on the edge of your seat with anxiety when your 14-year-old daughter hasn’t come home by midnight).
However, some preferences extend beyond one’s own hedonic experience. This is true of preferences about how the world will be after one dies, or about whether money donated to a charity actually gets used well even if the donor would never find out either way. It’s true of many moral convictions. For instance, I want to actually reduce expected suffering rather than hook up to a machine that makes me think I reduced expected suffering and then blisses me out for the rest of my life. It’s also true of some aesthetic preferences, such as the view that it would be good for art, music, and knowledge to exist even if no one were around to experience them.
Certainly these non-hedonic preferences have hedonic effects. If I learned that I was going to be hooked up to a machine that would erase my moral convictions and bliss me out for the rest of my life, I would feel upset in the short run. However, almost certainly this aversive feeling would be outweighed by my pleasure and lack of suffering in the long run. So my preference conflicts with egoistic hedonism in this case. (My preference not to be blissed out is consistent with hedonistic utilitarianism, rather than hedonistic egoism, but hedonistic utilitarianism is a kind of moral system that exists outside the realm of hedonic preferences of an individual organism.)
Because preference utilitarians believe that preference violations can be harmful even if they aren’t accompanied by negative hedonic experience, there are some cases in which doing something that other people disapprove of is bad even if they never find out. For example, Muslims strongly oppose defacing the Quran. This means that, barring countervailing factors, it would be prima facie bad to deface a Quran in the privacy of your own home even if no one else knew about it.
Tyranny of the majority?
People sometimes object to utilitarianism on the grounds that it might allow for tyranny of the majority. This seems especially possible for preference utilitarianism, when considering preferences regarding the external world that don’t directly affect a person’s hedonic experience. For example, one might fear that if large numbers of people have a preference against gay sex, then even if these people are not emotionally affected by what goes on in the privacy of others’ bedrooms, their preference against those private acts might still matter appreciably.
As a preliminary comment, I should point out that preference utilitarianism typically optimizes idealized preferences rather than actual preferences. What’s important is not what you think you want but what you would actually want if you were better informed, had greater philosophical reflectiveness, etc. While there are strong ostensible preferences against gay sex in the world, it’s less clear that there are strong idealized preferences against it. It’s plausible that many opponents of gay sex would come to see that (safe) gay sex is actually a positive expression of pleasure and love rather than something vile.
But let’s ignore this for the moment and suppose that most people really did have idealized preferences against gay sex. In fact, let’s suppose the world consists of N+2 people, two of whom are gay and would prefer to have sex with each other, and the other N of whom have idealized preferences opposing gay sex. If N is very large, do we have tyranny of the majority, according to which it’s bad for the two gay people to have sex?
This is a complicated question that involves more subtlety than it may seem. Even if the direct preference summation came out against gay sex, it might still be better to allow it for other reasons. For instance, maybe at a meta level, a more libertarian stance on social issues tends to produce better outcomes in the long run. Maybe allowing gay sex increases people’s tolerance, leading to a more positive society in the future. And so on. But for now let’s consider just the direct preference summation: Does the balance of opposition to gay sex exceed the welfare of the gay individuals themselves?
The answer isn’t clear; it depends on how you weigh the different preferences. Intuitively it seems obvious that for large enough N, N people opposed to gay sex can trump two people who prefer it. On the other hand, that’s less clear if we look at the matter from the perspective of scaled utility functions.
Suppose unrealistically that the only thing the N anti-gay people care about is preventing gay sex. In particular, they’re expected-gay-sex minimizers, who consider each act of gay sex as bad as another and aim to minimize the total amount that happens. The best possible world (normalized utility = 1) is one where no gay sex happens. The worst possible world (normalized utility = 0) is one where all N+2 people have gay sex. The world where just the two gay people have gay sex is almost as good as the best possible world. In particular, its normalized utility is N/(N+2). Thus, if gay sex happens, each anti-gay person only loses 2/(N+2) utility. Aggregated over all N anti-gay people, this is a loss of 2N/(N+2).
Also unrealistically, suppose that the only thing the two gay people care about is having gay sex. Their normalized utility for having sex is 1 and for not having it is 0. Aggregated over the two of them, the total gain from having sex is 2.
Because 2 > 2N/(N+2), it’s overall better in direct preference summation for the gay sex to happen as long as we weight each person’s normalized utility equally. This is true regardless of N.
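To make the arithmetic concrete, here is a minimal sketch in Python of the normalized-utility summation, using the hypothetical numbers of this example (the function name and test values are mine, chosen for illustration):

```python
# Toy model of the direct preference summation above. All numbers are
# the stipulated ones from the thought experiment, not empirical claims.

def net_effect_of_allowing_gay_sex(n_opponents):
    """Change in summed normalized utility if the two gay people have sex."""
    gain_for_couple = 2 * 1.0                     # each gains their full range (1)
    loss_per_opponent = 2 / (n_opponents + 2)     # utility drops from 1 to N/(N+2)
    total_loss = n_opponents * loss_per_opponent  # = 2N/(N+2), always below 2
    return gain_for_couple - total_loss

for n in (10, 1_000, 1_000_000):
    print(n, net_effect_of_allowing_gay_sex(n))   # positive for every N
```

Since 2N/(N+2) approaches but never reaches 2, the net effect of allowing the sex stays positive no matter how large N grows.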
That said, if the anti-gay people had diminishing marginal disutility for additional acts of gay sex (so that the first few acts cost each opponent a large share of his utility range), this conclusion could flip.
It feels intuitively suspicious to just sum normalized utility. As an example, consider a Beethoven utility monster—a person whose only goal in life is to hear Beethoven’s Ninth Symphony. This person has no other desires, and if he doesn’t hear Beethoven’s Ninth, it’s as good as being dead. Meanwhile, other people also want to hear Beethoven’s Ninth, but their desire for it is just a tiny fraction of what they care about. In particular, they value not dying and being able to live the rest of their lives 99,999 times as much as hearing Beethoven’s Ninth.
Each normal person’s normalized utility without hearing the symphony is 0.99999. Hearing the symphony would make it 1.00000.
The Beethoven utility monster would be at 0 without hearing the symphony and 1 hearing it.
Thus, if we directly sum normalized utilities, it’s better for the Beethoven utility monster to hear the symphony than for 99,999 regular people to do the same.
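The sum can be checked directly; this is a minimal sketch using the hypothetical numbers above:

```python
# Stipulated numbers from the thought experiment: one audience slot, given
# either to the utility monster or to 99,999 ordinary people.
monster_gain = 1.0 - 0.0                        # from 0 (no symphony) to 1 (symphony)
normal_gain_each = 1.00000 - 0.99999            # each ordinary person gains 0.00001
total_normal_gain = 99_999 * normal_gain_each   # = 0.99999, just under 1.0

print(monster_gain > total_normal_gain)         # True: the monster wins the direct sum
```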
This seems suspicious. Maybe it’s because our intuitions are not well adapted to thinking about organisms with really different utility functions from ours, and if we interacted with them more—seeing them struggle endlessly, risking life and limb for the symphony they so desire—we would begin to feel differently. Another problem is that, under normalization, an organism’s individual preferences count for less as soon as the range of its possible experience increases. If the Beethoven monster were transformed to want to hear Beethoven’s Ninth and Eighth symphonies with equal strength, the value of its hearing the Ninth alone would suddenly be cut in half. Again, maybe this is plausible, but it’s not clear. I think some people have the intuition that an organism with a broader range of possible joys counts for more than one with a narrower range, though I’m not sure I agree with this.
So the question of tyranny remains indeterminate. It depends on how you weigh different preferences. However, it remains the case that it may be instrumentally valuable to preserve norms of individual autonomy in order to produce better societies in the long run.
Preferences across worlds: A story of art maximizers
Consider the following (highly unrealistic) story. It’s the year 2100. Three artist couples are traveling on the first manned voyage to Mars. These couples value art for art’s sake, and in fact, their moral views consider art to be worthwhile even if no one experiences it. Their utility functions are linear in the amount of art that exists, and so they wish to maximize the expected amount of art in the galaxy—converting planets and asteroids into van Gogh, Shakespeare, and Chopin.
However, they don’t quite agree on which art is best. One couple wants to maximize paintings, feeling that a galaxy filled with paintings would be worth +3. A galaxy filled with sculptures would be +2. And a galaxy filled with poetry or music would be worthless: 0. The second couple values poetry at +3, sculptures at +2, and the other art at 0. The third values music at +3, sculptures at +2, and everything else at 0. Despite their divergent views, they manage to get along in the joint Martian voyage.
However, a few weeks into the trip, a terrestrial accident vaporizes Earth, leaving no one behind. The only humans are now the artists heading for Mars, where they land several months later.
The original plan had been for Earth to send more supplies following this crew, but now that Earth is gone, the colonists have only the minimal resources that the Martian base currently has in stock. They plan to grow more food in their greenhouse, but this will take many months, and the artists will all starve in the meantime if all of them remain. They realize that it would be best if two of the couples sacrificed themselves so that the third would have enough supplies to continue to grow crops and eventually repopulate the planet.
Rather than fighting for control of the Martian base, which could be costly and kill everyone, the three couples realize that everyone would be better off in expectation if they selected a winner by lottery. In particular, they use a quantum random number generator to give each couple a 1/3 probability of surviving. The lottery takes place, and the winner is the first couple, which values paintings most highly. The other two couples wish the winning couple the best of luck and then head to the euthanasia pods.
The pro-paintings couple makes it through the period of low food and manages to establish a successful farming operation. They then begin having children to populate the planet. After many generations, Mars is home to a thriving miniature city. All the inhabitants value paintings at +3, sculptures at +2, and everything else at 0, due to the influence of the civilization’s founders.
By the year 2700, the city’s technology is sufficient to deploy von Neumann probes throughout the galaxy, converting planets into works of art. The city council convenes a meeting to decide exactly what kind of art should be deployed. Because everyone in the city prefers paintings, the council assumes the case will be open and shut. But as a formality, they invite their local philosopher, Dr. Muchos Mundos, to testify.
Council president: Dr. Mundos, the council has proposed to deploy von Neumann probes that will fill the galaxy with paintings. Do you agree with this decision?
Dr. Mundos: As I understand it, the council wishes to act in the optimal preference-utilitarian fashion on this question, right?
Council president: Yes, of course. The greatest good for the greatest number. Given that everyone who has any preferences about art most prefers a galaxy of paintings, we feel it’s clear that paintings are what we should deploy. It’s true that when this colony was founded, there were two other couples who would have wanted poetry and music, but their former preferences are far outweighed by our vast population that now wants paintings.
Dr. Mundos: I see. Are you familiar with the many-worlds interpretation (MWI) of quantum mechanics?
Council president: I’m a politician and not a physicist, but maybe you can give me the run-down?
Dr. Mundos: According to MWI, when quantum randomness occurs, it’s not the case that just a single outcome is selected. Rather, all outcomes happen, and our experiences of the world split into different branches.
Council president: Okay. What’s the relevance to art policy?
Dr. Mundos: Well, a quantum lottery was used to decide which colonizing couple would populate Mars. The painting lovers won in this branch of the multiverse, but the poetry lovers won in another branch with equal measure, and the music lovers won in a third branch, also with equal measure. Presumably the couples in those branches also populated Mars with a city about as populous as our own. And if they care about art for art’s sake, regardless of whether they know about it or where it exists, then the populations of those cities in other Everett branches also care about what art we deploy.
Council president: Oh dear, you’re right. Our city contains M people, and suppose their cities have about the same populations. If we deploy paintings, our M citizens each get +3 of utility, and those in the other worlds get nothing. The aggregate is 3M. But if we deploy sculptures, which everyone values at +2, the total utility is 3 * 2M = 6M. This is much better than 3M for paintings.
Dr. Mundos: Yes, exactly. Of course, we might have some uncertainty over whether the populations in the other branches survived. But even if the probability that they survived were only, say, 1/3, then the expected utility of sculptures would still be 2M for us plus (1/3)(2M + 2M) = 4M/3 for them. The sum is more than 3M, so it would still be better to deploy sculptures.
After further deliberation, the council agreed with this argument and deployed sculptures. The preference satisfaction of the poetry-loving and music-loving cities was improved.
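For readers who want to check the council’s arithmetic, here is a minimal sketch with the story’s stipulated values (M stands for an arbitrary city population):

```python
M = 1.0  # population of each surviving city, in arbitrary units

# If all three branch-cities exist:
u_paintings = 3 * M          # only our city (which values paintings at +3) benefits
u_sculptures = 3 * (2 * M)   # all three cities value sculptures at +2: 6M total

# If each other city survived only with probability 1/3:
eu_sculptures = 2 * M + (1 / 3) * (2 * M + 2 * M)  # = 2M + 4M/3 = 10M/3

print(u_paintings, u_sculptures, eu_sculptures)
# 3.0, 6.0, ~3.33: sculptures beat paintings in both cases
```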
Multiversal distribution of preferences
According to Max Tegmark’s “Parallel Universes,” there’s probably an exact copy of you reading this article within about 10^(10^28) meters of here, and in practice probably much closer. As Tegmark explains, this claim assumes only basic physics that most cosmologists take for granted. Even nearer than this distance are many people very similar to you but with minor variations—e.g., with brown eyes instead of blue, or who prefer virtue ethics over deontology.
In fact, all possible people exist somewhere in the multiverse, if only due to random fluctuations of the type that produce Boltzmann brains. Nick Bostrom calls these “freak observers.” Just as there are art maximizers, there are also art minimizers who find art disgusting and want to eliminate as much of it as possible. For them, the thought of art triggers their brains’ disgust centers instead of beauty centers.
However, the distribution of organisms across the multiverse is not uniform. For instance, we should expect suffering reducers to be much more common than suffering increasers, because organisms evolve to dislike suffering in themselves, their kin, and their reciprocal trading partners. Societies—whether human or alien—should often develop norms against cruelty for collective benefit.
Human values give us some hints about what values across the multiverse look like, because human values are a kind of maximum-likelihood estimate of the mode of the multiversal distribution. Of course, we should expect some variation around the mode. Even among humans, some cultural norms are parochial while others are universal. Probably values like not murdering, not causing unnecessary suffering, not stealing, etc. are more common among aliens than, say, the value of music or dance, which might be human-specific spandrels. Still, aliens may have their own spandrels that they call “art,” and they might value those things.
Like human values, alien values might be mostly self-directed toward their own wellbeing, especially in their earlier Darwinian phases. Unless we meet the aliens face-to-face, we can’t improve their welfare directly. However, the aliens may also have some outward-directed aesthetic and moral values that apply across space and time, like the value of art as seen by the art-maximizing cities on Mars in the previous section. If so, we can affect the satisfaction of these preferences by our actions, and presumably they should be included in preference-utilitarian calculations.
As an example, suppose there were 10 civilizations. All 10 valued reducing suffering and social equality. 5 of the 10 also valued generating knowledge. Only 1 of the 10 valued creating paintings and poetry. Suppose our civilization values all of those things. Perhaps previously we were going to spend money on creating more poetry, because our citizens value that highly. However, upon considering that poetry would not satisfy the preferences of the other civilizations, we might switch more toward knowledge and especially toward suffering reduction and equality promotion.
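A toy calculation along these lines, using the hypothetical counts from this example, might weight each project by how many of the 10 civilizations share the relevant preference (the names and weights here are illustrative only):

```python
# How many of the 10 hypothetical civilizations hold each outward-directed value.
civs_valuing = {
    "suffering reduction": 10,
    "equality": 10,
    "knowledge": 5,
    "poetry and paintings": 1,
}

# Suppose our civilization values all four equally on its own.
local_weight = 1.0

# Multiverse-weighted priority: our weight scaled by how widely shared each value is.
priorities = {value: local_weight * count for value, count in civs_valuing.items()}
print(sorted(priorities, key=priorities.get, reverse=True))
# ['suffering reduction', 'equality', 'knowledge', 'poetry and paintings']
```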
In general, considering the distribution of outward-directed preferences across the multiverse should lead us to favor more those preferences of ours that are more evolutionarily robust, i.e., that we predict more civilizations to have settled upon. One corollary is that we should care less about values that we have due to particular, idiosyncratic historical contingencies, such as who happened to win some very closely contested war, or what species were killed by a random asteroid strike. Values based on more inevitable historical trends should matter relatively more strongly.
Tyranny of the aliens?
Suppose, conservatively, that for every one human civilization, there are 1000 alien civilizations that have some outward-directed preferences (e.g., for more suffering reduction, justice, knowledge, etc.). Even if each alien civilization cares only a little bit about what we do, collectively do their preferences outweigh our preferences about our own destiny? Would we find ourselves beholden to the tyranny of the alien majority about our behavior?
This question runs exactly parallel to the standard concern about tyranny of the majority for individuals within a society, so the same sorts of arguments will apply on each side. Just as in that case, it’s possible aliens would place value on the ability of individual civilizations to make their own choices about how they’re constituted without too much outside interference. Of course, this is just speculation.
Even if tyranny of the alien majority were the result, we might choose to accept that conclusion. After all, it seems to yield more total preference satisfaction, which is what the preference utilitarians were aiming for.
Direct welfare may often dominate
In the preceding examples, I often focused on aesthetic values like art and knowledge for a specific reason: These are cases of preferences for something to exist or not where that thing does not itself have preferences. Art does not prefer for itself to keep existing or stop existing.
However, many human preferences have implications for the preferences of others. For instance, a preference by humans for more wilderness may mean vast numbers of additional wild animals, many of whom strongly (implicitly) prefer not to have endured the short lives and painful deaths inherent to the bodies in which they found themselves born. A relatively weak aesthetic preference for nature held by a relatively small number of people must be weighed against strong hedonic preferences by large numbers of animals not to have existed. In this case, the preferences of the animals clearly dominate. The same is true for preferences about creating space colonies and the like: The preferences of the people, animals, and other agents in those colonies will tend to far outweigh the preferences of their creators.
Considering multiverse-wide aesthetic and moral preferences is thus cleanest in the case of preferences about inanimate things. Aliens’ preferences about actions that affect the welfare of organisms in our civilization still matter, but relatively less than the contribution of their preferences about inanimate things.
Acknowledgments
This piece was inspired by Carl Shulman’s “Rawls’ original position, potential people, and Pascal’s Mugging,” as well as a conversation with Paul Christiano.
Comments

A question about preference utilitarianism in general: what question is it trying to answer? It’s common to divide the question “What should I do?” into two parts:
What should I value/prefer?
How can I best maximize my values or satisfy my preferences?
So for example ethical theories like hedonic egoism or hedonic utilitarianism attempt to answer 1, while decision theories like CDT and UDT attempt to answer 2. Is preference utilitarianism trying to answer question 1 or question 2? Or something else outside of this framework?
Preference utilitarianism is usually contrasted with hedonic utilitarianism, which suggests it might be trying to answer question 1. But I can’t make sense of that, because if I’m supposed to prefer that everyone else satisfy their preferences, and presumably by symmetry they are also supposed to prefer that everyone else satisfy their preferences, that seems to cause an infinite recursion and nobody ends up having any concrete preferences at all.
So is it trying to answer question 2? Do preference utilitarians argue that trying to maximize an average of everyone’s utility is the best way to satisfy one’s own selfish or idiosyncratic preferences? (I think Gary Drescher has given an argument like this in his book, but what’s the mainstream view among preference utilitarians?) I note that if this is the case, then preference utilitarianism is perfectly compatible with hedonic egoism or hedonic utilitarianism, but I don’t seem to recall this point being made in any articles I’ve read about preference utilitarianism.
Am I the only one who doesn’t find this suspicious at all? After all, the Beethoven utility monster would gain 100,000 times as much fulfillment from the symphony as the normal people; it makes intuitive sense to me that it would be unfair to deny the BUM the opportunity to hear Beethoven’s Ninth just so that, say, 100 normal people could hear it. After all, those people wouldn’t be that much worse off not having heard the symphony, which the BUM would rather die than not hear.
Obviously this intuition breaks down in a lot of similar thought experiments (should we let the BUM run over pedestrians in the road on its way to Carnegie Hall? etc.) but if the goal is to show that summing normalized utility can give undesirable or unintuitive results, that particular thought experiment isn’t really ideal.
An agent’s revealed preferences are distinct from the agent’s feelings of desire. The Beethoven monster can be seen to risk its own life to hear Beethoven, but that doesn’t mean it has a strong feeling of desire. The data could just as well be explained by a lack of strong desire to keep living. Or the agent could lack any emotions we would call “desire” or “desperation”. In the latter two cases, the argument doesn’t seem clear to me.
Me either. Basically, the other people in the example do not want to hear Beethoven all that much; they have other priorities.
Not if their utility of being run over is close to 0 …
The BUM would have a pretty high bar of evidence to meet to prove that running over one pedestrian was really necessary to reach the only B9 performance it could ever get to. By which I mean, it won’t be able to establish that.
So, no.
So when we get the computational power, we should do lots of simulations of evolution to see what kind of preferences evolution tends to generate. And if other people are doing this, it increases the chances of us currently being in a computer simulation.
This gave me an idea to make things even more complicated: Let’s assume a scientist manages to create a simulated civilization the same size as his own. It turns out that, to keep the civilization running, he will have to sacrifice a lot. All members of the simulated civilization prefer to continue existing, while the “mother civilization” prefers to sacrifice as little as possible.
How much should be sacrificed to keep the simulation running as long as possible? Should the simulated civilization create simulations itself to increase the preference of continued existence?
Bonus questions: Does a simulated civilization get to prefer anything? What are the moral implications of creating new beings that may hold preferences (including having children in real life)? If the scientist can manipulate the preferences of the simulated civilization, should he? And to what end? What about education and other preference-changing techniques in real life?
I have to say it’s fun to find the most extreme scenario to doom our civilization by critical mass of preference. Can you find a more extreme or more realistic one than my civilization simulating supercomputer or the aliens mentioned in the original post?
I haven’t read the entire post, but a few problems would emerge besides your counterintuitive simulation point.
1) Evolution is more likely to create Omohundro’s basic AI drives, and it doesn’t seem that it would be ethically desirable to maximize basic AI drives for a higher total sum of preference utilons in the universe. So trying to MaxipEvo (analogous to the Maxipok anti-x-risk principle, where you maximize the probability that the more “evolvable” values take over) will decrease the value of rarity, uniqueness (per-universe uniqueness), difference, etc.
2) Whether a preference counts as common depends a lot on whether you individuate it in a fine-grained or a coarse-grained way. Maybe most civilizations care about art. But nearly none cares about the sort of pointy architecture that emerged in the Islamic world. If you classify it as art, preferences are being satisfied on Far Venus. If you call it Islamic pointy things, no one on Far Venus cares.
Nature seems to find some attractors in design space, which become aesthetically pleasing. Symmetry has created on Earth a limited number of body types: bilateral, trilateral, quadrilateral, pentagonal, hexagonal, and radial (maybe a few more). Sexual selection, on the other hand, created things as different as the peacock’s tail, the birds-of-paradise dance, moonwalking, etc.
So it depends a lot on which categories you carve up to make your decisions, and, by extension, which categories you expect them (aliens, Far Venusians) to be sorting stuff into.
Thanks. :)
1) I’m not suggesting MaxipEvo. I’m suggesting maximizing whatever preferences are out there, including paperclip preferences if those are held by some civilizations. It’s just that many preferences may have their roots in biological evolution, unless goal preservation is really hard.
Humans have weak forms of the basic AI drives, although many of the things we value about ourselves are spandrels that don’t help with winning power in the modern world. I don’t see why it should differ substantially for aliens. If you mean that a nontrivial fraction of agents that colonize their galaxies are paperclippers with values divorced from those of their evolved progenitors, then we would just care about the preferences of those paperclippers. Preferences often mutate in new directions. The preferences of the little mammals from which you came are not the same as your preferences. Your preferences now are not the same as they were when you were 2 months old. It’s not clear why we should regard paperclipping as less of a legitimate preference than some other quirk of evolution on an alien planet.
2) Yes, the content needs to be fine-grained. For instance, in the story, we saw that some people liked paintings, others liked music, others liked poetry, etc. Within those categories you could have further distinctions. That said, if we look at humans, a lot of people value most kinds of art, including those pointy Islamic buildings. I suspect many humans would even value alien art. Think about it: If people got a glimpse of Venusian spandrels of visual construction, they would be awed and think it was really cool.
In any event, I agree that art may be more parochial than things like not murdering, not causing needless suffering, and so on. Still, beings may have a preference that aliens “enjoy whatever spandrels they have,” i.e., the preference for art might be broader than it seems.
Though I enjoyed your commentary, I think I have failed on two counts. First, I was not clear enough about MaxipEvo.
By MaxipEvo I mean the values that are likely to arise in any evolved system, regardless of its peculiarities like oxygen density, planet size, and radiation intake. Things like what Nietzsche would call “will to power,” what economists would call “homo economicus,” and what naive biologists would call “selfish individuals.”
These are universal, as is symmetry. Anything that evolves would benefit from symmetry and from wanting to capture more resources.
Now let’s do the math here: If the entities outside our Hubble volume outnumber the entities inside it by nearly infinity to one, or infinity to one, then even a small preference they have about our world should be more important than strong preferences of ours. So if anything is in the intersection of “commonly evolvable in any complex system with valuable beings” and “whose intentionality is about something in our tiny corner of the Cosmos,” then it should be a major priority for us. This would lead us to praise will to power, selfishness, and symmetry.
I consider this to be a reductio ad absurdum, in that if those are the values we ought to preserve according to a line of reasoning, the line of reasoning is wrong.
The main paper to keep in mind here is “The Future of Human Evolution,” for me the hallmark of Bostrom’s brilliance. One of the points he makes is that display, and flamboyant display in particular (of the kind Robin Hanson frequently makes fun of), are much of what matters most to us: dance, ritual, culture, sui generis quirks, personality, uniqueness, etc.
If any ethical argument makes a case against these, and in favor of the things that evolution carves into any self-replicating system with a brain, that argument seems flawed in my view.
This is what I mentioned in the “Tyranny of the aliens?” section. However, it’s not clear that human-style values are that rare. We should expect ourselves to be in a typical civilization, and certain “ethical” principles like not killing others, not causing needless suffering, reciprocal altruism, etc. should tend to emerge repeatedly. The fact that it happened on Earth seems to suggest the odds are not 1/infinity of it happening in general.
Very particular spandrels like dance and personality quirks are more rare, yes. But regarding the conclusion that these matter less than we thought, one man’s modus tollens is another’s modus ponens. After all, wouldn’t we prefer it if aliens valued what we cared about rather than being purely selfish to their own idiosyncrasies?
In any case, maybe it’s common for civilizations to value letting other civilizations do what they value. The situation is not really different from that of an individual within society from a utilitarian standpoint. We let people do their own weird artwork or creative endeavors even if nobody else cares.
The second count on which I was not clear is how much my points about fine-grainedness are related to Yudkowsky’s reflections about “reference class tennis.” There is arbitrariness in defining classes. And if you carve classes differently to find out which classes people care about, you find yourself with arbitrary options. When I care about the art in Far Venus, I’m not sure whether I care about “any art,” “any art resembling ours,” “any art not resembling hip hop and gore,” or “only things that are isomorphic to symphonies.”
Likewise, I don’t know this about them, the Venusians, and this makes a big difference to whether I should create more generic forms of art here or more fine-grained ones.
I wouldn’t use reference classes at all. I’d just ask, “How many other civilizations care about this particular proposed piece of artwork?” I personally don’t care intrinsically about art, but if you asked an art enthusiast, I bet she would say “I care about Venusian masterpiece X” for lots of specific values of X that they might create.
The idea of caring specifically about human quirks rather than alien quirks seems akin to ethnocentrism, though I can see the concern about becoming too broad such that what you care about becomes diluted. I expect people’s horizons will become more cosmopolitan on this question over time, just as has been the historical trend. The march of multiculturalism may one day go intergalactic.
Think about multiverse utilitarianism as a normative force. If it is to be taken seriously, its main consequence will be making things more normal. More evolvable. Less peculiar and unique.
I don’t mind human quirks in particular that much (including art) when I’m wearing the “thinking about multiverses” hat. My point is that an ethical multiverse view should be one in which Burning Man, Buddhist funerals, Gödel’s theorems, and Axl Rose’s temper are valued in their difference. What matters about those artifacts of cultural craftsmanship is not that which a bumblebee or an echidna might have created (will to power, hunger, eagerness to reproduce, symmetry). What matters involves the difference itself. One of the things that makes those things awesome is their divergence.
If Far Venus has equivalent diversity, I’m happy for them. I don’t want to value what they share with us (being constrained by some physics, by logic, by evolution, and by the sine qua non conditions for intelligent life, whichever they are).
Ah, I see. The value of diversity is plausibly convergent, because most organisms will need boredom and novelty-seeking. If many civilizations value diversity, they would each be happy to let each other do their own diverse artwork. So this “force” might not lead to homogenization.
I had to upvote this essay just for the bizarreness, especially with the whole Mars example.
Downvoted for the same reasons. Also, as a matter of course, I downvote any argument which critically relies on untestables, like MWI and tegmarkery.
I think there’s a difference between A) making an argument for something that relies on untestables, and B) assuming some untestables and seeing what they would imply if true. I read this post more as the latter, though you could read it as the former as well.
Also, while MWI and the mathematical multiverse hypothesis seem untestable, I thought the lower levels of Tegmark’s hierarchy were testable? I remember Tegmark citing various kinds of empirical evidence in favor of his levels I and II, and also seem to recall hearing of later experiments that would have been evidence against level II (but I know very little physics, so I might have misunderstood those experiments).
The OP does not say “let’s assume that...”, but rather references the multiverse offhandedly, like it’s a fact, without even specifying which of the many multiverse models it uses.
As for Tegmark I and II, it is not clear whether the chaotic inflation model will be testable some day. Certainly our current understanding of quantum gravity is woefully insufficient to even make an educated guess as to whether and when and how it can happen. By then I expect that even the multi-level picture itself will be considered naive and outdated.
You still have tyranny of the majority in the gay example if the gay couple is even slightly less than maximally horny, or would slightly prefer to try sex aboard the International Space Station instead.
Furthermore, you have an inconsistency if you have “a box with 2 people in it who happen to agree.” If you treat it as two people, they have an influence of 2; if you treat it as one object, it has an influence of 1. The brain is a spatially extended thing, you know, with two hemispheres which can be any degree of connected.
Good article, thanks! I especially appreciated the “story”. Just some feedback, I would have benefited from a conclusion paragraph summarizing the verdicts.
“In general, this suggests that we should give relatively more weight to tastes and values that we expect to be more universal among civilizations across the multiverse.”
This is a pretty interesting idea to me, Brian. It makes intuitive sense but when would we apply it? Can it only be used as a tiebreaker? It’s difficult for me to imagine scenarios where this consideration would sway my decision.