So when we get the computational power, we should run lots of simulations of evolution to see what kinds of preferences evolution tends to generate. And if other people are doing this, it increases the chances that we are currently in a computer simulation.
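As a very rough sketch of what such a simulation might look like (everything here — the agent model, the value names, the selection rule, the numbers — is invented purely for illustration, not taken from the comment), one could evolve populations of agents with random preference weights under selection for resource acquisition and tally which preference features keep recurring across independent runs:

```python
import random

# Toy sketch: evolve agents whose "preferences" are weights over a few
# candidate values, select for reproductive success, and check which
# preference features tend to recur across independent runs.
# All names and numbers here are made up for illustration.

VALUES = ["acquire_resources", "self_preservation", "art", "symmetry_liking"]

def fitness(prefs):
    # Assume fitness tracks resource acquisition and self-preservation;
    # "spandrel" values contribute nothing directly.
    return prefs["acquire_resources"] + 0.5 * prefs["self_preservation"]

def run_evolution(generations=200, pop_size=100):
    pop = [{v: random.random() for v in VALUES} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Each survivor leaves two offspring that copy its preferences
        # with small mutations, clamped to [0, 1].
        pop = [
            {v: min(1.0, max(0.0, parent[v] + random.gauss(0, 0.05))) for v in VALUES}
            for parent in survivors
            for _ in range(2)
        ]
    return pop

# Average preference strengths across several independent "planets":
runs = [run_evolution() for _ in range(10)]
for v in VALUES:
    avg = sum(agent[v] for pop in runs for agent in pop) / sum(len(pop) for pop in runs)
    print(f"{v}: {avg:.2f}")
```

On this toy model the expectation is that the directly selected values drift upward on every "planet" while the spandrels end up scattered, which is exactly the kind of pattern the proposed simulations would measure.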
This gave me an idea that makes things even more complicated:
Let’s assume a scientist manages to create a simulated civilization the same size as his own. It turns out that to keep the civilization running, he will have to sacrifice a lot. All members of the simulated civilization prefer to continue existing, while the “mother civilization” prefers to sacrifice as little as possible.
How much should be sacrificed to keep the simulation running as long as possible?
Should the simulated civilization create simulations of its own to increase the total weight of the preference for continued existence?
Bonus questions:
Does a simulated civilization get to prefer anything?
What are the moral implications of creating new beings that may hold preferences (including having children in real life)?
If the scientist can manipulate the preferences of the simulated civilization, should he? And to what end?
What about education and other preference-changing techniques in real life?
I have to say it’s fun to find the most extreme scenario to doom our civilization by a critical mass of preference. Can you find a more extreme or more realistic one than my civilization-simulating supercomputer or the aliens mentioned in the original post?
I haven’t read the entire post, but a few problems would emerge besides your counterintuitive simulation point.
1) Evolution is more likely to create Omohundro’s Basic AI Drives, and it doesn’t seem ethically desirable to maximize Basic AI Drives for a higher total sum of preferential utilons in the universe. So trying to MaxipEvo (analogous to the Maxipok anti-x-risk principle, except here you maximize the probability that the more “evolvable” values take over) will decrease the value of rarity, uniqueness (per-universe uniqueness), difference, etc.
2) A lot of whether a preference counts as common depends on whether you describe it in a fine-grained or a coarse-grained way.
Maybe most civilizations care about art, but nearly none cares about the sort of pointy architecture that emerged in the Islamic world. If you describe it as art, preferences are being satisfied on Far Venus. If you call it Islamic pointy architecture, no one on Far Venus cares.
Nature seems to find some attractors in design space, which become aesthetically pleasing. Symmetry has produced only a limited number of types on Earth: bilateral, trilateral, quadrilateral, pentagonal, hexagonal, and radial (maybe a few more). Sexual selection, on the other hand, created things as different as the peacock’s tail, the birds-of-paradise dance, moonwalking, etc.
So it depends a lot on which categories you are carving up to make your decisions, and, by extension, which categories you expect them (aliens, Far Venusians) to be sorting things into.
Thanks. :)

1) I’m not suggesting MaxipEvo. I’m suggesting maximizing whatever preferences are out there, including paperclip preferences if those are held by some civilizations. It’s just that many preferences may have their roots in biological evolution, unless goal preservation is really hard.
Humans have weak forms of the basic AI drives, although many of the things we value about ourselves are spandrels that don’t help with winning power in the modern world. I don’t see why it should differ substantially for aliens. If you mean that a nontrivial fraction of agents that colonize their galaxies are paperclippers with values divorced from those of their evolved progenitors, then we would just care about the preferences of those paperclippers. Preferences often mutate in new directions. The preferences of the little mammals from which you came are not the same as your preferences. Your preferences now are not the same as they were when you were 2 months old. It’s not clear why we should regard paperclipping as less of a legitimate preference than some other quirk of evolution on an alien planet.
2) Yes, the content needs to be fine-grained. For instance, in the story, we saw that some people liked paintings, others liked music, others liked poetry, etc. Within those categories you could have further distinctions. That said, if we look at humans, a lot of people value most kinds of art, including those pointy Islamic buildings. I suspect many humans would even value alien art. Think about it: If people got a glimpse of Venusian spandrels of visual construction, they would be awed and think it was really cool.
In any event, I agree that art may be more parochial than things like not murdering, not causing needless suffering, and so on. Still, beings may have a preference that aliens “enjoy whatever spandrels they have,” i.e., the preference for art might be broader than it seems.
Though I enjoyed your commentary, I think I have failed on two counts. First, I was not clear enough about MaxipEvo.
By MaxipEvo I mean the values that are likely to arise in any evolved system, regardless of its peculiarities like oxygen density, planet size, and radiation intake: things like what Nietzsche would call “will to power,” what economists would call “homo economicus,” and what naive biologists would call “selfish individuals.” These are universal, as is symmetry. Anything that evolves would benefit from symmetry and from wanting to capture more resources.
Now let’s do the math here: If the entities outside our Hubble volume outnumber the entities inside it by near infinity to one, or infinity to one, then even a small preference they have about our world should be more important than strong preferences of ours. So anything in the intersection of “commonly evolvable in any complex system with valuable beings” and “has intentionality about something in our tiny corner of the Cosmos” should be a major priority for us. This would lead us to praise will to power, selfishness, and symmetry.
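As a rough formalization of that arithmetic (the symbols here are introduced just for illustration, not from the comment): write $N_{\text{in}}$ and $N_{\text{out}}$ for the numbers of preference-holding entities inside and outside our Hubble volume, $s$ for the strength of our preferences about our own world, and $\epsilon$ for the (much weaker) strength of outsiders’ preferences about our world. Under simple preference aggregation, the outsiders dominate whenever

$$
N_{\text{out}} \cdot \epsilon \;>\; N_{\text{in}} \cdot s
\quad\Longleftrightarrow\quad
\frac{N_{\text{out}}}{N_{\text{in}}} > \frac{s}{\epsilon},
$$

and as $N_{\text{out}}/N_{\text{in}}$ goes to infinity this holds for any $\epsilon > 0$, which is what generates the worry.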
I consider this to be a reductio ad absurdum, in that if those are the values we ought to preserve according to a line of reasoning, the line of reasoning is wrong.
The main paper to keep in mind here is “The Future of Human Evolution,” for me the hallmark of Bostrom’s brilliance. One of the points he makes is that display, including flamboyant display (of the kind Robin Hanson frequently makes fun of), is much of what most matters to us: dance, ritual, culture, the sui generis, personality, uniqueness, etc.
If any ethical argument makes a case against these, and in favor of the things that evolution carves into any self-replicating system with a brain, that argument seems flawed in my view.
If the entities outside our Hubble volume outnumber the entities inside it by near infinity to one, or infinity to one, then even a small preference they have about our world should be more important than strong preferences of ours.
This is what I mentioned in the “Tyranny of the aliens?” section. However, it’s not clear that human-style values are that rare. We should expect ourselves to be in a typical civilization, and certain “ethical” principles like not killing others, not causing needless suffering, reciprocal altruism, etc. should tend to emerge repeatedly. The fact that it happened on Earth seems to suggest the odds are not 1/infinity of it happening in general.
Very particular spandrels like dance and personality quirks are more rare, yes. But regarding the conclusion that these matter less than we thought, one man’s modus tollens is another’s modus ponens. After all, wouldn’t we prefer it if aliens valued what we care about rather than caring only about their own idiosyncrasies?
In any case, maybe it’s common for civilizations to value letting other civilizations do what they value. The situation is not really different from that of an individual within society from a utilitarian standpoint. We let people do their own weird artwork or creative endeavors even if nobody else cares.
The second count on which I was not clear is how much my points about fine-grainedness relate to Yudkowsky’s reflections on “reference class tennis.” There is arbitrariness in defining classes, and if you carve classes differently to find out which classes people care about, you find yourself with arbitrary options. When I care about the art on Far Venus, I’m not sure whether I care about “any art,” “any art resembling ours,” “any art not resembling hip hop and gore,” or “only things that are isomorphic to symphonies.”
Likewise, I don’t know this about them, the Venusians, and this makes a big difference to whether I should create more generic forms of art here or more fine-grained ones.
I wouldn’t use reference classes at all. I’d just ask, “How many other civilizations care about this particular proposed piece of artwork?” I personally don’t care intrinsically about art, but if you asked an art enthusiast, I bet she would say “I care about Venusian masterpiece X” for lots of specific values of X that they might create.
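A toy illustration of that per-artifact approach (the civilizations, artifacts, and weights below are all made up for the example): instead of choosing a reference class like “art” or “art resembling ours,” you sum, over civilizations, how much each one cares about the specific artifact in question.

```python
# Toy sketch of per-artifact (rather than reference-class) aggregation.
# The civilizations, artifacts, and weights are purely illustrative.
preferences = {
    "Earth":     {"pointy Islamic architecture": 0.8, "Venusian masterpiece X": 0.5},
    "Far Venus": {"pointy Islamic architecture": 0.0, "Venusian masterpiece X": 0.9},
}

def total_care(artifact: str) -> float:
    """Sum of how much each civilization cares about this particular artifact."""
    return sum(civ.get(artifact, 0.0) for civ in preferences.values())

for artifact in ["pointy Islamic architecture", "Venusian masterpiece X"]:
    print(artifact, total_care(artifact))
```

The point of the toy version is that no class boundary ever needs to be drawn; the question is always asked of the concrete artifact itself.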
The idea of caring specifically about human quirks rather than alien quirks seems akin to ethnocentrism, though I can see the concern about becoming too broad such that what you care about becomes diluted. I expect people’s horizons will become more cosmopolitan on this question over time, just as has been the historical trend. The march of multiculturalism may one day go intergalactic.
Think about multiverse utilitarianism as a normative force. If it is to be taken seriously, its main consequence will be making things more normal. More evolvable. Less peculiar and unique.
I don’t mind human quirks in particular that much (including art) when I’m wearing the “thinking about multiverses” hat. My point is that an ethical MultiWorld should be one in which Burning Man, Buddhist funerals, Gödel’s theorems, and Axl Rose’s temper are valued in their difference. What matters about those artifacts of cultural craftsmanship is not what a bumblebee or an echidna might have created (will to power, hunger, eagerness to reproduce, symmetry). What matters involves the difference itself.
One of the things that makes them awesome is their divergence.
If Far Venus has equivalent diversity, I’m happy for them. I don’t want to value what they share with us (being constrained by some physics, by logic, by evolution, and by the sine qua non conditions for intelligent life, whichever they are).
Ah, I see. The value of diversity is plausibly convergent, because most organisms will need boredom and novelty-seeking. If many civilizations value diversity, they would each be happy to let each other do their own diverse artwork. So this “force” might not lead to homogenization.