As the iteration proceeds, there will come a point when the human world loses its last anime, its last opera, its last copy of the Lord of the Rings, its last mathematics, its last online discussion board, its last football game—anything that might cause more-than-appropriate enjoyment. At that stage, would you be entirely sure that the loss was worthwhile, in exchange for a weakly defined “more equal” society?
It’s a good and thoughtful post.
I wonder if it makes sense to model a separate variable in the global utility function for “culture.” In other words, I think the value I place on a hypothetical society runs something like
log(Σ U(x)) + log(U_c), where U(x) is individual person x’s utility and U_c is the overall cultural level.

A society where a million people each enjoy reading the Lord of the Rings but there are no other books would have a high Σ U(x) and a low U_c; a society where a hundred people each enjoy reading a unique book would have a low total Σ U(x) but a high U_c.
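To make that trade-off concrete, here is a minimal sketch in Python of how such a two-term value function could be scored. The per-person utility numbers and the way “cultural level” is counted (one point per distinct book) are purely illustrative assumptions, not anything specified above.

```python
import math

def society_value(individual_utilities, cultural_level):
    # Toy two-term value function: log of summed individual utility
    # plus log of the overall cultural level.
    return math.log(sum(individual_utilities)) + math.log(cultural_level)

# Hypothetical numbers: one unit of utility per reader, and "cultural level"
# counted as the number of distinct books in the society.
monoculture = society_value([1.0] * 1_000_000, cultural_level=1)    # many readers, one book
diverse     = society_value([1.0] * 100,       cultural_level=100)  # few readers, unique books

print(f"monoculture: {monoculture:.2f}")  # high sum of U(x), low U_c  -> ~13.82
print(f"diverse:     {diverse:.2f}")      # low sum of U(x), high U_c  -> ~9.21
```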
That would help model the intuition that culture, even in the abstract, is worth trading off against individual happiness. I think I would prefer a Universe in which the Lord of the Rings was encoded into a durable piece of stone but which otherwise contained nothing else to a Universe in which there was a thriving colony of a few hundred cells of plankton but otherwise nothing else, even if there were nobody around to read the stone. Many economists would call that irrational—but like the OP, I reject the premise that my individual utility function for the state of the world has to break down into other people’s individual welfare.
I’ll accept the intuition, but culture seems even harder to quantify than individual welfare—and the latter isn’t exactly easy. I’m not sure what we should be summing over even in principle to arrive at a function for cultural utility, and I’m definitely not sure if it’s separable from individual welfare.
One approach might be to treat cultural artifacts as fractions of identity, an encoding of their creators’ thoughts waiting to be run on new hardware. Individually they’d probably have to be considered subsapient (it’s hard to imagine any transformation that could produce a thinking being when applied to Lord of the Rings), but they do have the unique quality of being transmissible. That seems to imply a complicated value function based partly on population: a populous world containing Lord of the Rings without its author is probably enriched more than one containing a counterfactual J.R.R. Tolkien that never published a word. I’m not convinced that this added value need be positive, either: consider a world containing one of H.P. Lovecraft’s imagined pieces of sanity-destroying literature. Or your own least favorite piece of real-life media, if you’re feeling cheeky.
How about a universe with one planet full of inanimate cultural artifacts of “great artistic value”, and, on another planet that’s forever unreachable, a few creatures in extreme suffering? If you make the cultural value of the artifact planet high enough, it would seem to justify the suffering on the other planet, and you’d then have to prefer this to an empty universe, or to one with insentient plankton. But isn’t that absurd? Why should creatures suffer lives not worth living just because somewhere far away there are rocks with fancy symbols on them?
Because I like rocks with fancy symbols on them?
I’m uncertain about this; maybe sentient experiences are so sacred that they should be lexically privileged over other things that are desirable or undesirable about a Universe.
But, basically, I don’t have any good reason to prefer that you be happy vs. unhappy—I just note that I reliably get happy when I see happy humans and/or lizards and/or begonias and/or androids, and I reliably get unhappy when I see unhappy things, so I prefer to fill Universes with happy things, all else being equal.
Similarly, I feel happy when I see intricate and beautiful works of culture, and unhappy when I read Twilight. It feels like the same kind of happy as the kind of happy I get from seeing happy people. In both cases, all else being equal, I want to add more of it to the Universe.
Am I missing something? What’s the weakest part of this argument?
So, now I’m curious… if tomorrow you discovered some new thing X you’d never previously experienced, and it turned out that seeing X made you feel happier than anything else (including seeing happy things and intricate works of culture), would you immediately prefer to fill Universes with X?
I should clarify that by “fill” I don’t mean “tile.” I’m operating from the point of view where my species’ preferences, let alone my preferences, fill less than 1 part in 100,000 of the resource-rich volume of known space, let alone theoretically available space. If that ever changed, I’d have to think carefully about what things were worth doing on a galactic scale. It’s like the difference between decorating your bedroom and laying out the city streets for downtown—if you like puce, that’s a good enough reason to paint your bedroom puce, but you should probably think carefully before you go influencing large or public areas.
I would also wonder, if some new thing made me incredibly happy, whether it had been designed to do that by someone or something that isn’t very friendly toward me. I would suspect a trap. I’d want to take appropriate precautions to rule out that possibility.
With those two disclaimers, though, yes. If I discovered fnord tomorrow and fnord made me indescribably happy, then I’d suddenly want to put a few billion fnords in the Sirius Sector.
Do you think the preferences of your species matter more than preferences of some other species, e.g. intelligent aliens? I think that couldn’t be justified. I’m currently working on a LW article about that.
I haven’t thought much about it! I look forward to reading your article.
My point above was simply that even if my whole species acted like me, there would still be plenty of room left in the Universe for a diversity of goods. Barring a truly epic FOOM, the things humans do in the near future aren’t going to directly starve other civilizations out of a chance to get the things they want. That makes me feel better about going after the things I want.
(nods) Makes sense.
If I offered to, and had the ability to, alter your brain so that something that already existed in vast quantities—say, hydrogen atoms—made you indescribably happy, and you had taken appropriate precautions to rule out the possibility that I was unfriendly towards you and that this was a trap, would you agree?
Sure! That sounds great. Thank you. :-)
I think it’s a category error to see ethics as only being about what one likes (even if that involves some work getting rid of obvious contradictions). In such a case, doing ethics would just be descriptive; it would tell us nothing new, and the outcome would be whatever evolution arbitrarily equipped us with. Surely that’s not satisfying! If evolution had equipped us with a strong preference to generate paperclips, should our ethicists then be debating how best to fill the universe with paperclips? Rather, we should be trying to come up with better reasons than mere intuitions and preferences arbitrarily shaped by blind evolution.
If there were no suffering and no happiness, I might agree with ethics just being about whatever you like, and I’d add that one might as well change what one likes and do whatever, since nothing would then truly matter. But it’s a fact that suffering is intrinsically awful, in the only way something can be, for some first-person point of view. Of pain, one can only want one thing: that it stops. I know this about my pain as certainly as I know anything. And the fact that some other being’s pain is at another spatio-temporal location doesn’t change that. If I have to find good reasons for the things I want to do in life, there’s nothing that makes even remotely as much sense as trying to minimize suffering. Especially if you add that caring about my future suffering might not be more rational than caring about all future suffering, as some views on personal identity imply.
I used to worry about that a lot, and then AndrewCritch explained at minicamp that the statement “I should do X” can mean “I want to want to do X.” In other words, I currently prefer to eat industrially raised chicken sometimes. It is a cold hard fact that I will frequently go to a restaurant that primarily serves torture-products, give them some money so that they can torture some more chickens, and then put the dead tortured chicken in my mouth. I wish I didn’t prefer to do that. I want to eat Subway footlongs, but I shouldn’t eat Subway footlongs. I aspire not to want to eat them in the future.
Also check out the Sequences article “Thou Art Godshatter.” Basically, we want any number of things that have only the most tenuous ties to evolutionary drives. Evolution may have equipped me with an interest in breasts, but it surely is indifferent to whether the lace on a girlfriend’s bra is dyed aquamarine and woven into a series of cardioids or dyed magenta and woven into a series of sinusoidal spirals—whereas I have a distinct preference. Eliezer explains it better than I do.
I’m not sure “intrinsically awful” means anything interesting. I mean, if you define suffering as an experience E had by person P such that P finds E awful, then, sure, suffering is intrinsically awful. But if you don’t define suffering that way, then there are at least some beings that won’t find a given E awful.
(shrug) I agree that suffering is bad.
It doesn’t follow that the only thing that matters is reducing suffering.
But suffering is bad no matter your basic preference architecture. That takes the arbitrariness out of ethics, because it applies across all such architectures. Suffering is bad (for the first-person point of view experiencing it) in all hypothetical universes. Well, by definition. Culture isn’t. Biological complexity isn’t. Biodiversity isn’t.
Even if it’s not all that matters, it’s a good place to start. And a good way to see whether something else really matters too is to ask whether you’d be willing to trade a huge amount of suffering for whatever else you consider to matter, all else being equal (as I did in the example about the planet full of artifacts).
Yes, basically everyone agrees that suffering is bad and that reducing suffering is valuable.
And as you say, for most people there are things that they’d accept an increase in suffering for, which suggests that there are also other valuable things in the world.
The idea of using suffering-reduction as a commensurable common currency for all other values is an intriguing one, though.