A Facebook comment I wrote in February, in response to the question ‘Why might having beauty in the world matter?’:
I assume you’re asking about why it might be better for beautiful objects in the world to exist (even if no one experiences them), and not asking about why it might be better for experiences of beauty to exist.
[… S]ome reasons I think this:
1. If it cost me literally nothing, I feel like I’d rather there exist a planet that’s beautiful, ornate, and complex than one that’s dull and simple—even if the planet can never be seen or visited by anyone, and has no other impact on anyone’s life. This feels like a weak preference, but it helps get a foot in the door for beauty.
(The obvious counterargument here is that my brain might be bad at simulating the scenario where there’s literally zero chance I’ll ever interact with a thing; or I may be otherwise confused about my values.)
2. Another weak foot-in-the-door argument: People seem to value beauty, and some people claim to value it terminally. Since human value is complicated and messy and idiosyncratic (compare person-specific ASMR triggers or nostalgia triggers or culinary preferences), and since terminal and instrumental values are easily altered and interchanged in our brains, our prior should be that at least some people really do have weird preferences like that at least some of the time.
(And if it’s just a few other people who value beauty, and not me, I should still value it for the sake of altruism and cooperativeness.)
3. If morality isn’t “special”—if it’s just one of many facets of human values, and isn’t a particularly natural-kind-ish facet—then it’s likelier that a full understanding of human value would lead us to treat aesthetic and moral preferences as more coextensive, interconnected, and fuzzy. If I can value someone else’s happiness inherently, without needing to experience or know about it myself, it then becomes harder to say why I can’t value non-conscious states inherently; and “beauty” is an obvious candidate. My preferences aren’t all about my own experiences, and they aren’t simple, so it’s not clear why aesthetic preferences should be an exception to this rule.
4. Similarly, if phenomenal consciousness is fuzzy or fake, then it becomes less likely that our preferences range only and exactly over subjective experiences (or their closest non-fake counterparts). Which removes the main reason to think unexperienced beauty doesn’t matter to people.
Combining the latter two points with the literature on emotions like disgust and purity, which have both moral and non-moral aspects, it seems plausible that the extrapolated versions of preferences like “I don’t like it when other sentient beings suffer” could turn out to have aesthetic aspects or interpretations like “I find it ugly for brain regions to have suffering-ish configurations”.
Even if consciousness is fully a real thing, it seems as though a sufficiently deep reductive understanding of consciousness should lead us to understand and evaluate it similarly whether we’re thinking about it in intentional/psychologizing terms or just thinking about the physical structure of the corresponding brain state. Ideally, we shouldn’t be more outraged by a world-state under one description than under an equivalent description.
But then it seems less obvious that the brain states we care about should exactly correspond to the ones that are conscious, with no other brain states mattering; and aesthetic emotions are one of the main ways we relate to things we’re treating as physical systems.
As a concrete example, maybe our ideal selves would find it inherently disgusting for a brain state that sort of almost looks conscious to go through the motions of being tortured, even when we aren’t the least bit confused or uncertain about whether it’s really conscious, just because our terminal values are associative and symbolic. I use this example because it’s an especially easy one to understand from a morality- and consciousness-centered perspective, but I expect our ideal preferences about physical states to end up being very weird and complicated, and not to end up being all that much like our moral intuitions today.
Addendum: As always, this kind of thing is ridiculously speculative and not the kind of thing to put one’s weight down on or try to “lock in” for civilization. But it can be useful to keep the range of options in view, so we have them in mind when we figure out how to test them later.
On a somewhat more meta level: Heuristically speaking, it seems wrong and dangerous for the answer to “which expressed human preferences are valid?” to be anything other than “all of them”. There’s a common pattern in metaethics that looks like:
1. People seem to have preference X.
2. X is instrumentally valuable as a source of Y and Z. The instrumental-value relation explains how the preference for X was originally acquired.
3. [Fallacious] Therefore preference X can be ignored without losing value, so long as Y and Z are optimized.
In the human brain algorithm, if you optimize something instrumentally for a while, you start to value it terminally. I think this is the source of a surprisingly large fraction of our values.
Old discussion of this on LW: https://www.lesswrong.com/s/fqh9TLuoquxpducDb/p/synsRtBKDeAFuo7e3