Ritual 2012: A Moment of Darkness
This is the second post of the 2012 Ritual Sequence. The Introduction post is here.
This is… the extended version, I suppose, of a speech I gave at the Solstice.
The NYC Solstice celebration begins bright and loud, and gradually becomes somber and poignant. Our opening songs are about the end of the world, but in a funny, boisterous manner that gets people excited and ready to sing. We gradually wind down, dimming lights, extinguishing flames. We turn to songs that aren’t sad but are more quiet and pretty.
And then things get grim. We read Beyond the Reach of God. We sing songs about a world where we are alone, where there is nothing protecting us, and where we somehow need to survive and thrive, even when it looks like the light is failing.
We extinguish all but a single candle, and read an abridged version of the Gift We Give to Tomorrow, which ends like this:
Once upon a time,
far away and long ago,
there were intelligent beings who were not themselves intelligently designed.
Once upon a time,
there were lovers, created by something that did not love.
Once upon a time,
when all of civilization was a single galaxy,
A single star.
A single planet.
A place called Earth.
Once upon a time.
And then we extinguish that candle, and sit for a moment in the darkness.
This year, I took that time to tell a story.
It’s included in the 2012 Ritual Book. I was going to post it at the end of the sequence. But I realized that it’s actually pretty important to the “What Exactly is the Point of Ritual?” discussion. So I’m writing a more fleshed out version now, both for easy reference and for people who don’t feel like hunting through a large pdf to find it.
It’s a bit longer, in this version—it’s what I might have said, if time weren’t a constraint during the ceremony.
A year ago, I started planning for tonight. In particular, for this moment, after the last candle is snuffed out and we’re left alone in the dark with the knowledge that our world is unfair and that we have nobody to help us but each other.
I wanted to talk about death.
My grandmother died two years ago. The years leading up to her death were painful. She slowly lost her mobility, until all she could do was sit in her living room and hope her family would come by to visit and talk to her.
Then she started losing her memory, so she had a hard time even having conversations at all. We tried to humor her, but there’s only so many times you can repeat the same thought in a five minute interval before your patience wears thin, and it shows, no matter how hard you try.
She lost her rationality, regressing into a child who would argue petulantly with my mother about what to eat, and when to exercise, and whether to visit her friends. She was a nutritionist; she knew what she was supposed to eat and why. She knew how to be healthy. And she wanted to be healthy. But she lost her ability to negotiate her near-term and long-term desires on her own.
Eventually even deciding to eat at all became painful. Eventually even forming words became exhausting.
Eventually she lost not just her rationality, but her agency. She stopped making decisions. She lay on her bed in the hospital, not even having the strength to complain anymore. My mother got so excited on days when she argued petulantly because at least she was doing *something*.
She lost everything that I thought made a person a person, and I stopped thinking of her as one.
Towards the end of her life, I was visiting her at the hospital. I was sitting next to her, being a dutiful grandson. Holding her hand because I knew she liked that. But she seemed like she was asleep, and after 10 minutes or so I got bored and said “alright, I’m going to go find Mom now. I’ll be back soon.”
And she squeezed my hand, and said “No, stay.”
Those two words were one of the last decisions she ever made. One of the last times she had a desire about how her future should be. She made an exhausting effort to turn those desires into words and then breathe those words into sounds so that her grandson would spend a little more time with her.
And I was so humiliated that I had stopped believing that inside of this broken body and broken mind was a person who still desperately wanted to be loved.
She died a week or two later.
Her funeral was a Catholic Mass. My mom had made me go to Mass as a child. It always annoyed me. But in that moment, I was so grateful to be able to hold hands with a hundred people, for all of us to speak in unison, without having to think about it, and say:
“Our father, who art in heaven, hallowed be thy name. Thy kingdom come, thy will be done, on earth as it is in heaven. Give us this day our daily bread, and forgive us our trespasses, as we forgive those who trespass against us. And lead us not into temptation, but deliver us from evil.”
I’m not sure if having that one moment of comforting unity was worth 10 years of attending Catholic mass.
It’s a legitimately hard question. I don’t know the answer.
But I was still so frustrated that this comforting ritual was all based on falsehoods. There’s plenty of material out there you can use to create a beautiful secular funeral, but it’s not just about having pretty or powerful words to say. It’s about knowing the words already, having them already be part of you and your culture and your community.
Because when somebody dies, you don’t have time or energy for novelty. You don’t want to deal with new ideas that will grate slightly against you just because they’re new. You want cached wisdom that is simple and beautiful and true, that you share with others, so that when something as awful as death happens to you, you have tools to face it, and you don’t have to face it alone.
I was thinking about all that, as I prepared for this moment.
But my Grandmother’s death was a long time ago. I wanted the opportunity to process it in my own way, in a community that shared my values. But it wasn’t really a pressing issue that bore down on me. Dealing with death felt important, but it was a sort of abstract importance.
And then, the second half of this year happened.
A few months ago, an aspiring rationalist friend of mine e-mailed me to tell me that a relative died. They described the experience of the funeral, ways in which it was surprisingly straightforward, and other ways in which it was very intense. My friend had always considered themselves an anti-deathist, but it was suddenly very real to them. And it sort of sank in for me too—death is still a part of this world, and our community doesn’t really have ways to deal with it.
And then, while I was still in the middle of the conversation with that friend, I learned that another friend had lost somebody, that same day.
Later, I would learn that a coworker of mine also lost somebody that day as well.
Death was no longer abstract. It was real, painfully real, even if I myself didn’t know the people who died. My friends were hurting, and I felt their pain.
I wandered off into the night to sing my Stonehenge song by myself. It’s not quite good enough at what I needed it for—I’m not a skilled enough songwriter to write that song, yet. But it’s the only song I know of that attempts to do what I needed. To grimly acknowledge this specific adversary, to not offer any false hope about the inevitability of our victory, but to nonetheless march onward, bitterly determined that not quite so many people will die tomorrow as today.
I came back inside. I chatted with another friend about the experience. She offered me what comfort she could. She attempted to offer some words to the effect of “well, death has a purpose sometimes. It helps you see the good things—”
Gah, I thought.
What’s interesting is that I’m not actually that much of an anti-deathist. I think our community’s obsession with eliminating death without regard for the consequences is potentially harmful. I think there are, quite frankly, worse things in the world. If I had to choose between my Grandmother not dying, and my Grandmother not having to gradually lose everything she thought made her her until her own grandson forgot that she was a person, spending her days wracked with pain, I would probably choose the latter.
But still, I’ve come to accept that death is bad, unequivocally bad, even if some things are worse. And I had sort of forgotten, since I’m often at odds with other Less Wrongers about this, how big the gulf was between us and the rest of the world.
I didn’t hold it against my friend. She meant well, and having someone to talk to helped.
A week later, a friend of hers died.
A week after that, another friend of mine lost somebody.
A week after that, it wasn’t a direct friend of a friend who died, but a local activist was murdered a few blocks from someone’s house, and they cancelled plans with me because they were so upset.
Then a hurricane hit New York. Half the city went dark. While it was unrelated, at least one of my friends experienced a death, of sorts, that week. And even if none of my friends were directly hurt by Hurricane Sandy, you couldn’t escape the knowledge that there were people who weren’t so lucky.
And I went back to the notes I had written for this moment and stared at them and thought...
…
...fuck.
Winter was coming and I didn’t know what to do. Death is coming, and our community isn’t ready. I set out to create a holiday about death and… it turns out that’s a lot of responsibility, actually.
This was important, this was incredibly important and so incredibly hard to handle correctly. We as a community—the New York community, at least—need a way to process what happened to us this year, but what happened to each of us is personal, and even though most of us share the same values we all deal with death in our own way and… and… and somehow after all of that, after taking a moment to process it, we need to climb back out of that darkness and end the evening feeling joyful and triumphant and proud to be human, without resorting to lies.
…
…
…there’s a lot I don’t know yet, about what to do, or what to say.
But here’s what I do know:
My grandmother died. But she lived to her late eighties. She had a family of 5 children who loved her. She had a life full of not just fun and travel and adventure but of scientific discovery. She was a dietitian. She helped do research on diabetes. She was an inspiration to women at a time when a woman being a researcher was weird and a big deal. When I say she had a long, full life, I’m not just saying something nice sounding.
My grandmother won at life, by any reasonable standard.
Not everyone gets to have that, but my grandmother did. She was the matriarch of a huge extended family that all came home for Christmas eve each year, and sang songs and shared food and loved each other. She died a few weeks after Christmas, and that year, everyone came to visit, and honestly it was one of the best experiences of my life.
In the dead of winter, each year, two dozen people came to Poughkeepsie, to a big house sheltered by a giant cottonwood tree, and were able to celebrate *without* worrying about running out of food in the spring. At the darkest time of the year, my mother ran lights up a hundred-foot-tall pine tree that you could see for miles.
We were able to eat because hundreds of miles away, mechanical plows tilled fields in different climates, producing so much food that we literally could feed the entire world if we could solve some infrastructure and economic problems.
We were able to drive to my grandmother’s house because other mechanical plows crawled through the streets all night, clearing the ice and snow away.
Some of us were able to come to my grandmother’s house from a thousand miles away, flying through the sky, higher than ancient humans even imagined angels might live.
And my Grandmother died in her late eighties, but she also *didn’t* die when she was in her 70s and the cancer first struck her. Because we had chemotherapy, and a host of other tools to deal with it.
And the most miraculous amazing thing is that this isn’t a miracle. This isn’t a mystery. We know how it came to be, and we have the power to learn to understand it even better, and do more.
In this room, right now, are people who take this all seriously. Dead seriously, who don’t just shout “Hurrah humanity” because shouting things together in a group is fun.
We have people in this room, right now, who are working on fixing big problems in the medical industry. We have people in this room who are trying to understand and help fix the criminal justice system. We have people in this room who are dedicating their lives to eradicating global poverty. We have people in this room who are literally working to set in motion plans to optimize *everything ever*. We have people in this room who are working to make sure that the human race doesn’t destroy itself before we have a chance to become the people we really want to be.
And while they aren’t in this room, there are people we know who would be here if they could, who are doing their part to try and solve this whole death problem once and for all.
And I don’t know whether and how well any of us are going to succeed at any of these things, but...
God damn, people. You people are amazing, and even if only one of you made a dent in some of the problems you’re working on, that… that would just be incredible.
And there are people in this room who aren’t working on anything that grandiose. People who aren’t trying to solve death or save the world from annihilation or alleviate suffering on a societal level. But who spend their lives making art. Music. Writing things sometimes.
People who fill their world with beauty and joy and enthusiasm, and pies and hugs and games and… and I don’t have time to give a shout out to everyone in the room but you all know who you are.
This room is full of people who spend their lives making this world less ugly, less a sea of blood and violence and mindless replication. People who are working to make tomorrow brighter than today, in one way or another.
And I am so proud to know all of you, to have you be a part of my life, and to be a part of yours.
I love you.
You make this world the sort of place I’d want to keep living, forever, if I could.
The sort of world I’d want to take to the stars.
God, I hope I’m not the only one who cried at this.
You are not.
By now I’ve started to lose track—did you attend the actual event or not? I wasn’t sure how well the speech would translate to text, even with some additional polishing. I deliberately didn’t rehearse it much so that it was particularly raw that evening. Curious how it holds up here.
Didn’t come this year I’m afraid, but the text was pretty touching. It was very well written and excellently conveyed the pride you place in humanity.
I did too. *hugs*
Correct me if I’m wrong, but it looks like you’re talking about anti-deathism (weak or strong) as if it was a defining value of the LessWrong community. This bothers me.
If you’re successful, these rituals will become part of the community identity, and I personally would rather LW tried to be about rationality and just that as much as it can. Everything else that correlates with membership—transhumanism, nerdiness, thinking Eliezer is awesome—I would urge you not to include in the rituals. It’s inevitable that they’d turn up, but I wouldn’t give them extra weight by including them in codified documents.
As an analogy, one of the things that bugged me about Orthodox Judaism was that it claims to be about keeping the Commandments, but there’s a huge pile of stuff that’s done just for tradition’s sake, that isn’t commanded anywhere (no, not even in the Oral Lore or by rabbinical decree).
What would a ritual that’s just about rationality and more complex than a group recitation of the Litany of Tarski look like?
Religious groups confess their sins. A ritual of rational confession might involve people going around a circle raising, examining, and discussing errors they’ve made and intend to better combat in the future (perhaps with a theme, like a specific family of biases).
You can also sing songs that are generically about decision theory, metaethics, and epistemology, rather than about specific doctrines. You’d have to write ’em first, though.
I genuinely don’t know how I feel about the “rational confession” idea. On the one hand, the idea of “confession of sins” squicks me out a bit, even though I enjoy other rituals; it reminds me too much of highly authoritarian/groupthink-y religions. On the other hand, having a place to discuss one’s own biases and plan ways to avoid them sounds seriously useful, and would probably be a helpful tradition to have.
It sounds like you like the content, but not the way I framed it. That’s fine. I only framed it like a religious ritual to better fit Adelene’s question; in practice we may not even want to think of it as a ‘ritual,’ i.e., we may not want to adorn or rigidify it beyond its recurrence as a practice.
Well, it depends what you mean by “defining value”. The LW community includes all sorts of stuff that simply becomes much more convincing/obvious/likely when you’re, well, more rational. Atheism, polyamory, cryonics … there’s quite a few of these beliefs floating around. That seems like it’s as it should be; if rationality didn’t cause you to change your beliefs, it would be meaningless, and if those beliefs weren’t better correlated with reality, it would be useless.
I’m especially intrigued that you list polyamory among the beliefs that become more “convincing/obvious/likely” with greater rationality. Just to clarify: on your view, is the fact that I have no particular desire to have more than one lover in my life evidence that I am less rational than I would be if I desired more lovers? Why ought I believe that?
Not speaking for them, but what I do actually think is that there is some portion of the population that would gravitate towards polyamory, but don’t because of cached thinking, so increasing rationality would increase the number of polyamorous people.
It came up in conversation with my second cousin today that I have four boyfriends who all know about each other and get along and can have as many girlfriends as they want. My second cousin had never heard of anything like this, but it sounded immediately sensible and like a better way of doing things to him. Just being in a position to learn that an option exists will increase your odds of doing it.
Well, sure; I agree.
Let me put it this way: it’s one thing to say “some people like X, some people don’t like X, and rationalists are more likely to consider what they actually want and how to achieve it without giving undue weight to social convention.” It’s a different thing to say “rational people like X, and someone’s stance towards X is significant evidence of their rationality.”
This community says the second thing rather unambiguously about X=cryonics and X=atheism. So when cryonics, atheism, and polyamory are grouped together, that seems like significant evidence that the second thing is also being said about X=polyamory.
So I figured it was worth clarifying.
Two more points:
It’s possible for a trait to be strong evidence both for extreme rationality and for extreme irrationality. (Some traits are much more commonly held among the extremely reasonable and the extremely unreasonable than among ‘normals;’ seriously preparing for apocalyptic scenarios, for instance. Perhaps polyamory is one of these polarizing traits.)
Sometimes purely irrational behaviors are extremely strong evidence for an agent’s overall rationality.
But those are only different if your ‘significant’ qualifier in ‘significant evidence’ is much stronger than your ‘more likely’ threshold. In other words, the difference is only quantitative. If the rate of polyamory is significantly higher among rationalists than among non-rationalists, then that’s it; the question is resolved; polyamory just is evidence of rationality. This is so even if nearly all polyamorous people are relatively irrational. It’s also so even if polyamory is never itself a rational choice; all that’s required is a correlation.
EDIT: Suppose, for instance, that there are 20 rationalists in a community of 10,020; and 2 of the rationalists are polyamorous; and 800 of the non-rationalists are polyamorous. Then, all else being equal, upon meeting a poly person P a perfect Bayesian who knew the aforementioned facts would need to update in favor of P being a rationalist, even knowing that only 2 of the 802 poly people in the community are rationalists.
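The arithmetic in that example can be checked directly. Here is a minimal sketch; all the numbers come straight from the example above, and nothing else is assumed:

```python
from fractions import Fraction

# Population from the example: 20 rationalists (2 of them poly)
# and 10,000 non-rationalists (800 of them poly).
rationalists, poly_rationalists = 20, 2
others, poly_others = 10_000, 800

# Prior: P(rationalist) before learning anything about P.
prior = Fraction(rationalists, rationalists + others)

# Posterior: P(rationalist | poly), conditioning on the 802 poly people.
posterior = Fraction(poly_rationalists, poly_rationalists + poly_others)

# Likelihood ratio: P(poly | rationalist) / P(poly | non-rationalist),
# i.e., the factor by which the odds shift on learning P is poly.
likelihood_ratio = (Fraction(poly_rationalists, rationalists)
                    / Fraction(poly_others, others))

print(float(prior))             # ≈ 0.0020
print(float(posterior))         # ≈ 0.0025
print(float(likelihood_ratio))  # 1.25
```

With these particular numbers the update is real but modest: learning that P is poly multiplies the odds of P being a rationalist by exactly 1.25, even though the posterior probability remains tiny.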
Yup, all of that is certainly true.
Similarly, there is likely some number N such that my weight being in or above the Nth percentile of the population is evidence of rationality (or of being a rationalist; the terms seem to be being used interchangeably here).
So, I started out by observing that there seemed to be a property that cryonics and atheism shared with respect to this community, which I wasn’t sure polyamory also shared, which is why I made the initial comment.
I was in error to describe the property I was asking about as being primarily about evidence, and I appreciate you pointing that out.
In retrospect, I think what I’m observing is that within this community atheism and cryonics have become group markers of virtue, in a way that having a weight above the abovementioned Nth percentile is not a group marker of virtue (though it may be very strong evidence of group membership). And what I was really asking was whether polyamory was also considered a group marker of virtue.
Looking at the flow of this discussion (not just in this branch) and the voting patterns on it, I conclude that yes, it is.
We also have to be careful again about whether by ‘mark of virtue’ we mean an indicator of virtue (because polyamory might correlate with virtue without being itself virtuous), or whether by ‘mark of virtue’ we mean an instance of virtue.
In other words, all of this talk is being needlessly roundabout: What we really want to know, I think, is whether polyamory is a good thing. Does it improve most people’s lives? How many non-polyamorous people would benefit from polyamory? How many non-polyamorous people should rationally switch to polyamory, given their present evidence? And do people (or rationalists) tend to accept polyamory for good reasons? Those four questions are logically distinct.
Perhaps the last two questions are the most relevant, since we’re trying to determine not just whether polyamorous people happen to win more or be rationalists more often, but whether their polyamory is itself rationally motivated (and whether their reasons scale to the rest of the community). So I think the question you intend to ask is whether polyamorous people (within the LessWrong community, at a minimum) have good reason to be polyamorous, and whether the non-polyamorous people have good reason to be non-polyamorous.
This question is very analogous to the sort of question we could ask about cryonics. Are the LessWrongers who don’t want to be frozen being irrational—succumbing to self-deception, say? Or are the more cryonics-happy LessWrongers being irrational? Or are they both being rational, and they just happen to have different core preferences?
I agree that “whether polyamory (or cryonics, or whatever) is a good thing” is a thing we want to know. Possibly even the thing we really want to know, as you suggest.
When you unpack the question in terms of improving lives, benefiting people, etc. you’re implicitly adopting a consequentialist stance, where “is polyamory a good thing” equates to “does polyamory have the highest expected value”? I endorse this completely.
In my experience, it has a high positive expected value for some people and a high negative expected value for others, and the highest EV strategy is figure out which sort of person I am and act accordingly.
This is very similar to asking whether a homosexual sex life has the highest expected value, actually, or (equivalently) whether a homosexual sex life is a good thing: it definitely is for some people, and definitely is not for others, and the highest-EV strategy is to pick a sex life that corresponds to the sort of person I am.
All of that said, I do think there’s a difference here between unpacking “is polyamory a good thing?” as “does polyamory have the highest expected value?” (the consequentialist stance) and unpacking it as “is polyamory a characteristic practice of virtuous people?” (the virtue-ethicist stance).
Perhaps what I mean, when I talk about markers of virtue, is that this community seems to be adopting a virtue-ethics rather than a consequentialist stance on the subject.
We agree on the higher-level points, so as we pivot toward object-level discussion and actually discuss polyamory, I insist that we begin by tabooing ‘polyamory,’ or stipulating exactly what we mean by it. For instance, by ‘Polyamory is better than monamory for most people.’ we might mean:
Most people have a preference for having multiple simultaneous romantic/sexual partners.
Most people have such a preference, and would live more fulfilling lives if they acknowledged it.
Most people would live more fulfilling lives if they attempted to have multiple romantic/sexual partners.
Most people would live more fulfilling lives if they actually had multiple romantic/sexual partners.
Most people are capable of having multiple romantic/sexual partners if they try, and would live more fulfilling lives in that event.
Most people would live more fulfilling lives if they at least experimented once with having multiple romantic/sexual partners.
Most people would live more fulfilling lives if they were sometimes willing to have multiple romantic/sexual partners.
Some conjunction or disjunction of the above statements.
More generally, we can distinguish between ‘preference polyamory’ (which I like to call polyphilia: the preference for, or openness to, having multiple partners, whether or not one actually has multiple partners currently) and ‘behavioral polyamory’ (which I call multamory: the actual act of being in a relationship with multiple people). We can then cut it even finer, since dispositions and behaviors can change over time. Suppose I have a slight preference for monamory, but am happy to be in poly relationships too. And, even more vexingly, maybe I’ve been in poly relationships for most of my life, but I’m currently in a mono relationship (or single). Am I ‘polyamorous’? It’s just an issue of word choice, but it’s a complex one, and it needs to be resolved before we can evaluate any of these semantic candidates utilitarianly.
And even this is too coarse-grained, because it isn’t clear what exactly it takes to qualify as a ‘romantic/sexual’ partner as opposed to an intimate friend. Nor is it clear what it takes to be a ‘partner;’ it doesn’t help that ‘sexual partner’ has an episodic character in English, while ‘romantic partner’ has a continuous character.
As for virtue ethics: In my experience, ideas like ‘deontology,’ ‘consequentialism,’ and ‘virtue ethics’ are hopeless confusions. The specific kinds of arguments characteristic of those three traditions are generally fine, and generally perfectly compatible with one another. There’s nothing utilitarianly unacceptable about seriously debating whether polyamory produces good character traits and dispositions.
I should note that my original question wasn’t (and wasn’t intended to be) about polyamory, but rather about MugaSofer’s categorizations (and indirectly about LW’s). So from my perspective, I have been having an object-level discussion, give or take.
But, OK, if you want to actually discuss polyamory, I’m OK with that too.
Were I to taboo “polyamory”, I would unpack it as the practice of maintaining romantic relationships with more than one person at a time. I would similarly taboo “monamory” (were such a term in common usage) as the practice of maintaining romantic relationships with exactly one person at a time.
(And, sure, as you say, we can further drill down into this by exploring what “romantic” means, and how it differs from intimate friendship. And for that matter what “person” means and what a “practice” is and what it means to “maintain” a “relationship” and so forth, if we want.)
So by “Polyamory is better than monamory for most people.” I would mean that the practice of maintaining romantic relationships with more than one person at a time is better than the practice of maintaining them with one person at a time, for most people.
And I would also say that someone who is currently involved in romantic relationships with no more than one person is not currently engaging in polyamory, though that’s not to say that they haven’t in the past nor that they won’t in the future. And someone currently involved in romantic relationships with no people is not engaging in monamory either.
The difference between the unpackings you propose seems to have nothing to do with different understandings of “polyamory” or “monamory”, but rather with different understandings of “better”.
I would probably unpack “polyamorous” as applied to a person either as preferring romantic relationships with more than one person at a time or as requiring romantic relationships with more than one person at a time. I don’t really care which meaning gets used but it’s important to agree or conversations tend to derail. (Similar issues arise with whether “homosexual” is understood to exclude people attracted to the opposite sex. Both usages are common, but it’s difficult to talk about homosexuality clearly if we don’t know which usage is in play.)
Here’s what I was trying to get at: Polyamory vs. monamory isn’t a fair fight, because monamory is one relationship type. Polyamory isn’t one relationship type; it’s an umbrella term for thousands of different relationship types, including:
three-person relationships only
n-person relationships only, for higher values
multiple romantic partners, but only one sexual partner
multiple sexual partners, but only one romantic or emotionally intimate partner (‘swinging’)
couples where one partner is strictly monamorous and the other is polyamorous
‘open’ two-person relationships (possibly a limiting case of the primary/secondary distinction, below)
‘closed’ more-than-two-person relationships (polyfidelity)
traditional polygamy and harems
one-night stands only
orgies only
fully connected polyamorous networks (transitivity holds for relationships)
‘rings’ or ‘chains’ (i.e., minimally connected polyamorous networks)
networks with ‘clusters’ of relative intimacy or relative exclusivity (e.g., quads)
hierarchical polyamory: ‘primary’ partners vs. ‘secondary’ (vs. ‘tertiary,’ etc.), including mistresses and concubines
treating everyone with equal romantic/sexual intimacy (omnamory)
treating an extremely large group of people (e.g., a large commune or town or cult) with equal romantic/sexual intimacy (‘tribes’)
regular polyamory broken by occasional bouts of monamory
regular monamory broken by occasional bouts of polyamory
homosexual vs. bisexual vs. heterosexual polyamorous networks
long-term polyamorous relationships
short-term polyamorous relationships
‘rotating’ polyamorous arrangements (i.e., one schedules specific blocs of time for different partners, with ‘shifts’ lasting days or even months at a time)
celibate or asexual polyamory
polyamorous relationships with a distinctive significance or theme or interest, e.g.: ‘family’ polyamory; religious polyamory; BDSM polyamory; marriage polyamory....
All of the above are off the table for traditional monogamous pairing. What are the odds that a majority of people would just happen to converge on exactly one ideal relationship type? If we expect people to have diverse preferences, we should expect polyamory to dominate monamory.
I suppose.
That said, not all two-person relationships are the same type of relationship either. My relationship with my husband is not very much like my mom’s (former) relationship with hers, despite both relationships being monogamous… indeed, it has more in common with several of my friends’ poly relationships.
That said, though, we can certainly ask whether there are more ways for N people to be in a relationship at a time than for 2 people to be in a relationship at a time. Yeah, I would expect so, I guess. (The prospect of itemizing them, as you seem to be trying to do here, seems both daunting and not terribly useful, though perhaps entertaining.)
Does the difference actually matter that much, in terms of what leaves people better off? Maybe. I’m not really sure. I’m not even sure how to approach the question.
Certainly, the more willing people are to explore a wide range of relationship-space, and the more able they are to recognize what leaves them better off in a relationship, the more likely they are to find a way of being in relationship that leaves them better off. But this seems no more (though no less) true for being willing to experiment with polyamory as for being willing to experiment with their nonpreferred gender as for willing to experiment with various sexual kinks as for many other things.
If you’re counting those as separate, you should also count homosexual vs heterosexual monamorous relationships, and long-term vs short-term monamorous relationships. :-)
Traditional Euro/American monogamy treats homosexual and short-term relationships as deviations from the ideal, not as legitimate alternative ideals. One does not choose or prefer to be a serial monogamist.
But it’s possible traditional monogamy is an unfair straw-manning of monamory in this discussion. I’m happy to include gay/lesbian couples, and perhaps some intersex/genderqueer ones while we’re at it. But I’m not so sure deliberately short-term, serial relationships should go in the monamorous camp rather than the polyamorous one; it seems to slightly break the spirit of monamory, as usually conceived, to not even aspire to have non-short-term relationships. And how do we draw the borders between relationships in serial monamory, to keep it from bleeding over into polyamory at the boundaries? To avoid haggling over definitions, perhaps we should treat serial monamory as a third category, distinct from both ‘committed’ monamory and ‘simultaneous’ polyamory.
I would certainly agree that as the set of ways to be in relationship we refer to as “monogamy” gets smaller and smaller, the odds that it’s optimal for a given person dwindle.
I’ve utterly lost sight of why that’s interesting, or why we need all these labels.
Can you back up a little and summarize your goal, here?
My two main goals are to draw attention to why we privilege ‘monogamy wins’ or ‘monamory wins’ out of hypothesis-space in the first place, and to evaluate the usefulness of the categories under discussion, as a prolegomenon to settling the duly clarified questions empirically.
We can start compiling research, and proposing new research, that settles this question; but we can only do so productively if we’ve clarified the question enough to isolate the plausibly very important variables. Surely some arities of relationships (2-person, 3-person, 4-person...) are better than others; but quality of relationship will also vary based on the structure of the network (degree and kind of connectedness), the temporal dynamics of the network, the level (and kind) of experience and honesty and affection of the participants, the sexual and romantic behavior, and for that matter how the relationship type is treated in the relevant society. (E.g., monamory could be more fun in our culture even if polyamory has more fun-potential in otherwise preferable hypothetical cultures.) Asking questions like these is more interesting, more clear, and more answerable than just ‘is polyamory better than monogamy?’.
If the research is fine-grained enough, it should also address the standard deviation from the ‘ideal relationship type,’ and discover if there are distinct macro-level population clusters with importantly different preferences, or whether it’s bell curves all the way down.
Gotcha.
WRT why we privilege monogamy: well, we certainly haven’t always done so, so were I interested in the question I’d probably look at the history of marriage and the decline of polygamy and see what other factors were in play at the time.
WRT researching relationship quality, I would probably start by asking how I can tell a high-quality relationship apart from a low-quality relationship, then by going out in the world and seeing what kinds of relationship structures correlate with relationship quality.
The relationship between that second question and the “how many partners?” question is tenuous at best, but if I focus on the overlap as we’ve been doing (and thereby ignore the majority of relationship-space) my expectation is that I’d find nominally poly relationships correlate better with relationship quality than nominally monogamous ones, based on my observations about how easy it is to stay in a low-quality monogamous relationship vs. a low-quality poly relationship.
My expectation is also that this would be dwarfed by other factors we would see if we weren’t ignoring the rest of relationship-space.
Also, if we found anything remotely resembling a normal distribution of happiness around a single “ideal relationship type” that wasn’t a confounding artifact around some other factor, I would be amazed. To the point that I’d pretty much have to discard all of my current beliefs about relationships. I would probably defy the data instead.
The trouble here is that different relationship types serve different goals. You’re more likely to come up with a flowchart which takes you from values to recommended relationship type than the claim that relationship type X is better for everyone than all other relationship types.
Yup, I completely agree.
Non sequitur.
People eating what is usually called food should be a minority because there are so many other things that fit in your mouth: stones, grass, computer components …
When two people meet, what is called some kind of handshake / traditional greeting should be a minority because there are so many other potential ways of interacting: touching their head, touching their elbow …
Just because one “umbrella term” unpacks into more constituent types does not imply at all that the cumulative probability of a random human belonging to that umbrella term dominates.
Diverse preferences do not mean each atomic category is equiprobable.
Almost as if there are some … common characteristics, which preclude some i.i.d. dispersion over every conceivable category?
I’m not saying what is or is not the majority “ideal relationship type”. I’m just saying that I don’t think your argument works.
For this to be a relevant analogy, we need to have adequate reason to think that monamory, like food, deserves its privileged position out of possibility-space. There are specific, overwhelmingly powerful reasons to think that stones, grass, and computer components are inadequate sources of human nutrition; but in the absence of such considerations, it would certainly be unreasonable to simply assume that what we’ve always eaten is the best thing we could possibly eat out of some set of options.
My claim isn’t that no possible evidence could ever show that monamory is better than polyamory. My claim is only that in the absence of strong evidence in either direction, we should expect polyamory to win, for the same reason we should expect 99 randomly selected stones to include at least one stone that’s shinier than a single other randomly selected stone. Some positive argument for monamory is needed; whereas no positive argument is needed to privilege polyamory, so long as it permits orders of magnitude more varieties of human behavior than does traditional monogamy. It’s because no one specific behavior has yet been shown to have a privileged amount of utility that the broader category gets a head start.
With the caveat that every partner knows about it and consents to the arrangement, free of duress. Otherwise it would be called cheating and/or coercion, which is far more widespread than polyamory. On a related note, when comparing polyamory to other arrangements, one has to account for the effects of cheating in a monogamous arrangement.
Agreed that consent and transparency are other dimensions along which clarity is useful.
I wouldn’t object to unpacking “polyamory” as requiring a high level of consent and transparency.
Agreed that various degrees of coercion in relationships (of all sorts) are far more common than perfect consent.
Agreed that various degrees of deceit and/or concealment in relationships (of all sorts) are far more common than perfect transparency.
That seems too complex.
Are people happier, for the most part and all other things being equal, having multiple romantic partners, having single romantic partners, or is there too much variation between individuals to generalize?
Dave seems to be saying that there’s too much individual variation to generalize. I don’t think I can answer the question, because I don’t know how to work out all other things being equal: right now it seems to me that lack of social acceptance makes polyamory a pretty bad choice for most people, even if they are inclined towards it. It very seriously limits the number of people you can have relationships with, for example.
I don’t quite think I’m saying this.
I am saying that there are people who, for the most part and all other things being equal, are better off (which is similar to happier, I guess) having multiple romantic partners, and there are other people who (ftmpaaotbe) are better off having single romantic partners. (I also think, though I haven’t previously said, that there are people who ftmpaaotbe are better off having no romantic partners.)
But if you insist on asking whether people are ftmpaaotbe better off with single or multiple partners, without reference to which type of person, I do think the question is answerable. I’m not sure what the answer is. I just think it’s the wrong question to ask, and I don’t care very much about the answer.
This is in a similar sense that I can tell you what a person’s chance of getting pregnant after unprotected sex is, independent of their age or gender, but it’s really a far more useful question to ask if I break the results out by age and gender.
And, yes, I agree that ftmpaaotbe conceals a wealth of trickiness. That said, “this is a bad choice because it’s socially unacceptable” is also a very tricky line of argument.
Okay, gotcha… I actually made a New Year’s resolution not to go on this website anymore, for the sake of time management, so this is my last post. But I think I understand your point! A good note to go out on.
I know how a consequentialist (at least, one operating with the intention of maximizing ‘human values’) would unpack these questions, and I know how we could theoretically look at facts and give answers to ze’s questions.
But how, on earth, would “is polyamory the characteristic of virtuous people” get unpacked? What does “virtuous” mean here and what would it look like for something or someone to be “virtuous”?
I know you probably didn’t mean to get dragged into a conversation about Virtue Ethics, but I’ve seen it mentioned on LW a few times and have always been very curious about its local version.
Well, not being a virtue ethicist myself, I’m probably not the best guy to ask.
My question for virtue ethicists is “well, OK, but how do you tell who is virtuous?”
Then again, a virtue ethicist can just as reasonably ask “well, OK, but how do you tell what consequences are desirable?” to which I, as a consequentialist, essentially reply “I consult my intuitions about value.” Life has more value than death, joy has more value than suffering, growth has more value than stagnation, and so forth. How do I know that? Geez, I dunno. I just know.
Presumably a virtue ethicist can just as readily reply “I consult my intuitions about virtue.” I suppose it’s no less reasonable.
Of course, if polyamory turns out to be the best thing for almost all people, or at least lesswrongers, then a consequentialist would behave the same way.
Also true.
Do you believe polyamory is the best thing for almost all people? (Or at least lesswrongers?)
On balance, no. In fact, I agree with your main point; I was about to add a note to that effect when I saw your comment. Ah well.
(Disclaimer: I am not particularly polyamorous myself, and I’m certainly not in a poly relationship.)
Nitpick: while a significant fraction of rational people are not polyamorous, polyamory could still be better evidence for rationality than atheism. That’s because there are so many atheists around, many of whom became atheists for the wrong reasons (being raised as such, rebellion…).
Let’s try some math with a false dichotomy approximation: someone could be Rational (or not), pOlyamorous (or not), and Atheist (or not). We want to measure how much evidence pOlyamory and Atheism are evidence for Rationality, given Background information B. Those are:
Atheism gives 10 log(P(A|RB)÷P(A|¬RB)) decibels of evidence for Rationality
pOlyamory gives 10 log(P(O|RB)÷P(O|¬RB)) decibels of evidence for Rationality
Now imagine that B tells us the following: “Among the 6 billion people on Earth, about 1 billion are atheists, 10 million are rational, and 1 million are polyamorous. All rational people are atheists, and 5% of them are polyamorous”.
So:
P(A|RB) = 1
P(A|¬RB) = 990,000,000÷5,990,000,000 = 99÷599 ~= 0.17
P(O|RB) = 0.05
P(O|¬RB) = 500,000÷5,990,000,000 = 5÷59900 ~= 8.3×10⁻⁵
Applying the two formulas above, Atheism gives about 8 decibels of evidence for rationality. Polyamory on the other hand, gives about 28. And rationality itself, P(R|B), starts at about −28. pOlyamory is enough to seriously doubt the irRationality of someone, while Atheism doesn’t even raise it above the “should think about it” threshold.
If this is not intuitive, keep in mind that according to B, only 1% of Atheists are Rational, while a whopping 50% of pOlyamorous people are. Well, with those made-up numbers, anyway. Real numbers are most probably less extreme than that. But I still expect to find more rationalists among a polyamorous sample than among an atheist sample.
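For anyone who wants to check the arithmetic, the made-up population figures from B can be plugged in directly. This is just a quick sketch of the calculation above — the numbers are the hypothetical ones from B, not real data:

```python
import math

def decibels(p_given_r, p_given_not_r):
    """Evidence in decibels: 10 * log10 of the likelihood ratio."""
    return 10 * math.log10(p_given_r / p_given_not_r)

# Hypothetical figures from background information B
total = 6_000_000_000
atheists = 1_000_000_000
rational = 10_000_000       # all of them atheists
poly = 1_000_000

rational_poly = int(0.05 * rational)   # 500,000 rational polyamorists
not_rational = total - rational        # 5,990,000,000

p_a_given_r = 1.0
p_a_given_not_r = (atheists - rational) / not_rational   # ~0.17
p_o_given_r = 0.05
p_o_given_not_r = (poly - rational_poly) / not_rational  # ~8.3e-5

print(decibels(p_a_given_r, p_a_given_not_r))        # Atheism: ~7.8 dB
print(decibels(p_o_given_r, p_o_given_not_r))        # pOlyamory: ~27.8 dB
print(10 * math.log10(rational / not_rational))      # prior odds: ~ -27.8 dB
```

The ~28 dB from pOlyamory almost exactly cancels the ~−28 dB prior, while the ~8 dB from Atheism barely dents it — which is the point of the comment above.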
Yes, that’s true.
My reply to Robb elsewhere in this thread when he made a similar point is relevant here as well.
I agree. I wouldn’t have worded the original comment that way.
They really ought not to, though. Living forever, like polyamory, is a preference which hinges strictly on a person’s utility function. It’s perfectly possible for a rational agent to not want to live forever, or to not be polyamorous.
Even if someone considers polyamory and cryonics morally wrong… in this community we often use rational and bayesian interchangeably, but let’s revert to the regular definition for a moment. People who condemn polyamory or cryonics based on cached thoughts are not rational in the true English sense of the word (rational—having reason or justification for belief) but they are not any less epistemically bayesian...it’s not like they have a twisted view of reality itself.
Atheism...well that’s a proposition about the truth, so you could argue that it says something about the individual’s rationality. Trouble is, since God is so ill defined, atheism is poorly defined by extension. So you’d get someone like Einstein claiming not to be an atheist on mostly aesthetic grounds.
Because of our semantic idiocy atheism implies adeism as well, even though deists, atheists, and pantheists have otherwise identical models about observable reality...so I’d hesitate to say that deism/pantheism imply irrationality.
Edit: Also, let’s not confuse intelligence with bayesian-ness. Intelligence correlates with all the beliefs mentioned above largely because it confers resistance to conformity, and that’s the real reason that polyamory and atheism is over-represented at lesswrong. Cryonics...I think that’s a cultural artifact of the close affiliation with the singularity institute.
Regarding polyamory, it could also be founder effect — given that several of the top contributors are openly poly, that both men and women are among them, and so on.
Alicorn used to be mono, and I think so did Eliezer; and the fraction of poly respondents was about the same in the last two surveys, which… some part of my brain tells me is evidence against your hypothesis, but now that I think about it I’m not sure why.
But we’re talking about probability, not possibility. It’s possible for a mammal to be bipedal; but evidence for quadrupedalism is still evidence for being a mammal. Similarly, it’s possible to be irrational and polyamorous; but if the rate of polyamory is greater among rationalists than among non-rationalists, then polyamory is evidence of rationality, regardless of whether it directly causally arises from any rationality-skill. The same would be true if hat-wearing were more common among rationalists than among non-rationalists. It sounds like you’re criticizing a different attitude than is TheOtherDave.
Well, I’m not especially poly myself, but it seems to me rationalists are more likely to look at monogamy and seriously consider the possibility it’s suboptimal.
BTW, there are plenty of monogamists who think it’s immoral for anyone to have a sexual relationship with someone without also committing to not have sex with anyone else, whereas I’d guess there aren’t many poly people who think it’s immoral for other people to have monogamous relationships.
I suspect it depends somewhat on how I phrase the question.
Even in my own American urban poly-friendly subculture, I expect a significant percentage of poly folk would agree that there exist a great many monogamous relationships right now that are immoral, which would not be immoral were they polygamous, because they involve people who ought to be/would be happier if they were/are naturally polygamous. I’m not sure what numbers they’d put around “many”, though. I know several who would put it upwards of 50%, but I don’t know how representative they are.
I therefore suspect that some (but I don’t know how many) of them would, if they were coherent about their understanding of evidence, reluctantly agree that being in a monogamous relationship is evidence of immorality.
But I agree that there are few if any poly folk who would agree (other than as a signaling move) that monogamous relationships are definitionally immoral.
That’s pretty silly. The suffering from jealousy and the stress of having to think through all those difficult issues would make polyamory a net loss for many people.
If you wanted to put them down, you might have a case for calling such people weak or stupid for being unable to deal with emotions or think about these issues...or you might say that they are wise, and they are picking their battles and investing those emotional/intellectual resources into things that matter more to them.
Of course, I think you’d be completely justified in calling the belief that polyamory is immoral as a utilitarian net evil.
How many monogamists hold such opinions but not due to religiosity (or the unexamined remnants of former religiosity)?
Well, quite a lot aren’t aware of the existence of polyamory at all. If they think that a person who’s in a sexual relationship with someone would necessarily feel betrayed if they knew that person was also having sex with someone else, they would be likely to consider it immoral even without a religious basis.
Numerically, though, I have no idea.
I dunno—but if you mean “the unexamined remnants of former religiosity” on a societal level¹ rather than on an individual level, then I guess that’s the main reason for the overwhelming majority of such people to hold such opinions. There might also be a few people who know that monogamy can curb the spread of STDs and lack a clear distinction between terminal and instrumental values, and/or (possibly incorrectly²) believe that monogamy is “natural” (i.e. it was the norm in the EEA) and commit the naturalistic fallacy, though.
i.e., a society used to have a memeplex, originating from religion, which included the idea that “one can only (romantically) love one person at a time”; that society has since shed most of that memeplex, but not that particular idea, which is still part of the intersubjective truth—even among individuals who were never religious in the first place.
“Possibly” meaning that I don’t know myself, because I haven’t looked into that yet—not that I’ve seen all the available evidence and concluded it doesn’t definitely point one way or another.
This sounds true, but I’m not sure how it’s relevant to my comment beyond my use of the word “polyamory”.
I think I wanted to show how people who are monogamous usually are because of a cached belief, whereas people who are polyamorous usually are because they’ve thought about both possibilities and concluded one is better.
Then you failed. Consider the following variant of your argument:
“there are plenty of non child molesters who think it’s immoral for any adult to have a sexual relationship with a child, whereas I’d guess there aren’t many child molesters who think it’s immoral for other adults to have relationships exclusively with adults.”
“I think I wanted to show that people who are not child molesters usually are because of a cached belief, whereas people who are child molesters usually are because they’ve thought about both possibilities and concluded one is better.”
That’s distressingly convincing.
Why was that downvoted to −2? Technically that’s correct (though by “show” I didn’t mean ‘rigorously prove’, I meant ‘provide one more piece of evidence’—but yeah, the second paragraph of your comment is evidence for the third, though priors are different in the two cases).
“Let us not speak of them, but look, and pass.”
I don’t think so. The existence of a widespread moral prohibition against some uncommon behavior, which is not matched by a claim of immorality of the typical behavior by those who defend the uncommon behavior, is not evidence that the widespread moral prohibition is a “cached belief” (that is, a meme maintained only due to tradition and intellectual laziness). People in the majority group could well have pondered the uncommon behavior and decided they had good reason to consider it immoral.
Let A(X) = “There are plenty of non X-ers who think it’s immoral for anyone to X, whereas there aren’t many X-ers who think it’s immoral for other people to refuse to X.”
Let B(X) = “People who are non-X-ers usually are because of a cached belief, whereas people who are X-ers usually are because they’ve thought about both possibilities and concluded one is better.”
Are you really saying that log(P(A(X)|B(X))/P(A(X)|¬B(X))) ≤ 0? or do you just mean that while positive it is very small? Because I really can’t see how A(X) can be more likely given ¬B(X) than given B(X).
¬B(X) is “People who are non-X-ers rarely are because of a cached belief, or people who are X-ers rarely are because they’ve thought about both possibilities and concluded one is better.”
Why do you think that ¬B(X) would make A(X) any less likely than B(X) would?
Ah. Very true.
More likely than the typical person on the street? Sure, agreed. As are contrarians, I’d expect.
Yup. Just like technophiles are more likely to embrace the Singularity.
Yes, for reasons that have already been described, but it’s weak evidence, and other things you know about yourself presumably screen it off.
As of now, there is no evidence that the average LessWronger is more rational than the average smart, educated person (see the LW poll). Therefore, a lot of LWers thinking something is not any stronger evidence for its truth than any other similarly-sized group of smart, educated people thinking it. Therefore, until we get way better at this, I think we should be humble in our certainty estimates, and not do mindhacky things to cement the beliefs we currently hold.
Who said anything about mindhacking? I’m just saying that we should expect rationalists to believe some of the same things, even if nonrationalists generally don’t believe these things. Considering the whole point of this site is to help people become more rational, recognize and overcome their biases etc. I’m not sure what you’re doing here if you don’t think that actually, y’know, happens.
Raemon did. It’s a ritual, deliberately styled after religious rituals, some of the most powerful mindhacks known.
I … didn’t get the impression that this was intended to mindhack people into moving closer to LessWrong consensus.
Oh, sorry, neither did I. I’m not trying to accuse Raemon of deliberate brainwashing. But getting together every year to sing songs about, say, existential risk will make people more likely to disregard evidence showing that X-risk is lower than previously thought. Same for every other subject.
Ah, I guess it was the use of “deliberately” that confused me. Now I come to think of it, this is mentioned as a possible risk in the article, and dismissed as much less powerful than, y’know, talking about it all the damn time.
This probably has something to do with Eliezer’s profile on OkCupid, and his being the main rationalist.
I’ve never read his OkCupid profile, but I must admit Eliezer’s beliefs are certainly … correlated … with LW group norms. Not perfectly, of course. But he wrote the sequences, which anyone posting here is expected to have read, he founded the site, he’s the highest-status member of this community—when he believes something, people at least consider it.
I thought there was a rule about not breaking tradition, even if the tradition isn’t otherwise supported. No?
The line that people tend to quote there is “מנהג ישראל דין הוא” (the custom of Israel is law), but most people have never looked up its formal definition. Its actual halachic bearing is much too narrow to justify (for example) making kids sing Shabbat meal songs.
“well, death has a purpose sometimes. It helps you see the good things...”
I don’t find this repugnant. Your friend clearly would not kill their grandmother in order to learn a lesson. I think they are simply looking for the silver lining, and looking away from the horror. This is a fair strategy in this case, because the fact is that nobody was able to prevent your grandmother’s death. Being rational means being able to look at the world as it is, but it doesn’t mean you’re never allowed to stop staring at the worst parts.
I don’t find it repugnant and I’m actually fine with people who deal with death that way, in many cases. But it really wasn’t what I needed to hear at that moment.
Ah. So it’s not all about existential risk; it’s also about making existence itself more worthwhile… I’m going to be shameless here and ask directly: do any of you guys work on something where you could use the help of a Junior Engineer in… well, it’s very generic — we do a bit of jack-of-all-trades-master-of-none here… Because I’m about to finish uni/college, and the job market is monstrously tight over here, and, well, where would there be better opportunities to learn proper engineering than under rationalists? I’m especially interested in things related to developing “developing countries” (especially those with governments and societies hostile to freethinking) and things related to sustainable energy and infrastructure (all at the same time would be glorious, but one out of three ain’t bad).
By default, this should be your approach.
I am also a junior engineer. Working a normal job and donating what I can to SI.
Took me a long-ass time to find a job, I know that feel. Keep looking.
I think it may be best to post this to the general discussion (especially if you don’t have a local meetup group where you can collaborate with people you know more personally). I don’t have a good answer to your question off the bat, but I think it’s an important question and hope someone here can help you more specifically.
You mean the Open Thread?
The discussion section is probably better.
Does anybody in your group have children? It doesn’t seem to me that what you have in your ritual book would serve them very well. Even ignoring any possible desire to “recruit” the children themselves, that means that adults who have kids will have an incentive to leave the community.
Maybe it’s just that I personally was raised with zero attendance at anything remotely that structured, but it’s hard for me to imagine kids sitting through all those highly abstract stories, many of which rely on lots of background concepts, and being anything but bored stiff (and probably annoyed). Am I wrong?
Even if they could sit through it happily, there’s the question of whether having them chant things they don’t understand respects their agency or promotes their own growth toward reasoned examination of the world and their beliefs about it. Especially when, as somebody else has mentioned, the ritual includes stuff that’s not just “rationalism”. Could there be more to help them understand how to get to the concepts, so that they could have a reasonable claim not to just be repeating “scripture”?
Or am I just worrying about something unreal?
(shrugs) You’re not wrong, but I’m not sure you’re right either.
In my own case, growing up as an Orthodox Jew involved sitting through lots of highly abstract ritual observances that relied on lots of background concepts (and frequently being bored stiff and annoyed). And if a rationalist group is only as successful at retaining the involvement of parents and their kids as Orthodox Judaism is, dayenu. (Which is to say: that would be sufficient.)
More generally, I suspect that it’s perfectly possible to involve kids in something that structured, it just requires giving the kids roles they can engage with in that structure.
A comment I made in the introduction article:
More generally, each community that wants this will need to customize it for their own needs. Daenerys’ event in Ohio didn’t end up having singing or litanies at all, instead being built around custom vows and affirmations.
Having lost parents and grandparents in the last several years, I appreciate your sentiment. But, as much as I would want to live forever, I am not sure that eternal individual life is good for humanity as a whole, at least without some serious mind hacking first. Many other species, like, say, salmon, have a fixed lifespan, so intelligent salmon would probably not worry about individual immortality. It seems to me that associating natural death of an individual with evil is one of those side effects of evolution humans could do without. That said, I agree that suffering and premature death probably has no advantage for the species as a whole and ought to be eliminated, but I cannot decide for sure if fixed lifespan is such a bad idea.
I actually mostly agree with you. Or at least, that the answer is not terribly obvious. I didn’t expound upon it during the ceremony (partly due to time, and partly because one of the most important aspects of the moment was to give a time for anti-deathists to grieve for people they lost, whose death they were unable to process among peers who shared their beliefs.)
But in the written-up version here, I thought it was important that I make my views clear, and included the bit about me not actually being that much of an anti-deathist. I think the current way people die is clearly suboptimal, and once you remove it as an anchor I’m not sure if people should die after 100 years, a thousand years, or longer or at all. But I don’t think it’s as simple an idea as “everybody gets to live forever.”
The obvious answer is “Everyone dies if and when they feel like it. If you want to die after 100 years, by all means; if you feel like living for a thousand years, that’s fine too; totally up to you.”
In any case that seems to me to be much more obvious than “we (for some value of ‘we’) decide, for all of humanity, how long everyone gets to live”.
In other words, I don’t think there’s a fact of the matter about “if people should die after 100 years, a thousand years, or longer or at all”. The question assumes that there’s some single answer that works for everyone. That seems unlikely. And the idea that it’s OK to impose a fixed lifespan on someone who doesn’t want it is abhorrent.
Additionally — this is re: shminux’s comment, but is related to the overall point — “Good for humanity as a whole” and “advantage for the species as a whole” seem like nonsensical concepts in this context. Humanity is just the set of all humans. There’s no such thing as a nebulous “good for humanity” that’s somehow divorced from what’s good for any or every individual human.
Not necessarily true. The question posits the existence of an optimal outcome. It just neglects to mention what, exactly, said outcome would be optimal for. It would probably be necessary to determine what criteria a system that accounts for immortality must meet to satisfy us, before we start coming up with solutions.
A limited distribution of resources somewhat complicates the issue, and even with nanotechnology and fusion power there would still be the problem of organizing a system that isn’t inherently self-destructive.
I think I agree with the spirit of your answer: “We can’t possibly figure out how to do that, and in any case doing so wouldn’t feel right, so we’ll let the people involved sort it out amongst themselves.” But there are a lot of problems that can arise from that. There would probably need to be some sort of system of checks and balances, but then that would probably deteriorate over time, and has the potential to turn the whole thing upside down in itself. I doubt you’ll ever be able to really design a system for all humanity.
To you, perhaps. Well, and me. Your intuitions on the matter are not universal, however. Far from it, as our friends’ comments show.
My main problems (read: ones that don’t rest entirely on feelings of moral sacredness) with such an idea would be the dangerous vulnerability of the system it describes to power grabs, its capacity to threaten my ambitions, and the fact that, if implemented, it would lead to a world that’s all-around boring. (I mean, if you can fix the lifespans then you already know the ending. The person dies. Why not just save yourself the trouble and leave them dead to begin with?)
If resources are limited and population has reached carrying capacity — even if those numbers are many orders of magnitude larger than today — then each living entity would get to have one full measure of participating in the creation of a new living entity, and then enough time after that so that the average age of participating in life-creation equals the average of the ages of birth and death. So with sexual reproduction, you’d get to have two kids, and then when your second kid is as old as you were when your first kid was born, it would be your turn to die. I suspect in that world I would decide to have my second kid eventually, and thus I’d end up dying when my age was somewhere in the 3 digits.
Obviously, that solution is “fair and stable”, not “optimal”. I’m not arguing that that’s how things should work — and I can easily imagine ways to change it that I’d view as improvements — but it’s a nice simple model of how things could be stable.
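The arithmetic of this rule can be sketched in a few lines of Python. (This is just an illustrative toy model; the specific ages `a1` and `a2` are hypothetical, chosen only to show how a three-digit lifespan falls out.)

```python
# Toy model of the "fair and stable" lifespan rule described above.
# You have two children at ages a1 and a2, and die when your second
# child reaches the age you were when your first child was born,
# i.e. at age a2 + a1.

def death_age(a1, a2):
    """Age at death under the rule, given ages at first and second childbirth."""
    return a2 + a1

# Hypothetical ages of first and second childbirth.
a1, a2 = 30, 80
d = death_age(a1, a2)

# Stability check: the mean age of life-creation, (a1 + a2) / 2, equals
# the mean of birth age (0) and death age (a1 + a2), so each generation
# exactly replaces itself on the same timetable.
assert (a1 + a2) / 2 == (0 + d) / 2

print(d)  # a three-digit lifespan, as the comment suggests
```

Delaying the second child pushes the death age out proportionally, which is why “I’d have my second kid eventually” translates into a lifespan well past 100.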
Well, that model may be stable (I haven’t actually thought it through sufficiently to judge, but let’s grant that it is) — but how exactly is it “fair”? I mean, you’re assuming a set of values which is nowhere near universal in humanity, even. I’m really not even sure what your criteria here are for fairness (or, for that matter, optimality).
My problem with what you describe is the same as my problem with what shminux says in some of his comments, and with a sort of comment that people often make in similar discussions about immortality and human lifespan. Someone will describe a set of rules, which, if they were descriptive of how the universe worked, would satisfy some criteria under discussion (e.g. stability), or lack some problem under discussion (e.g. overpopulation).
Ok. But:
Those rules are not, in fact, descriptive of how the universe works (or else we wouldn’t be having this discussion). Do you think they should be?
If so, how do we get from here to there? Are we modifying the physical laws of the universe somehow? Are we putting enforced restrictions in place?
Who enforces these restrictions? Who decides what they are in the first place? Why those people? What if I disagree? (i.e. are you just handwaving away all the sociopolitical issues inherent in attempts to institute a system?)
For instance, you say that “each living entity would get to have” so-and-so in terms of lifespan. What does that mean? Are you suggesting that the DNA of every human be modified to cause spontaneous death at some predetermined age? Aside from the scientific challenge, there are… a few… moral issues here. Perhaps we’ll just kill people at some age?
What I am getting at is that you can’t just specify a set of rules that would describe the ideal system when in reality, getting from our current situation to one where those rules are in place would require a) massive amounts of improbable scientific work and social engineering, and b) rewriting human terminal values. We might not be able to do the former, and I (and, I suspect, most people, at least in this community) would strongly object to the latter.
Note: Not trying to attack your position, just curious.
Fixed by whom, might I ask?
You seem to be implying that designed death is worse. How do you figure?
Superhappy aliens, FAI, United Nations… There are multiple possibilities. One is that you stay healthy for, say, 100 years, then spawn once blissfully and stop existing (the salmon analogy). Humans’ terminal values are adjusted so that they don’t strive for infinite individual lifespan.
I don’t. Suffering is bad, finite individual existence is not necessarily so.
No proposal that includes these words is worth considering. There’s no Schelling point between forcing people to die at some convenient age and be happy and thankful about it, and just painting smiles on everyone’s souls. That’s literally what terminal values are all about; you can only trade off between them, not optimize them away whenever it would seem expedient to!
If it’s a terminal value for most people to suffer and grieve over the loss of individual life—and they want to suffer and grieve, and want to want to—a sensible utilitarian would attempt to change the universe so that the conditions for their suffering no longer occur, instead of messing with this oh-so-inconvenient, silly, evolution-spawned value. Because if we were to mess with it, we’d be messing with the very complexity of human values, period.
I agree with what you’re saying, but just to complicate things a bit: what if humans have two terminal values that directly conflict? Would it be justifiable to modify one to satisfy the other, or would we just have to learn to live with the contradiction? (I honestly don’t know what I think.)
Ah… If you or I knew what to think, we’d be working on CEV right now, and we’d all be much less fucked than we currently are.
A statement like that needs a mathematical proof.
“If” indeed. There is little “evolution-spawned” about it (not that it’s a good argument to begin with, trusting the “blind idiot god”), a large chunk of this is cultural. If you dig a bit deeper into the reasons why people mourn and grieve, you can usually find more sensible terminal values. Why don’t you give it a go.
If human terminal values need to be adjusted for this to be acceptable to them, then it is immoral by definition.
Looks like you and I have different terminal meta-values.
I’m really curious to know what you mean by ‘terminal meta-values’. Would you mind expanding a bit, or pointing me in the direction of a post which deals with these things?
Say, whether it is ever acceptable to adjust someone’s terminal values.
No, I’m perfectly OK with adjusting terminal values in certain circumstances. For example, turning a Paperclipper into an FAI is obviously a good thing.
EDIT: Of course, turning an FAI into a Paperclipper is obviously a bad thing, because instead of having another agent working towards the greater good, we have an agent working towards paperclips, which is likely to get in the way at some point. Also, it’s likely to feel sad when we have to stop it turning people into paperclips, which is a shame.
Unless you own a time machine and come from a future where salmon-people rule the earth, I seriously doubt that. If you’re a neurotypical human, then you terminally value not killing people. Mindraping them into doing it themselves continues to violate this preference, unless all you actually care about is people’s distress when you kill them, in which case remind me never to drink anything you give me.
Typical mind fallacy?
… are you saying I’m foolish to assume that you value human life? Would you, in fact, object to killing someone if they wouldn’t realize? Yes? Congratulations, you’re not a psychopath.
Everyone who voluntarily joins the military is a psychopath?
Tell you what. Instead of typing out the answer to that, I’m going to respond with a question: how do you* think people who join the military justify the fact that they will probably either kill or aid others in killing?
*(I do have an answer in mind, and I will post it, even if your response refutes it.)
I think they have many different justifications depending on the person, ranging from “it’s a necessary evil” to “I need to pay for college and can hopefully avoid getting into battle” to “only the lives of my own countrymen matter”, just like people can have many different justifications for why they’d approve modifying the terminal values of others.
So, despite the downvotes that bought me …
I said “non-psychopaths consider killing a Bad Thing.”
You said “But what about people who join the army?”
I said “What do you think?”
You said “I think they justify it as saving more lives than it kills, or come up with reasons it’s not really killing people”
I think this conversation is over, don’t you?
Do you see my point that there are plenty of ways by which somebody can consider killing as not-so-bad, without needing to be a psychopath?
No. Something can be bad without being worse than the other options, and people can be mistaken about whether an action will kill people. This is quite separate from actually having no term for human life in their utility function.
There’s an important difference between “not bad” and “bad but justifiable under some circumstances”. I don’t think believers in abortion, execution or war believe that killing per se is morally neutral. Each of those three has its justification.
I believe abortion is morally neutral, at least for the first few months and probably more.
But I said “killing per se”.
“Neurotypical”… almost as powerful as True!
Seems like a perfectly functional Schelling point to me. Besides, I needed a disclaimer for the possibility that he’s actually a psychopath or, indeed, an actual salmon-person (those are still technically “human”, I assume.)
Neurotypical, that’s the tyranny of some supposedly existing elusive majority which has always (ever since living in trees) valued, and will always (when colonizing the Canis Major Dwarf Galaxy) value, essentially the same things (such as strawberry ice cream, and not killing people).
If your utility function differs, it is wrong, while theirs is right. (I’d throw in some reference to a divine calibration, but that would be overly sarcastic.)
I may be confused by the sarcasm here. Could you state your objection more clearly? Are you arguing “neurotypical” is not a useful concept? Are you accusing me of somehow discriminating against agents that implement other utility functions? Are you objecting to my assertion that creating an agent with a different utility function is usually instrumentally bad, because it is likely to attempt to implement that utility function to the exclusion of yours?
Yes, here’s your last reply to me on just that topic:
Also:
It is bizarre to me how you believe there is some shared objective morality—“underneath”—that is correct because it is “typical” (hello fallacious appeal to majority), and that outliers that have a different utility function have false values.
Even if there are shared elements (even across e.g. large vague categories such as Chinese values and Western values), such as surmised by CEV_humankind (probably an almost empty set), that does not make anyone’s own morality/value function wrong, it merely makes it incongruent with the current cultural majority views. Hence the “tyranny of some supposedly existing elusive majority”.
Bloody hell, it’s you again. I hadn’t noticed I was talking to the same person I had that argument with. I guess that information does add some context to your comment.
I’m not saying they’re wrong, except when “wrong” is defined with reference to standard human values (which is how I, and many others on LW, commonly use the term.) I am saying their values are not my values, or (probably) your values. That’s not to say they don’t have moral worth or anything, just that giving them (where “them” means salmon people, clippies or garden-variety psychopaths) enough power will result in them optimizing the universe for their own goals, not ours.
Of course, I’m not sure how you judge moral arguments, so maybe I’m assuming some common prior or something I shouldn’t be.
Your comment of just saying “well, this is the norm” does not fit with your previously stated views, see this exchange:
So if the majority of humans values the lives of their close family circle higher than random other human lives—those are the standard human values, the norm—then you still call those evil or biased, because they don’t agree with your notion of what standard human values should be, based on “obviously true” ethical assumptions. *
Do you see the cognitive dissonance? (Also, you’re among the first—if not the only—commenters on LW who I’ve seen using even just “standard human values” as an ought, outside the context of CEV—a different concept—for FAI.)
* It fits well with some divine objective morality, however it does not fit well with some supposed and only descriptive, not prescriptive “standard human values” (not an immutable set in itself, you probably read Harry’s monologue on shifting human values through the ages in the recent HPMOR chapter).
I’m asserting that the values you describe are not, in fact, the standard human values. If it turned out that parents genuinely have different values from other people, then they wouldn’t be biased (down to definitions of “evil”.)
We are both agents with human ethics. When I say we “ought” to do something, I mean by the utility function we both share. If I were a paperclipper, I would need separate terms for my ethics and yours. But then, why would I help you implement values that oppose my own?
It comes down to “I value this human over that other human” being a part of your utility function, f(this.human) > f(that.human). [Syntactical overloading for comedic relief] A bias is something affecting your cognition — how you process information, not what actions you choose based upon that processing. While you can say “your values are biased towards X”, that is using the term differently from the usual LW context.
In particular, I doubt you’ll find more than 1 in a million humans who would not value some close relative’s / friend’s / known person’s life over a randomly picked human life (“It could be anything! It could even be another boat!”).
You have here a major, major part of the utility function of a majority of humans (throughout history! in-group > out-group), yet you persist on calling that an evil bias. Why, because it does not fit with what the “standard human values” should be? What god intended? Or is there no religious element to your position at all? If so, please clarify.
You realize that most humans value eating meat, right? Best pick up that habit, no? ;)
I just realized I never replied to this. I definitely meant to. Must have accidentally closed the tab before clicking “comment”.
No. I believe they are mostly misinformed regarding animal intelligence and capacity for pain, conditions in slaughterhouses and farms etc.
[Edited as per Vaniver’s comment below]
I really don’t think it’s a stretch to say that they value eating meat, even if only as an instrumental means for valuing tastiness and healthiness. Even beyond eating meat, it appears that a significant subset of humans (perhaps most?) enjoy hunting animals, suggesting that could be a value up for consideration.
And even if they do a tradeoff between the value of eating meat and the value of not inflicting suffering, that doesn’t mean they don’t have the value of eating meat. Policy debates should not appear one-sided.
You’re talking about humans alive today? Or all humans who’ve ever lived? I’d be extremely surprised if more than 50% of the former had hunted and enjoyed it. (And, considering that approximately half the humans are female, I would be somewhat surprised about the latter as well.)
So, by “enjoy hunting” I mean more “after going hunting, would enjoy it” than “have gone hunting and enjoyed it.” In particular, I suspect that a non-hunter’s opinion on hunting is probably not as predictive of their post-hunting experience as they would imagine that it would be. It is not clear to me if the percentage of women who would enjoy hunting is smaller than the percentage of men who would not.
Be careful with that kind of argument, for the same is probably true of heroin. (Yes, there are huge differences between hunting and heroin, but still...)
Dammit, I was literally about to remove that claim when you posted this :(
Possible outcome; better than most; boring. I don’t think that’s really something to strive for, but my values are not yours, I guess. Also, I’m assuming we’re just taking whether an outcome is desirable into account, not its probability of actually coming about.
Did you arrive at this from logical extrapolation of your moral intuitions, or is this the root intuition? At this point I’m just curious to see how your moral values differ from mine.
Good question. I’m just looking at some possible worlds where individual eternal life is less optimal than finite life for the purposes of species survival, yet where personal death is not a cause of individual anguish and suffering.
Sorry, read this wrong.
In general: Depends on your terminal values.
In particular: Probably not much of a decision (advanced dementia versus legal death). As you know, in terms of preserving the functional capacity of the original human’s cognition, both lead to the same result, albeit at different speeds. (In addition, even for cryonic purposes, it would be vastly better to conserve a non-degenerated brain.)