Even if we interpret “clamoring in the streets” as a metonym for other forms of mass political action—presumably with the aim of increasing government funding for medical research?
Yes, “clamoring in the streets” is not to be taken too literally here. I mean that it is something people have strong feelings about, something that they push for in whatever way. They see grandma getting sicker and sicker, suffering more and more, and they feel outrage: why have we not solved this yet?
I don’t think the question of strategicness is relevant here. For one thing, humans are not automatically strategic. But beyond that, I believe my point stands because most people are not taking any actions based on a belief that aging and death are solvable and it’s terrible that we’re not going as fast as we could be. I maintain this is evidence that they are not living in a world (in their minds) where this is a real option. Your friend is an extreme outlier, and you are too if your Rust example holds up.
I think the exposition here would be more compelling if you explicitly mentioned the social pressures in both the pro-Vibrams and anti-Vibrams directions: some people will tease you for having “weird” toe-shoes, but some people will think better of you.
It’s true that the social pressures exist in both directions. The point of that statement is merely that social considerations can be weighed within a causal frame, where they can be traded off against other things which are not social. I don’t think an exhaustive enumeration of the different social pressures helps make that point further.
The phrase “doesn’t involve being so weird” makes me wonder if this is meant as deliberate irony? (“Being weird” is a social-reality concept!) You might want to rewrite this paragraph to clarify your intent.
Yes, that paragraph was written from the mock-perspective of someone inhabiting a social reality frame, not my personal outside-analyzing frame as the OP. I apologize if that wasn’t adequately clear from context.
What evidence do you use to distinguish between people who are playing the “talk about life extension” group game, and people who are actually making progress on making life extension happen in the real, physical universe? (I think this is a very hard problem!)
I agree this is a very hard problem and I have no easy answer. My point here was that a person in the social reality frame might not even be able to recognize the existence of people who are working on life extension simply because they actually, really care about life extension. Their whole assessment remains in the social frame (particularly at the S1 level).
(Meta: is this still too combative, or am I OK? Unfortunately, I fear there is only so much I know how to hold back on my natural writing style without at least one of either compromising the information content of what I’m trying to say, or destroying my motivation to write anything at all.)
Perhaps the crux is this: the example (of attitudes towards death) that you seem to be presenting as a contrast between a causal-reality worldview vs. a social-reality worldview, I’m instead interpreting as a contrast between transhumanist social reality vs. “normie” social reality.
(This is probably also why I thought it would be helpful to mention pro-Vibrams social pressure: not to exhaustively enumerate all possible social pressures, but to credibly signal that you’re trying to make an intellectually substantive point, rather than just cheering for the smart/nonconformist/anti-death ingroup at the expense of the dumb/conformist/death-accommodationist outgroup.)
a belief that aging and death are solvable
But whether aging and death are solvable is an empirical question, right? What if they’re not solvable? Then the belief that aging and death are solvable would be incorrect.
I can pretty easily imagine there being an upper bound on humanly-achievable medical technology. Suppose defeating aging would require advanced molecular nanotechnology, but all human civilizations inevitably destroy themselves shortly after reaching that point. (Say, because that same level of nanotech gives you super-fast computers that make it easy to brute-force unaligned AGI, and AI alignment is just too hard.)
and it’s terrible that we’re not going as fast as we could be.
The concept of “terrible” doesn’t exist in causal reality. (How does something being “terrible” pay rent in anticipated experiences?)
I mean that it is something people have strong feelings about, something that they push for in whatever way. They see grandma getting sicker and sicker, suffering more and more, and they feel outrage
I think people do this. In the OP, you linked to the immortal Scott Alexander’s “Who By Very Slow Decay”, which contains this passage—
In the cafeteria at lunch, [doctors] will—despite medical confidentiality laws that totally prohibit this—compare stories of the most ridiculous families. “I have a blind 90 year old patient with stage 4 lung cancer with brain mets and no kidney function, and the family is demanding I enroll her in a clinical trial from Sri Lanka.” “Oh, that’s nothing. I have a patient who can’t walk or speak who’s breathing from a ventilator and has anoxic brain injury, and the family is insisting I try to get him a liver transplant.”
What is harassing doctors to demand a liver transplant, if it’s not feeling outrage and taking action?
why have we not solved this yet?
In social reality, this is a rhetorical question used to coordinate punishment of those who can be blamed for not solving it yet.
In causal reality, it’s a question with a very straightforward literal answer: the human organism is, in fact, subject to the biological process of senescence, and human civilization has not, in fact, developed the incredibly advanced technology that would be needed to circumvent this.
The cases Scott talks about are individuals clamoring for symbolic action in social reality, in aid of individuals that they want to signal they care about. It’s quite Hansonian, because the whole point is that these people are already dead and none of these interventions do anything but take away resources from other patients. They don’t ask ‘what would cause people I love to die less often’ at all, which my model says is because that question doesn’t even parse to them.
Perhaps the crux is this: the example (of attitudes towards death) that you seem to be presenting as a contrast between a causal-reality worldview vs. a social-reality worldview, I’m instead interpreting as a contrast between transhumanist social reality vs. “normie” social reality.
Fwiw, I found this paragraph quite helpful. I initially bounced off your original comment because I couldn’t tell what the point was, and would have had an easier time following it if it had opened with something more like this paragraph.
(Meta: Yup, that’s much better. I appreciate the effort. To share some perspective from my end, I think this has been my most controversial post to date. I think I understand now why many people say posting can be very stressful. I know of one author who removed all their content from LW after finding the comments on their posts too stressful. So there’s probably a trade-off [I also empathize with the desire to express emphatic opinions as you feel them], where writing more directly can end up dissuading many people from posting or commenting at all.)
Perhaps the crux is this: the example (of attitudes towards death) that you seem to be presenting as a contrast between a causal-reality worldview vs. a social-reality worldview, I’m instead interpreting as a contrast between transhumanist social reality vs. “normie” social reality.
I think that’s a reasonable point. My counter is that “transhumanist social reality” is more connected to the causal world than mainstream social reality. Transhumanists, even if they are biased and over-optimistic, etc., at least invoke arguments and evidence from the general physical world: telomeres, nanotechnology, the fact that turtles live a really long time, experiments on worms, etc. Maybe they repeat each other’s socially sanctioned arguments, but those arguments invoke causal reality.
In contrast, the mainstream social reality appears to be very anchored on the status quo and history to date. You might be able to easily imagine that there’s an upper bound on humanly-achievable medical technology, but I’d wager that’s not the thought process most people go through when (assuming they ever even consider the possibility) they judge whether they think life-extension is possible or not. To quote the Chivers passage again:
“The first thing that pops up, obviously, is I vaguely assume my children will die the way we all do. My grandfather died recently; my parents are in their sixties; I’m almost 37 now. You see the paths of a human’s life each time; all lives follow roughly the same path. They have different toys—iPhones instead of colour TVs instead of whatever—but the fundamental shape of a human’s life is roughly the same.
Note that he’s not making an argument from physics or biology or technology at all. This argument is from comparison to other people. “My children will die the way we all do,” “all lives follow roughly the same path.” One might claim that isn’t unreasonable evidence. The past is a good prior, it’s a good outside view. But the past also shows tremendous advances in technology and medical science—including dramatic increases in lifespan. My claim is that these things aren’t considered in the ontology most people think within, one where how other people do things is dominant.
If I ask my parents, if I stop and ask people on the street, I don’t expect them to say they thought about radical life extension and dismissed it because of arguments about what is technologically realistic. I don’t expect them to say they’re not doing anything towards it (despite it seeming possible) because they see no realistic path for them to help. I expect them to not have thought about it, I expect them to have anchored on what human life has been like to date, or I expect them to have thought about it just long enough to note that it isn’t a commonly-held belief and conclude therefore it’s just a thing another group believes.
Even if the contrast is “transhumanist social reality”, I ask how did that social reality come to be and how did people join it? I’m pretty sure most transhumanists weren’t born to transhumanist families, educated in transhumanist schools, or surrounded by transhumanist friends. Something at some point prompted them to join this new social group—and I’d wager that in many cases it’s because, on their own, they reasoned that how humans are now isn’t how they have to be. Rightly or wrongly, they invoke a belief about what broader reality allows beyond what is commonly held opinion or practice to date. Maybe that’s a social reality too, but it’s a really different one.
The reason why the disease and death example is confusing to me is partly because I expect people to be highly emotional and unstrategic—willing to invest a great deal for only a small chance. People agonize over “maybe I could have done something” often enough. They demand doctors do things “so long as there’s a chance.” One can doubt that radical life extension is possible, but I don’t think one can be reasonably certain that it isn’t. I expect that if people thought there was any non-trivial chance that we didn’t need to let millions of people decay and die each year, they would be upset about it (especially given first-hand experience), and do something. As it is, I think most people take death and decay for granted. That’s just how it is. That’s what people do. That’s my confusion. How can you so blithely ignore the progress of the last few hundred years? Or the technological feats we continue to pull off? You think it’s reasonable for there to be giant flying metal cans? For us to split the atom and go to the moon? To edit genes and have artificial hearts? To have double historical lifespans already? Yet to never wonder whether life could be better still? To never be upset that maybe the universe doesn’t require it to be this way, that instead we (humanity) just haven’t got our shit together, and that’s a terrible tragedy?
This perspective is natural to me. Obvious. The question I am trying to explain is: why am I different? I think I am the weird one (i.e., the unusual one). But what am I doing differently? How is my reality (social or otherwise) different? And one of the reasonable answers is that I invoke a different type of reasoning to infer what is possible. My evidence is that I don’t encounter people responding with like-kind arguments (or even having considered the question) to questions of eliminating decay and death.
“Terrible” is a moral judgment. The anticipated experience is that when I point my “moral evaluator unit” at a morally terrible thing, it outputs “terrible.”
Even if the contrast is “transhumanist social reality”, I ask how did that social reality come to be and how did people join it? I’m pretty sure most transhumanists weren’t born to transhumanist families, educated in transhumanist schools, or surrounded by transhumanist friends. Something at some point prompted them to join this new social group
This isn’t necessarily a point in transhumanism’s favor! At least vertically-transmitted memeplexes (spread from parents to children, like established religions) face selective pressures tying the fitness of the meme to the fitness of the host. (Where evolutionary fitness isn’t necessarily good from a humane perspective, but there are at least bounds on how bad it can be.) Horizontally-transmitted memeplexes (like cults or mass political movements) don’t face this constraint and can optimize for raw marketing appeal independent of long-term consequences.
“Terrible” is a moral judgment. The anticipated experience is that when I point my “moral evaluator unit” at a morally terrible thing, it outputs “terrible.”
Isn’t this kind of circular? Compare: “A Vice President is anyone whose job title is vice-president. That’s a falsifiable prediction because it constrains your anticipations of what you’ll see on their business card.” It’s true, but one is left with the sense that some important part of the explanation is being left out. What is the moral evaluator unit for?
I think moral judgements are usually understood to have a social function—if I see someone stealing forty cakes and say that that’s terrible, there’s an implied call-to-action to punish the thief in accordance with the laws of our tribe. It seems weird to expect this as an alternative to social reality.
They see grandma getting sicker and sicker, suffering more and more, and they feel outrage: why have we not solved this yet?
You expect them to get angry (at whom in particular?) because grandma keeps getting older? For tens of thousands of years of human history, the only alternative to this has been substantially worse for grandma. Unless she wants to die and you’re talking about euthanasia, but no additional medical research is needed for that. There is no precedent or direct empirical evidence that anything else is possible.
Maybe people are wrong for ignoring speculative arguments that anti-aging research is possible, but that’s a terrible example of people being bound by social reality.
1. True, for tens of thousands of years of human history, it has been that way. But “there is no precedent or direct empirical evidence that anything else is possible” emphatically does not cut it. Within only a few hundred years the world has been transformed: we have magical god-devices that connect us across the world, we have artificial hearts, we can clean someone’s blood by pumping it out and then back in, we operate on brains, we put man on the moon. In recent years you’ve got the rise of AI and gene editing. Lifespans are already double what they were for most of history. What has held for tens of thousands of years is no longer so. It is not that hard to see that humankind’s mastery over reality is only continuing to grow. Precedent? Maybe not. But reason for hope? Yes. Actually a pretty reasonable expectation that our medical science is not maxed out? Definitely.
This isn’t speculative. The scientific and technological progress should be apparent to anyone who’s lived through more than a few decades of recent history.
2. Anger doesn’t always have to have a target. But if you need one then pick society, pick science, pick research, pick doctors, pick your neighbours.
3. Watching your loved ones decay and die is anguish. If people are going to yell at the doctors that they should do something, as if that something must be possible (though some would argue this is fake/performance), then let them also yell at the state of the world: that this unnecessary circumstance has come to be. Yell at the universe.
4. The alternative explanation to saying that people see the world overwhelmingly via social reality is that people simply have terrible causal models. Perhaps to me the scientific/technological progress of the last few hundred years is obviously, obviously reason to believe far more is possible (and better today than in fifty years), but not to others. Perhaps I’m wrong about it, though I don’t think I am.
And you needn’t be absolutely certain that curing death and aging is possible to demand we try. A chance should be enough. If you demand that doctors do things which only might prolong grandma’s life, then why not also demand that we have better science, since there’s a chance of that working too?
Perhaps people really didn’t get enough of an education to appreciate science and technology (that we manipulate light itself to communicate near instantaneously sparks no wonder and awe, for example). So then I’d say they are overly anchored on the status quo. It is not so much being bound by social reality, but by how things are now, without extrapolation even fifty years forward or back—even when they themselves have lived through so much change.
5. I pick the example of disease and death because it is so personal, so immediate, so painful for many. It doesn’t require that we posit any altruistic motivation, and it’s a situation where I expect to see a lot of powerful emotion revealing how people relate to reality (rather than them just taking whichever options they think are immediately available and strategic).
I don’t think the disagreement here is about the feasibility of life extension. (I agree that it looks feasible.) I think the point that Benquo and I have been separately trying to make is that admonishing people to be angry independently of their anger having some specific causal effect on a specific target, doesn’t make sense in the context of trying to explain the “causal reality vs. social reality” frame. “People should be angrier about aging” might be a good thesis for a blog post, but I think it would work better as a different post.
And you needn’t be absolutely certain that curing death and aging is possible to demand we try. A chance should be enough.
The magnitude of the chance matters! Have you read the Overly Convenient Excuses Sequence? I think Yudkowsky explained this well in the post “But There’s Still a Chance, Right?”.
I think the point that Benquo and I have been separately trying to make is that admonishing people to be angry independently of their anger having some specific causal effect on a specific target, doesn’t make sense in the context of trying to explain the “causal reality vs. social reality” frame.
I wonder if this is a point where I am being misunderstood. Based on this and a few in-person conversations, people think I’m taking a normative stance here. I’m not. Not primarily. I am trying to understand a thing I am confused about and to explain my observations. I observe that my models lead me to expect that people would be doing X, but I do not observe that—so what am I missing?
For the record, for all those reading:
This post isn’t trying to tell anyone to do anything, and I’m not actively stating a judgment. I’m not saying people should be clamoring in the streets. There is no active admonishing directed at anyone here. There is no thesis. I haven’t thought enough about what people should be doing—I haven’t thought through what would actually be strategic for them. So I don’t know. Not with any confidence, not enough to tell them what to do.
Given that this is about my confusion over what I expected people to do, and that I don’t expect people to be strategic, the question of whether or not doing X would be strategic isn’t really relevant. My model doesn’t predict that people are strategic, so the fact that doing X might not be the strategic action doesn’t make me less confused.
(A valid counter to my confusion is saying that people are in fact strategic, but I’m rather incredulous. I’m not sure if you or Benquo were saying that?)
I am a bit confused (I might not be reading you carefully enough), but it feels like you’re trying to explain people’s behavior in normative rather than descriptive terms (in this comment and earlier ones).
It’s precisely because I expect most people to think “but there’s still a chance, right?” that I would expect the possibility of life extension to motivate them to action—more so than if they cared about the magnitude. (Also, caring about magnitude is a causal-reality thing, I would say, as the notion of probability seemingly is.)
Your argument doesn’t make sense unless whatever “clamoring in the streets” stands in for metaphorically is an available action to the people you’re referring to. It seems to me like the vast majority of people are neither in an epistemic position where they can reasonably think that they know that there’s a good chance of curing aging, nor do they have any idea how to go about causing the relevant research to happen.
They do know how to increase the salience of “boo death,” but so far in the best case that seems to result in pyramids, which don’t work and never could, and even then only for the richest.
Note that even for those of us who strive for legibility of action (“live in the causal world”), it’s not clear that aging and death CAN be solved in humans at all, and seems downright unlikely that any strategy or action can solve it fast enough to avoid the pain and fear of the death of my loved ones and myself.
Whether a loved one dies at 65, 85, 105, 205, or 1005, it’s going to suck when it happens. No amount of clamoring in the streets (or directed research into biology) is going to avoid that pain. Some amount of effort and sacrifice toward life extension _CAN_ improve average and top-percentile lifespans, and that’s great if it applies to the people I care most about. And much research and behavior change is useful in improving the quality of the limited years of many people. Note that “quality” includes other people’s acceptance and support, so it mixes social reality into the definition.
It remains really unclear to me whether I should prefer that strangers live longer or that there are more strangers born to replace the dead ones. My intuition and initial preference is that fewer/longer is better than more/shorter lives, but I don’t have much rational justification for that, and with my current evidence for stagnation of beliefs and reduction in interest as people age, I suspect I may actually prefer more/shorter. I’m not sure how much of more/longer is possible as long as we’re limited to the current earth ecosystem.
Oops, went too far on the object level, sorry—my point is that there are many reasons someone might not spend much effort on eradicating aging, and “they live in social reality and don’t consider causal reality” is a very weak strawman for their choices.
What about descendants of you/your loved ones?
They get progressively more theoretical as distance increases. It seems I care about my n-degrees-removed cousin (in the present or future) who I haven’t met and know no specifics about, about as much as any n-degrees-connected stranger. Note that I have no theory or considered belief that I _SHOULD_ care about some strangers or distant relatives more than others; this is pure introspection on what I seem to actually feel.