It feels to me like the general pro-cryo advocacy here would be a bit of a double standard, at least when compared to general memes of effective altruism, shutting up and multiplying, and saving the world. If I value my life equally to the lives of others, it seems pretty obvious that there’s no way the money spent on cryonics would be a better investment than spending it on general do-gooding.
Of course, this is not a new argument, and there are a few standard responses to it. The first one is that I don’t actually value my life equally to everyone else’s, and that it’s inconsistent to appeal to that when I don’t appeal to it in my life in general. And it’s certainly true that I do actually value my own life more than I value the life of a random stranger, but I do that because I’m human and can’t avoid it, not because my values would endorse that as a maximally broad rule. If I get a chance to actually act in accordance with my preferred values and behave more altruistically than normal, I’ll take it.
The other standard argument is that cryonics doesn’t need to come out of my world-saving budget, it can come out of my leisure budget. Which is also true, but it requires that I’m interested enough in cryonics that I get enough fuzzy points from buying cryonics to make up whatever I lose in exchange. And it feels like once you take the leisure budget route, you’re implicitly admitting that this is about purchasing fuzzies, not utilons, which makes it a little odd to appeal to all those elaborate calculations which are often made with a strong tone of moral obligation. If one is going to be a utilitarian and use the strong tone of moral obligation, one doesn’t get to use it to make the argument that one should invest a lot of money on saving just a single person, and with highly uncertain odds at that.
By going with the leisure budget argument, one is essentially admitting that cryonics isn’t about altruism, it’s about yourself. And of course, there is nothing wrong with that, since none of us is a 100% complete altruist who cares nothing about themselves, nor should we even try to idealize that kind of a person. And I’m not saying that there’s anything wrong with signing up for cryonics—everyone gets to use their fuzzies budget the way they prefer, and if cryonics gives you the most fuzzies, cool. But if one doesn’t get major fuzzies out of cryo, then not signing up ought to be considered just as reasonable.
I’ve had thoughts along similar lines. But it seems like there’s a “be consistent about your selfishness” principle at work here. In particular, if...
...you are generally willing to spend $X / month for something that has a significant chance of bringing you a very large benefit, like saving your life...
...where $X / month is the cost of being signed up for cryonics (organization membership + life insurance)...
...and you think cryonics has a significant chance of working...
It seems kind of inconsistent to not be signed up for cryonics.
(Caveat: not sure I can make consistent sense of my preferences involving far-future versions of “me”.)
Consistency is a good thing, but it can be outweighed by other considerations. If my choices are between consistently giving the answer ‘2 + 2 = 5’ on a test or sometimes giving ‘2 + 2 = 5’ and other times ‘2 + 2 = 4’, the latter is probably preferable. Kaj’s argument is that if your core goal is EA, then spending hundreds of thousands of dollars on cryonics or heart surgery is the normatively wrong answer. Getting the wrong answer more often is worse than getting it less often, even when the price is a bit of inconsistency or doing-the-right-thing-for-the-wrong-reasons. When large numbers of lives are at stake, feeling satisfied with how cohesive your personal narrative or code of conduct is is mostly only important to the extent it serves the EA goal.
If you think saving non-human animals is the most important thing you could be doing, then it may be that you should become a vegan. But it’s certainly not the case that if you find it too difficult to become a vegan, you should therefore stop trying to promote animal rights. Your original goal should still matter (if it ever mattered in the first place) regardless of how awkward it is for you to explain and justify your behavioral inconsistency to your peers.
While I don’t think that there’s anything wrong with preferring to be consistent about one’s selfishness, I think it’s just that: a preference.
The common argument seems to be that you should be consistent about your preferences because that way you’ll maximize your expected utility. But that’s tautological: expected utility maximization only makes sense if you have preferences that obey the von Neumann-Morgenstern axioms, and you furthermore have a meta-preference for maximizing the satisfaction of your preferences in the sense defined by the math of the axioms. (I’ve written a partial post about this, which I can try to finish if people are interested.)
For some cases, I do have such meta-preferences: I am interested in the maximization of my altruistic preferences. But I’m not that interested in the maximization of my other preferences. Another way of saying this would be that it is the altruistic faction in my brain which controls the verbal/explicit long-term planning and tends to have goals that would ordinarily be termed “preferences”, while the egoist faction is more motivated by just doing whatever feels good at the moment and isn’t that interested in the long-term consequences.
Another way of putting this: If you divide the things you do between “selfish” and “altruistic” things, then it seems to make sense to sign up for cryonics as an efficient part of the “selfish” component. But this division does not carve at the joints, and it is more realistic to the way the brain works to divide the things you do between “Near mode decisions” and “Far mode decisions”. Then effective altruism wins over cryonics under Far considerations, and neither is on the radar under Near ones.
A huge number of people save money for a retirement that won’t start for over a decade. For them, both retirement planning and cryonics fall under the selfish, far mode.
That is true. On the other hand, saving for retirement is a common or even default thing to do in our society. If it wasn’t, then I suspect many of those who currently do it wouldn’t do it for similar reasons to those why they don’t sign up for cryonics.
I suspect most people’s reasons for not signing up for cryonics amount to “I don’t think it has a big enough chance of working and paying money for a small chance of working amounts to Pascal’s Mugging.” I don’t see how that would apply to retirement—would people in such a society seriously think they have only a very small chance of surviving until retirement age?
(Disclaimer: I absolutely promise that I am not evil.)
The first one is that I don’t actually value my life equally to everyone else’s, and that it’s inconsistent to appeal to that when I don’t appeal to it in my life in general.
Question: why the hell not? My brain processed this kind of question for the first time around fourth grade, when wanting special privileges to go on a field trip with the other kids despite having gotten in trouble. The answer I came up with then is the one I still use now: “why me? Because of Kant’s Categorical Imperative” (that is, I didn’t want to live in a world where nobody went on the field trip, therefore I should get to go on it—though this wasn’t exactly clear thinking regarding the problem I really had at the time!). I would not want to live in a world where everyone kept their own and everyone else’s lifestyle to an absolute minimum in order to act with maximal altruism. Quite to the contrary: I want everyone to have as awesome a life as it is physically feasible for them to have!
I also do give to charity, do pay my taxes, and do support state-run social-welfare programs. So I’m not advocating total selfishness. I’m just proposing a heuristic: before advocating a certain level of altruism, check whether you’re ok with that level of altruism becoming a Categorical Imperative, such that the Altruism Fairy brainwashes everyone into that level precisely.
In which case, yes, one should value one’s own life over charity levels. After all, it’s exactly what the charity recipients will do!
I would not want to live in a world where everyone kept their own and everyone else’s lifestyle to an absolute minimum in order to act with maximal altruism. Quite to the contrary: I want everyone to have as awesome a life as it is physically feasible for them to have!
I think that the argument you’re going for here (though I’m not entirely sure, so do correct me if I’m misinterpreting you) is “if everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable, and thus a dedication to full altruism that makes you miserable is counterproductive to being altruistic”.
And I agree! I think every altruist should take care of themselves first—for various reasons, including the one you mentioned, and also the fact that miserable people aren’t usually very effective at helping others, and because you can inspire more people to become altruistic if they see that it’s possible to have an awesome time while being an altruist.
But of course, “I should invest in myself because having an awesome life lets me help others more effectively” is still completely compatible with the claim “I shouldn’t place more intrinsic value on others than on myself”. It just means you’re not being short-sighted about it.
I think that the argument you’re going for here (though I’m not entirely sure, so do correct me if I’m misinterpreting you) is “if everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable, and thus a dedication to full altruism that makes you miserable is counterproductive to being altruistic”.
More like, “If everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable, therefore total altruism is an incoherent value insofar as you expect anyone (including yourself) to ever actually follow it to its logical conclusion, therefore you shouldn’t follow it in the first place.”
Or, put simply, “Your supposed all-altruism is self-contradictory in the limit.” Hence my having to put a disclaimer saying I’m not evil, since that’s one of the most evil-villain-y statements I’ve ever made.
Of course, there are complications. For one thing, most people don’t have the self-destructive messiah complex necessary for total altruism, so you can’t apply first-level superrationality (ie: the Categorical Imperative) as including everyone. What I do endorse doing is acting with a high-enough level of altruism to make up for the people who don’t act with any altruism while also engaging in some delta of actual non-superrational altruism.
How to figure out what level of altruistic action that implies, I have no idea. But I think it’s better to be honest about the logically necessary level of selfishness than to pretend you’re being totally altruistic but rationalize reasons to take care of yourself anyway.
“If everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable, therefore total altruism is an incoherent value insofar as you expect anyone (including yourself) to ever actually follow it to its logical conclusion, therefore you shouldn’t follow it in the first place.”
Sorry, I don’t follow. If the logical result of accepting full misery to oneself would be everyone being miserable, why wouldn’t the altruists just reason this out and not accept full misery to themselves? “Valuing everyone the same as yourself” doesn’t mean you’d have to let others treat you any way they like, it just means you’d in principle be ready for it, if it was necessary.
(I think we’re just debating semantics rather than disagreeing now, do you agree?)
I think we have slightly different values, but are coming to identical practical conclusions, so we’re agreeing violently.
EDIT: Besides, I totally get warm fuzzies from being nice to people, so it’s not like I don’t have a “selfish” motivation towards a higher level of altruism, anyway. SWEAR I’M NOT EVIL.
Serious answer: Even if I don’t endorse it, I do feel a pang of guilt/envy/low-status at being less than 100% a self-impoverishing Effective Altruist, which has been coming out as an urge to declare myself not-evil, even by comparison.
Okay, in that case you should stop feeling those negative emotions right now. :) Nobody here is a 100% self-impoverishing EA, and we ended up agreeing that it wouldn’t even be a useful goal to have, so go indulge yourself in something not-at-all-useful-nor-altruistic and do feel good about it. :)
“if everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable,”
How confident of this are we?
I mean, there are many tasks which can lead to my happiness. If I perform a large subset of those tasks for my own benefit, they lead to a certain happiness-level for me… call that H1. If I perform a small subset of those tasks for everyone’s benefit, they lead to a different happiness-level, H2, for everyone including me. H2 is, of course, much lower than H1… in fact, H2 is indistinguishable from zero, really, unless I’m some kind of superstar. (I’m not aggregating across people, here, I’m just measuring how happy I am personally.)
So far, so good.
But if everyone else is also performing a small subset of those tasks for everyone’s benefit, then my happiness is N*H2. H2 is negligible, but N is large. Is (N*H2) > H1?
I really have no idea. On the face of it, it seems implausible. On the other hand, comparative advantage is a powerful force. We’ve discovered that when it comes to producing goods and services, for example, having one person performing a single task for everyone does much better than having everyone do everything for themselves.
Perhaps the same is true for producing happiness?
Which is not necessarily an argument for altruism in the real world, but in this hypothetical world where everyone acts with maximal altruism, maybe the end result is everyone is having a much more awesome life… they’re simply having it thanks to the efforts of a huge community, rather than entirely due to their own efforts.
Then again, that sounds like a pretty good description of the real world I live in, also.
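To make the N*H2 vs. H1 comparison above concrete, here is a toy calculation in Python. Every number, including the “specialization multiplier”, is made up purely for illustration; the only point is that the answer hinges on whether specialization gains outweigh the dilution across N people.

```python
# Toy numbers only -- a sketch of the N*H2 vs. H1 comparison above,
# not a claim about real happiness economics.

N = 1000                 # hypothetical number of people
hours = 100              # effort each person has to spend
solo_rate = 1.0          # assumed happiness per hour when everyone works only for themselves
specialized_rate = 3.0   # assumed boost from specialization / comparative advantage

# Selfish world: I spend all my effort on myself.
H1 = hours * solo_rate

# Altruist world: everyone spreads their (more productive, specialized) effort
# over all N people, so each person contributes only a sliver to me...
H2 = (hours * specialized_rate) / N

# ...but N people are each contributing that sliver.
my_happiness_in_altruist_world = N * H2

print(H1, my_happiness_in_altruist_world)   # 100.0 vs. 300.0 with these made-up numbers
```

With these placeholder numbers N*H2 beats H1 exactly to the degree that the specialization multiplier exceeds 1; with a multiplier below 1, the selfish world wins.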
It feels to me like the general pro-cryo advocacy here would be a bit of a double standard, at least when compared to general memes of effective altruism, shutting up and multiplying, and saving the world.
I think this is why it feels squicky trying to assign a monetary value to my life; part of me thinks it’s selfish to assign any more value to my life than GiveWell’s stated cost to save a stranger’s life ($1700-ish??) But I know I value it more than that. I wouldn’t risk my life for a paycheck.
I bike, which might be worse but also might be better; depends how much the added lifespan from physical fitness trades off against the risk of an accident. And the risk is very likely less than 1/1000 given the years that I’ve been biking accident-free, so there’s a multiplication there.
I bike, which might be worse but also might be better; depends how much the added lifespan from physical fitness trades off against the risk of an accident.
I rather suspect it depends primarily on where you bike. Biking through streets of Manhattan has different risk than biking on rural Wyoming roads.
I seem to remember the answer being that cycling is more dangerous per mile than driving, but that the increase in physical fitness more than compensates in all-cause mortality terms. The first paper I found seems to point to the same conclusion.
I don’t know how that would be adjusted in someone that already has fitness habits. It probably also depends on how well developed the cycling infrastructure in your town is, but I’ve never seen any actual data on that either.
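For what it’s worth, here’s a back-of-envelope sketch of that trade-off in Python. All of the figures below are placeholders made up to show the shape of the calculation, not real accident or fitness statistics:

```python
# Toy comparison of cycling's accident risk vs. its fitness benefit.
# Every figure here is an assumed placeholder, not real data.

miles_per_year = 2000
fatality_risk_per_mile = 1e-7          # assumed chance of a fatal accident per mile cycled
years_lost_if_killed = 40              # assumed remaining life expectancy

expected_life_years_lost = miles_per_year * fatality_risk_per_mile * years_lost_if_killed

life_years_gained_from_fitness = 0.05  # assumed gain per year of regular cycling

print(expected_life_years_lost)        # 0.008 with these numbers
print(life_years_gained_from_fitness)  # 0.05: the fitness term dominates here
```

With these particular inputs the fitness term wins by a wide margin, which matches the direction of the conclusion described above; plugging in numbers for your own location and infrastructure could change the outcome.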
In my experience bicycling is much safer. I have been cycling more or less every day since I was at least 8, and have never been in a life-threatening accident. However, while traveling by car, I have been in 2 or 3 potentially life-threatening crashes. But this will be very dependent on location, culture, and personal variables.
If you got a lethal disease with a very expensive treatment, and you could afford it, would you refuse the treatment? What would the threshold price be? Does this idea feel as squicky as spending on cryonics?
Depends: has the treatment been proven to work before?
(Yes, I’ve heard the probability calculations. I don’t make medical decisions based on plausibility figures when it has simply never been seen to work before, even in animal models.)
Part of shutting up and multiplying is multiplying through the probability of a payoff with the value of the payoff, and then treating it as a guaranteed gain of that much utility. This is a basic property of rational utility functions.
(I think. People who know what they’re talking about, feel free to correct me)
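(For what it’s worth, the multiplication being described looks something like the sketch below. Every number in it is a placeholder made up to show the arithmetic, not an estimate anyone here has endorsed.)

```python
# Minimal expected-value sketch: multiply the probability of the payoff by its value,
# then compare against the cost. All numbers are hypothetical placeholders.

p_cryonics_works = 0.05          # assumed probability of eventual revival
value_if_it_works = 1_000_000    # assumed value of the payoff, in arbitrary utility units
cost = 50_000                    # assumed lifetime cost (membership + insurance), same units

expected_value = p_cryonics_works * value_if_it_works - cost
print(expected_value)   # 0.0 here: the sign of the answer depends entirely on the inputs
```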
You are correct regarding expected-utility calculations, but I make an epistemic separation between plausibilities and probabilities. Plausible means something could happen without contradicting the other things I know about reality. Probable means there is actually evidence something will happen. Expected value deals in probabilities, not plausibilities.
Now, given that cryonics has not been seen to work on, say, rats, I don’t see why I should expect it to already be working on humans. I am willing to reevaluate based on any evidence someone can present to me.
Of course, then there’s the question of what happens on the other side, so to speak, of who is restoring your preserved self and what they’re doing with you. Generally, every answer I’ve heard to that question made my skin crawl.
I bet you would. Lots of jobs have components (such as extra stress, less physical activity, or living in a dangerous or dirty city) that reduce life expectancy. Unless you pick the job which maximizes your life span, you would effectively be risking your life for a paycheck. Tradeoffs are impossible to escape, even if you don’t explicitly think about them.
In context, it seems uncharitable to read “risk my life” to include any risk small enough that taking it would still be consistent with valuing one’s own life far above $1700.
Remember, your life has instrumental value others don’t; if you risk your life for a paycheck, you’re risking all future paychecks as well as your own life-value. The same applies to stressing yourself out obsessively working multiple jobs, robbing banks, selling your redundant organs … even simply attempting to spend all your money on charity and the cheapest of foods tends to be a fairly bad suggestion for the average human (although if you think you can pull it off, great!)
The other standard argument is that cryonics doesn’t need to come out of my world-saving budget, it can come out of my leisure budget. Which is also true, but it requires that I’m interested enough in cryonics that I get enough fuzzy points from buying cryonics to make up whatever I lose in exchange. And it feels like once you take the leisure budget route, you’re implicitly admitting that this is about purchasing fuzzies, not utilons, which makes it a little odd to appeal to all those elaborate calculations which are often made with a strong tone of moral obligation. If one is going to be a utilitarian and use the strong tone of moral obligation, one doesn’t get to use it to make the argument that one should invest a lot of money on saving just a single person, and with highly uncertain odds at that.
I imagine that a lot of people on Less Wrong get off on having someone tell them “with a strong tone of moral obligation” that death can be defeated and that they simply must invest their money in securing their own immortality. Even if it isn’t a valid moral argument, per se, phrasing it as one makes cryonics buyers feel better about their choice and improves the number of warm fuzzies they get from the thought that some day they’ll wake up in the future, alive and healthy with everyone congratulating them on being so very brave and clever and daring to escape death like that.
Even if it isn’t a valid moral argument, per se, phrasing it as one makes cryonics buyers feel better about their choice and improves the number of warm fuzzies they get from the thought that some day they’ll wake up in the future, alive and healthy with everyone congratulating them on being so very brave and clever and daring to escape death like that.
Just asking, were you trying to make that sound awful and smug? Because that honestly sounds like a future I don’t want to wake up in.
I want to wake up in the future where people have genuine compassion for the past, and are happy to welcome the “formerly dead” to a grand new life, hopefully even including their friends and loved ones who also made it successfully to “the Future”. If the post-cryonic psychological counsellors of the future woke me up with, “Congratulations, you made the right business decision!”, then I would infer that things had gone horribly wrong.
Lost in the wilderness, I think we should go North; you, South. If I find help, but learn that you died, my first thought will not be “neener neener told you so”.
Is it possible cryonic wakers might be treated very poorly? Perhaps stigmatized?
I’m very ignorant of what all is involved in either “end” of cryonics, but what if, say, the cost of resurrecting the frozen person is prohibitively high and future people lobby to stop their waking up? And even the ones who do wake up are treated like pariahs?
It might play out like the immigration situation in the US: A nation, founded by immigrants, that is now composed of a big chunk of citizens who hate immigrants.
I can already hear the arguments now…
“They won’t know if we don’t wake them up. Besides every one we wake costs us X resources which damages Y lives by Z%.”
A nation, founded by immigrants, that is now composed of a big chunk of citizens who hate immigrants.
How is that any different from saying “a nation, founded by slaveowners, that is now composed of a big chunk of citizens who hate slaveowners”? Certainly the fact that your ancestors benefited from being slaveowners is no reason why you should support slaveowners now.
Just asking, were you trying to make that sound awful and smug?
Yep.
While genuine compassion is probably the ideal emotion for a post-cryonic counselor to actually show, it’s the anticipation of their currently ridiculed beliefs being validated, with a side order of justified smugness that gets people going in the here and now. There’s nothing wrong with that: “Everyone who said I was stupid is wrong and gets forced to admit it.” is probably one of the top ten most common fantasies and there’s nothing wrong with spending your leisure budget on indulging a fantasy. Especially if it has real world benefits too.
it’s the anticipation of their currently ridiculed beliefs being validated, with a side order of justified smugness that gets people going in the here and now.
That’s… actually kinda sad, and I think I’m going to go feed my brain some warm fuzzies to counter it.
Trying to live forever out of spite instead of living well in the here and now that’s available? Silly humans.
Don’t worry, poiuyt is making all of this up. I don’t personally know of anyone to whom this imaginary scenario applies. The most common sentiment about cryonics is “God dammit I have to stop procrastinating”, hence the enjoinders are welcome; as for their origin point, well, have you read HPMOR up to Ch. 96?
I feel that I am being misunderstood: I do not suggest that people sign up for cryonics out of spite. I imagine that almost everyone signed up for cryonics does so because they actually believe it will work. That is as it should be.
I am only pointing out that being told that I am stupid for signing up for cryonics is disheartening. Even if it is not a rational argument against cryonics, the disapproval of others still affects me. I know this because my friends and family make it a point to regularly inform me of the fact that cryonics is “a cult”, that I am being “scammed out of my money” by Alcor and that even if it did work, I am “evil and wrong” for wanting it. Being told those things fills me with doubts and saps my willpower. Hearing someone on the pro-cryonics side of things reminding me of my reasons for signing up is reassuring. It restores the willpower I lose hearing those around me insulting my belief. Hearing that cryonics is good and I am good for signing up isn’t evidence that cryonics will work. Hearing that non-cryonicists will “regret” their choice certainly isn’t evidence that cryonics is the most effective way to save lives. But it is what I need to hear in order to not cave in to peer pressure and cancel my policy.
I get my beliefs from the evidence, but I’ll take my motivation from wherever I can find it.
Eliezer, I have been a frequent and enthusiastic participant on /r/hpmor for years before I decided to buck up and make a LessWrong account.
The most common sentiment about cryonics is “God dammit I have to stop procrastinating”,
I don’t recall someone answering my question in the other place I posted it, so I might as well ask you (since you would know): provided I am unwilling to believe current cryonic techniques actually work (even given a Friendly superintelligence that wants to bring people back), where can I be putting money towards other means of preserving people or life-extension in general?
Gwern had a posting once on something called “brain plastination”, which supposedly works “better” in some sense than freezing in liquid nitrogen, even though that still relies on em’ing you to bring you back, which frankly I find frightening as all hell. Is there active research into that? Into improved cryonics techniques?
Or should I just donate to anti-aging research on grounds that keeping people alive and healthy for longer before they die is a safer bet than, you know, finding ways to preserve the dead such that they can be brought back to life later?
There’s good and bad spite. Good spite is something like, “They call me mad! But I was right all along. Muahahaha!” and feeling proud and happy that you made the right choice despite opposition from others. Bad spite is something like, “I was right and they were wrong, and now they’re suffering for their mistakes. Serves them right”. One is accomplishment, the other is schadenfreude.
Yes, it is a great psychological coping mechanism. Death is such a deeply personal topic that it would be folly to assume fuzzies, or the avoidance of frighties, didn’t factor in.
However, such is the case with any measure or intervention explicitly relating to lifespan extension. So while extra guarding against motivated cognition is in order when dealing with one’s personal future non-existence and the postponing thereof, saying “you’re doing it because of the warm fuzzies!” isn’t sufficient rejection of death escapism.
The cryonics buyer may well answer “well, yes, that, and also, you know, the whole ‘potential future reanimation’ part”. You still have to engage with the object level.
Should a monk who has taken vows have a sin budget, because the flesh is weak?
You seem conflicted, believing you should not value your own life over others’, but continuing to do so; then justifying yielding to temptation on the grounds that you are tempted.
one is essentially admitting that cryonics isn’t about altruism, it’s about yourself.
Of course it is. Has it ever been presented as anything else, as “Escape death so you can do more for other people”? Support for cryonics is for the sake of everyone, but signing up to it is for oneself alone.
Should a monk who has taken vows have a sin budget, because the flesh is weak?
If that helps them achieve their vows overall.
I did try valuing the lives of others equally before. It only succeeded in making me feel miserable and preventing me from getting any good done. Tried that approach, doesn’t work. Better to compromise with the egoist faction and achieve some good, rather than try killing it with fire and achieve nothing.
Of course it is. Has it ever been presented as anything else
Should a monk who has taken vows have a sin budget, because the flesh is weak?
If that helps them achieve their vows overall.
The opportunity for self-serving application of this principle casts a shadow over all applications. I believe this hypothetical monk’s spiritual guide would have little truck with such excuses, rest and food, both in strict moderation, being all the body requires. (I have recently been reading the Sayings of the Desert Fathers and St John Climacus’ “Ladder of Divine Ascent”, works from the first few centuries of Christianity, and the rigours of the lives described there are quite extraordinary.)
Better to compromise with the egoist faction and achieve some good, rather than try killing it with fire and achieve nothing.
“It’s not me that wants this, it’s this other thing I share this body with.” Personally, that sounds to me like thinking gone wrong, whether you yield to or suppress this imaginary person. You appear to be identifying with the altruist faction when you write all this, but is that really the altruist faction speaking, or just the egoist faction pretending not to be? Recognising a conflict should be a first step towards resolving it.
Of course it is. Has it ever been presented as anything else
Once people start saying things like “It really is hard to find a clearer example of an avoidable Holocaust that you can personally do something substantial about now” or “If you don’t sign up your kids for cryonics then you are a lousy parent”, it’s hard to avoid reading a moral tone into them.
These are moral arguments for supporting cryonics, rather than for signing up oneself. BTW, if it’s sinfully self-indulgent to sign up oneself, how can you persuade anyone else to? Does a monk preach “eat, drink, and be merry”?
Finally, when I look at the world, I see almost no-one who values others above themselves. What, then, will the CEV of humanity have to say on the subject?
The opportunity for self-serving application of this principle casts a shadow over all applications.
[…]
Finally, when I look at the world, I see almost no-one who values others above themselves. What, then, will the CEV of humanity have to say on the subject?
I’m confused over what exactly your position is. The first bit I quoted seems to imply that you think that one should sacrifice everything in favor of altruism, whereas the second excerpt seems like a criticism of that position.
My position is that (1) the universal practice of valuing oneself over others is right and proper (and I expect others to rightly and properly value themselves over me, it being up to me to earn any above-baseline favour I may receive), (2) there is room for discussion about what base level of compassion one should have towards distant strangers (I certainly don’t put it at zero), and (3) I take the injunction to love one’s neighbour as oneself as a corrective to a too low level of (2) rather than as a literal requirement, a practical rule of thumb for debiasing rather than a moral axiom. Perfect altruism is not even what I would want to want.
The first bit I quoted seems to imply that you think that one should sacrifice everything in favor of altruism
I’m drawing out what I see as the implications of holding (which I don’t) that we ought to be perfectly altruistic, while finding (as I do) that in practice it is impossible. It leads, as you have found, to uneasy compromises guiltily taken.
I did say right in my original comment (emphasis added):
By going with the leisure budget argument, one is essentially admitting that cryonics isn’t about altruism, it’s about yourself. And of course, there is nothing wrong with that, since none of us is a 100% complete altruist who cares nothing about themselves, nor should we even try to idealize that kind of a person.
I will attempt a resolution: other people are as important as me, in principle, since I am not objectively anything special—but I should concentrate my efforts on myself and those close to me, because I understand my and their needs better, and can therefore be more effective.
I don’t think that’s a sufficient or effective compromise. If I’m given a choice between saving the life of my child, or the lives of a 1000 other children, I will always save my child. And I will only feel guilt to the extent that I was unable to come up with a 3rd option that saves everybody.
I don’t do it for some indirect reason such as that I understand my children’s needs better or such. I do it because I value my own child’s life more, plain and simple.
You might as well have asked: special to whom? Even if there is no objective importance or specialness anywhere, it still follows that I have no objective importance or specialness.
For the record, you do have a limited supply of willpower. I’m guessing those monks either had extraordinary willpower reserves or nonstandard worldviews that made abstinence actually easier than sin.
It seems they practice that willpower muscle very explicitly for hours every day. Abstinence should actually be pretty easy considering you have very little else to drain your willpower with.
Looking into your link now, but it was my understanding that the effect was weaker if the participant didn’t believe in it, not nonexistent (i.e. disbelieving in ego depletion has a placebo effect.)
Wikipedia, Font Of All Knowledge, concurs:
An individual’s perceived level of fatigue has been shown to influence their subsequent performance on a task requiring self-regulation, independent of their actual state of depletion.[14] This effect is known as illusory fatigue. This was shown in an experiment in which participants engaged in a task that was either depleting or non-depleting, which determined each individual’s true state of depletion. Ultimately, when participants were led to believe their level of depletion was lower than their true state of depletion, they performed much better on a difficult working memory task. This indicates that an increased perceived level of fatigue can hinder self-regulatory performance independent of the actual state of depletion.
[...]
An experiment by Carol Dweck and subsequent work by Roy Baumeister and Kathleen Vohs has shown that beliefs in unlimited self-control helps mitigate ego depletion for a short while, but not for long. Participants that were led to believe that they will not get fatigued performed well on a second task but were fully depleted on a third task.[16]
ETA: It seems the Wikipedia citation is to a replication attempt of your link. They found the effect was real, but it only lessened ego depletion—subjects who were told they had unlimited willpower still suffered ego depletion, just less strongly. So yup, placebo.
They found the effect was real, but it only lessened ego depletion—subjects who were told they had unlimited willpower still suffered ego depletion, just less strongly. So yup, placebo.
I’m not sure the word “placebo” makes sense when you are discussing purely psychological phenomena. Obviously any effects will be related to psychology; it’s not like they gave them a pill.
I … think it’s supposed to be regulated at least partially by glucose levels? So in some of the experiments, they were giving them sugar pills, or sugar water or something? I’m afraid this isn’t actually my field :(
But of course, no phenomenon is purely psychological (unless the patient is a ghost.) For example, I expect antidepressant medication is susceptible to the placebo effect.
Take, for example, the reaction to our claim that the glucose version of the resource argument is false (Kurzban 2010a). Inzlicht & Schmeichel, scholars who have published widely in the willpower-as-resource literature, more or less casually bury the model with the remark in their commentary that the “mounting evidence points to the conclusion that blood glucose is not the proximate mechanism of depletion.” (Malecek & Poldrack express a similar view.) Not a single voice has been raised to defend the glucose model, and, given the evidence that we advanced to support our view that this model is unlikely to be correct, we hope that researchers will take the fact that none of the impressive array of scholars submitting comments defended the view to be a good indication that perhaps the model is, in fact, indefensible. Even if the opportunity cost account of effort turns out not to be correct, we are pleased that the evidence from the commentaries – or the absence of evidence – will stand as an indication to audiences that it might be time to move to more profitable explanations of subjective effort.
While the silence on the glucose model is perhaps most obvious, we are similarly surprised by the remarkably light defense of the resource view more generally. As Kool & Botvinick put it, quite correctly in our perception: “Research on the dynamics of cognitive effort have been dominated, over recent decades, by accounts centering on the notion of a limited and depletable ‘resource’” (italics ours). It would seem to be quite surprising, then, that in the context of our critique of the dominant view, arguably the strongest pertinent remarks come from Carter & McCullough, who imply that the strength of the key phenomenon that underlies the resource model – two-task “ego-depletion” studies – might be considerably less than previously thought or perhaps even nonexistent. Despite the confidence voiced by Inzlicht & Schmeichel about the two-task findings, the strongest voices surrounding the model, then, are raised against it, rather than for it. (See also Monterosso & Luo, who are similarly skeptical of the resource account.)
Indeed, what defenses there are of the resource account are not nearly as adamant as we had expected. Hagger wonders if there is “still room for a ‘resource’ account,” given the evidence that cuts against it, conceding that “[t]he ego-depletion literature is problematic.” Further, he relies largely on the argument that the opportunity cost model we offer might be incomplete, thus “leaving room” for other ideas.
It seems to me that, even valuing your own life and the lives of others equally, it’s not necessarily inconsistent to pay much more for cryonics than it would cost to save a life by normal altruist means. Cryonics could save your life, and malaria nets could save somebody else’s life, but these two life-savings are not equal. If you’re willing to pay more to save a 5-year-old than an 85-year-old, then for some possible values of cryonics effectiveness, expectation of life quality post-resuscitation, and actual cost ratios, shutting up and multiplying could still favor cryonics.
If this argument carries, it would also mean that you should be spending money on buying cryonics for other people, in preference to any other form of altruism. But in practice, you might have a hard time finding people who would be willing to sign up for cryonics and aren’t already willing/able to pay for it themselves, so you’d probably have to default back to regular altruism.
If you do have opportunities to buy cryonics for other people, and you value all lives equally, then you’ve still got the problem of whether you should sign yourself up rather than somebody else. But multiplying doesn’t say you can’t save yourself first there, just that you have no obligation to do so.
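To illustrate the kind of multiplication being gestured at here, a hedged sketch with entirely hypothetical inputs (none of these figures come from the thread or from any real cost-effectiveness estimate):

```python
# Cost per quality-adjusted life-year (QALY) for a conventional intervention vs. cryonics.
# All inputs are assumed placeholders; only the structure of the comparison matters.

# Conventional life-saving charity (e.g. malaria nets)
cost_per_life_saved = 3000        # assumed cost to save one life
qalys_per_life_saved = 35         # assumed QALYs gained per life saved

# Cryonics
cryonics_cost = 80_000            # assumed total lifetime cost of signing up
p_revival = 0.05                  # assumed probability that it works
qalys_if_revived = 500            # assumed (highly speculative) QALYs of a post-revival life

charity_cost_per_qaly = cost_per_life_saved / qalys_per_life_saved        # ~86
cryonics_cost_per_qaly = cryonics_cost / (p_revival * qalys_if_revived)   # 3200

print(charity_cost_per_qaly, cryonics_cost_per_qaly)
```

With these made-up numbers the charity still wins, but raise p_revival or qalys_if_revived far enough and the comparison flips, which is the point being made above.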
If I value my life equally to the lives of others (...)
Edit: Since you don’t in terms of your revealed preferences, are you aspiring to actually reach such a state? Would an equal valuation of your life versus a random other life (say, in terms of QALYs) be a desirable Schelling point, or is “more altruistic” always preferable even at that point (race to the bottom)?
Depends on which part of my brain you ask. The altruistic faction does aspire to it, but the purely egoist faction doesn’t want to be eradicated, and is (at least currently) powerful enough to block attempts to eradicate it entirely. The altruist faction is also not completely united, as different parts of my brain have differing opinions on which ethical system is best, so e.g. my positive utilitarian and deontological groups might join the egoist faction in blocking moves that led to the installation of values that were purely negative utilitarian.
If I get a chance to actually act in accordance with my preferred values and behave more altruistically than normal, I’ll take it.
I don’t understand this values vs preferred values thing. It sounds like “if I get a chance to go against my actual values in favor of some fictional values, I’ll take it” which seems like a painful strategy. If you get to change your values in some direction permanently, it might work and I would understand why you’d want to change your cognition so that altruism felt better, to make your values more consistent.
It sounds like “if I get a chance to go against my actual values in favor of some fictional values, I’ll take it” which seems like a painful strategy.
This isn’t really different than any other situation where people wish they had a different characteristic than they do. Sometimes such preferences are healthy and benign in the case of other mental states, e.g., preferring to acquire more accurate beliefs. I don’t see any reason to think they can’t be healthy and benign in the case of preferring to change one’s preferences (e.g., to make them form a more consistent system, or to subordinate them to reflective and long-term preferences).
I would understand why you’d want to change your cognition so that altruism felt better, to make your values more consistent.
As I noted to Chris above, consistency isn’t necessarily the highest goal here. The best reason to change your values so that altruism feels better is because it enhances altruism, not because it enhances consistency.
This isn’t really different than any other situation where people wish they had a different characteristic than they do.
I disagree. In most cases like this people wish they were more empathetic to their future selves, which isn’t relevant in the case of tricking yourself to do radical altruism, if your future self won’t value it more than your current self.
The best reason to change your values so that altruism feels better is because it enhances altruism, not because it enhances consistency.
This argument depends entirely on how much you value altruism in the first place, which makes it not very appealing to me.
isn’t relevant in the case of tricking yourself to do radical altruism, if your future self won’t value it more than your current self.
I don’t see the relevance. In prudential cases (e.g., getting yourself to go on a diet), the goal isn’t to feel more empathy toward your future self. The goal is to get healthy; feeling more empathy toward your future self may be a useful means to that end, but it’s not the only possible one. Similarly, in moral cases (e.g., getting yourself to donate to GiveWell), the goal isn’t to feel more empathy toward strangers. The goal is to help strangers suffer and die less.
This argument depends entirely on how much you value altruism in the first place, which makes it not very appealing to me.
Suppose you see a child drowning in your neighbor’s pool, and you can save the child without incurring risk. But, a twist: You have a fear of water.
Kaj and I aren’t saying: If you’re completely indifferent to the suffering of others, then there exists an argument so powerful that it can physically compel you to save the child. If that’s your precondition for an interesting or compelling moral argument, then you’re bound to be disappointed.
Kaj and I are saying: If you care to some extent about the suffering of others, then it makes sense for you to wish that you weren’t averse to water, because your preference not to be in the water is getting in the way of other preferences that you much more strongly prefer to hold. This is true even if you don’t care at all about your aversion to bodies of water in other contexts (e.g., you aren’t pining to join any swim teams). For the same reason, it can make sense to wish that you weren’t selfish enough to squander money on bone marrow transplants for yourself, even though you are that selfish.
the goal isn’t to feel more empathy toward your future self. The goal is to get healthy; feeling more empathy toward your future self may be a useful means to that end, but it’s not the only possible one.
Sorry, I used empathy a bit loosely. Anyways, the goal is to generate utility for my future self. Empathy is one mechanism for that, and there are others. The only reason to lose weight and get healthy at least for me is that I know for sure my future self will appreciate that. Otherwise I would just binge to satisfy my current self.
Kaj and I aren’t saying: If you’re completely indifferent to the suffering of others, then there exists an argument so powerful that it can physically compel you to save the child
What I’m saying is that if the child was random and I had a high risk of dying when trying to save them then there’s no argument that would make me take that risk although I’m probably much more altruistic than average already. If I had an irrational aversion to water that actually reflected none of my values then of course I’d like to get rid of that.
Kaj and I are saying: If you care to some extent about the suffering of others, then it makes sense for you to wish that you weren’t averse to water, because your preference not to be in the water is getting in the way of other preferences that you much more strongly prefer to hold.
It seems to me more like you’re saying that if I have even an inkling of altruism in me then I should make it a core value that overrides everything else.
For the same reason, it can make sense to wish that you weren’t selfish enough to squander money on bone marrow transplants for yourself, even though you are that selfish.
I really don’t understand. Either you are that selfish, or you aren’t. I’m that selfish, but also happily donate money. There’s no argument that could change that. I think the human ability to change core values is very limited, much more limited than the human ability to lose weight.
The only reason to lose weight and get healthy at least for me is that I know for sure my future self will appreciate that.
No. There are also important things that my present self desires be true of my future self, to some extent independently of what my future self wants. For instance, I don’t want to take a pill that will turn me into a murderer who loves that he’s a murderer, even though if I took such a pill I’d be happy I did.
if the child was random and I had a high risk of dying when trying to save them then there’s no argument that would make me take that risk
If your risk of dying is high enough, then you shouldn’t try to save the child, since if you’re sure to die the expected value may well be negative. Still, I don’t see how this is relevant to any claim that anyone else on this thread (or in the OP) is making. ‘My altruism is limited, and I’m perfectly OK with how limited it is and wouldn’t take a pill to become more altruistic if one were freely available’ is a coherent position, though it’s not one I happen to find myself in.
If I had an irrational aversion to water that actually reflected none of my values then of course I’d like to get rid of that.
Then you understand the thing you were confused about initially: “I don’t understand this values vs preferred values thing.” Whether you call hydrophobia a ‘value’ or not, it’s clearly a preference; what Kaj and I are talking about is privileging some preferences over others, having meta-preferences, etc. This is pretty ordinary, I think.
It seems to me more like you’re saying that if I have even an inkling of altruism in me then I should make it a core value that overrides everything else.
Well, of course you should; when I say the word ‘should’, I’m building in my (conception of) morality, which is vaguely utilitarian and therefore is about maximizing, not satisficing, human well-being. For me to say that you should become more moral is like my saying that you shouldn’t murder people. If you’re inclined to murder people, then it’s unlikely that my saying ‘please don’t do that, it would be a breach of your moral obligations’ is going to have a large effect in dissuading you. Yet, all the same, it is bad to kill people, by the facts on the ground and the meaning of ‘bad’ (and of ‘kill’, and of ‘to’...). And it’s bad to strongly desire to kill people; and it’s bad to be satisfied with a strong desire to kill people; etc. Acts and their consequences can be judged morally even when the actors don’t themselves adhere to the moral system being used for judging.
I really don’t understand. Either you are that selfish, or you aren’t.
People aren’t any level of selfish consistently; they exhibit more selfishness in some situations than others. Kaj’s argument is that if I prize being altruistic over being egoistic, then it’s reasonable for me to put no effort into eliminating my aversion to cryonics, even though signing up for cryonics would exhibit no more egoism than the amount of egoism revealed in a lot of my other behaviors.
‘You ate those seventeen pancakes, therefore you should eat this muffin’ shouldn’t hold sway as an argument against someone who wants to go on a diet. For the same reason, ‘You would spend thousands of dollars on heart surgery if you needed it to live, therefore you should spend comparable amounts of money on cryonics to get a chance at continued life’ shouldn’t hold sway as an argument against someone who wants above all else to optimize for the happiness of the whole human species. (And who therefore wants to want to optimize for everyone’s aggregate happiness.)
I think the human ability to change core values is very limited, much more limited than the human ability to lose weight.
I’d love to see someone try to pick units with which to compare those two values. :)
Well, of course you should; when I say the word ‘should’, I’m building in my (conception of) morality, which is vaguely utilitarian and therefore is about maximizing, not satisficing, human well-being. For me to say that you should become more moral is like my saying that you shouldn’t murder people. [...] Acts and their consequences can be judged morally even when the actors don’t themselves adhere to the moral system being used for judging.
You should be more careful when thinking of examples and judging people explicitly. A true utilitarian would probably not want to make EA look as bad as you just did there, and would also understand that allies are useful to have even if their values aren’t in perfect alignment with yours. Because of that paragraph, it’s pretty difficult for me to look at anything else you said rationally.
Here’s some discussion by another person on why the social pressure applied by some EA people might be damaging to the movement.
I’m not trying to browbeat you into changing your values. (Your own self-descriptions make it sound like that would be a waste of time, and I’m really more into the Socratic approach than the Crusader approach.) I’m making two points about the structure of utilitarian reasoning:
1. ‘It’s better for people to have preferences that cause them to do better things.’ is nearly a tautology for consequentialists, because the goodness of things that aren’t intrinsically good is always a function of their effects. It’s not a bold or interesting claim; I could equally well have said ‘it’s good for polar bears to have preferences that cause them to do good things’. Ditto for Clippy. If any voluntary behavior can be good or bad, then the volitions causing such behavior can also be good or bad.
2. ‘Should’ can’t be relativized to the preferences of the person being morally judged, else you will be unable to express the idea that people are capable of voluntarily doing bad things.
Do you take something about 1 or 2 to be unduly aggressive or dismissive? Maybe it would help if you said more about what your own views on these questions are.
I’ll also say (equally non-facetiously): I don’t endorse making yourself miserable with guilt, forbidding yourself to go to weddings, or obsessing over the fact that you aren’t exactly 100% the person you wish you were. Those aren’t good for personal or altruistic goals. (And I think both of those matter, even if I think altruistic goals matter more.) I don’t want to lie to you about my ideals in order to be compassionate and tolerant of the fact that no one, least of all myself, lives up to them.
It would rather defeat the purpose of even having ideals if expressing or thinking about them made people less likely to achieve them, so I do hope we can find ways to live with the fact that our everyday moral heuristics don’t have to be (indeed, as a matter of psychological realism, cannot be) the same as our rock-bottom moral algorithm.
‘It’s better for people to have preferences that cause them to do better things.’ is nearly a tautology for consequentialists, because the goodness of things that aren’t intrinsically good is always a function of their effects.
Consequentialism makes no sense without a system that judges which consequences are good. By the way, I don’t understand why consequentialism and egoism would be mutually exclusive, which you seem to imply by conflating consequentialism and utilitarianism.
‘Should’ can’t be relativized to the preferences of the person being morally judged, else you will be unable to express the idea that people are capable of voluntarily doing bad things.
I don’t think I voluntarily do bad things according to my values, ever. I also don’t understand why other people would voluntarily do bad things according to their own values. My values change though, and I might think I did something bad in the past.
Other people do bad things according to my values, but if their actions are truly voluntary and I can’t point out a relevant contradiction in their thinking, saying they should do something else is useless, and working to restrict their behavior by other means would be more effective. Connotatively comparing them to murderers and completely ignoring that values have a spectrum would be one of the least effective strategies that come to mind.
Do you take something about 1 or 2 to be unduly aggressive or dismissive?
No.
I don’t want to lie to you about my ideals in order to be compassionate and tolerant of the fact that no one, least of all myself, lives up to them.
To me that seems like you’re ignoring what’s normally persuasive to people out of plain stubbornness. The reason I’m bringing this up is because I have altruistic goals too, and I find such talk damaging to them.
It would rather defeat the purpose of even having ideals if expressing or thinking about them made people less likely to achieve them
Having ideals is fine if you make it absolutely clear that’s all that they are. If thinking about them in a certain way motivates you, then great, but if it just makes some people pissed off then it would make sense to be more careful about what you say. Consider also that some people might have laxer ideals than you do, and still do more good according to your values. Ideals don’t make or break a good person.
I don’t understand why consequentialism and egoism would be mutually exclusive, which you seem to imply by conflating consequentialism and utilitarianism.
I’m not conflating the two. There are non-utilitarian moral consequentialisms. I’m not sure egoism qualifies, since egoism (like paperclip maximization) might not bear a sufficient family resemblance to the things we call ‘morality’. But that’s just a terminological issue.
If an egoist did choose to adopt moral terminology like ‘ought’ and ‘good’, and to cash those terms out using egoism, then the egoist would agree with my claim ‘It’s better for people to have preferences that cause them to do better things.’ But the egoist would mean by that ‘It better fits the goals of my form of egoism for people to have preferences that cause them to do things that make me personally happy’, whereas what I mean by the sentence is something more like ‘It better fits the goals of my form of altruism for people to have preferences that cause them to do things that improve the psychological welfare and preference-satisfaction of all agents’.
I don’t think I voluntarily do bad things according to my values, ever.
Interesting! Then your usage of ‘bad’ is very unusual. (Or your preferences and general psychological makeup are very unusual.) Most people think themselves capable of making voluntary mistakes, acting against their own better judgment, regretting their decisions, making normative progress, etc.
Connotatively comparing them to murderers
Sorry, I don’t think I was clear about why I drew this comparison. ‘Murder’ just means ‘bad killing’. It’s trivial to say that murder is bad. I was saying that it’s nearly as trivial to say that preferences that lead to bad outcomes are bad. But it would be bizarre for anyone to suggest that every suboptimal decision is as bad as murder! I clearly should have been more careful in picking my comparison, but I just didn’t think anyone would think I was honestly saying something almost unsurpassably silly.
I find such talk damaging to them.
What do you think is the best strategy for endorsing maximization as a good thing without seeming to endorse ‘you should feel horribly guilty and hate yourself if you haven’t 100% maximized your impact’? Or should we drop the idea that maximization is even a good thing?
Having ideals is fine if you make it absolutely clear that’s all that they are.
I don’t know what you mean by ‘that’s all they are’. Core preferences, ideals, values, goals… I’m using all these terms to pick out pretty much the same thing. I’m not using ‘ideal’ in any sense in which ideals are mere. They’re an encoding of the most important things in human life, by reference to optima.
Egoism is usually not the claim that everyone should act in the egoist’s self-interest, but that everyone should act in their own self-interest, i.e. “It better fits the goal of my egoism for people to have preferences that cause them to do things that make them happy”.
That’s true in the philosophical literature. But consequentialist egoism is a complicated, confusing, very hard to justify, and very hard to motivate view, since when I say ‘I endorse egoism’ in that sense I’m really endorsing two contradictory goals, not a single goal: (1) An overarching goal to have my personal desires met; (2) An overarching goal that every person act in whatever way ey expects to meet eir desires. The former ‘goal’ is the truer one, in that it’s the one that actually guides my actions to the extent I’m a ‘good’ egoist; the latter goal is a weird hanger-on that doesn’t seem to be action-guiding. If the two goals come in conflict, then the really important and valuable bit (from my perspective, as a hypothetical egoist) is that people satisfy my values, not that they satisfy their own; possibly the two goals don’t come into conflict that often, but it’s clear which one is more important when they do.
This is also useful because it sets up a starker contrast with utilitarianism; moral egoism as the SEP talks about it is a lot closer to descriptive egoism, and could well arise from utilitarianism plus a confused view of human psychology.
when I say ‘I endorse egoism’ in that sense I’m really endorsing two contradictory goals, not a single goal: (1) An overarching goal to have my personal desires met; (2) An overarching goal that every person act in whatever way ey expects to meet eir desires
The two goals don’t conflict, or, more precisely, (2) isn’t a goal, it’s a decision rule. There is no conflict in having the goal of having your personal desires met and believing that the correct decision rule is to do whatever maximizes the fulfillment of one’s own desires. It’s similar to how in the prisoner’s dilemma, each prisoner wants the other to cooperate, but doesn’t believe that the other prisoner should cooperate.
There is no conflict in having the goal of having your personal desires met and believing that the correct decision rule is to do whatever maximizes the fulfillment of one’s own desires.
I think it depends on what’s meant by ‘correct decision rule’. Suppose I came up to you and said that intuitionistic mathematics is ‘correct’, and conventional mathematics is ‘incorrect’; but not in virtue of correspondence to any non-physical mathematical facts; and conventional mathematics is what I want people to use; and using conventional mathematics, and treating it as correct, furthers everyone else’s goals more too; and there is no deeper underlying rule that rationally commits anyone to saying that intuitionistic mathematics is correct. What then is the content of saying that intuitionistic mathematics is right and conventional mathematics is wrong?
It’s similar to how in the prisoner’s dilemma, each prisoner wants the other to cooperate, but doesn’t believe that the other prisoner should cooperate.
I don’t think the other player will cooperate, if I think the other player is best modeled as a rational agent. I don’t know what it means to add to that that the other player ‘shouldn’t cooperate’. If I get into a PD with a non-sentient Paperclip Maximizer, I might predict that it will defect, but there’s no normative demand that it do so. I don’t think that it should maximize paperclips, and if a bolt of lightning suddenly melted part of its brain and made it better at helping humans than at making paperclips, I wouldn’t conclude that this was a bad or wrong or ‘incorrect’ thing, though it might be a thing that makes my mental model of the erstwhile paperclipper more complicated.
Sorry, I don’t know much about the philosophy of mathematics, so your analogy goes over my head.
I don’t know what it means to add to that that the other player ‘shouldn’t cooperate’.
It means that it is optimal for the other player to defect, from the other player’s point of view, if they’re following the same decision rule that you’re following. Given that you’ve endorsed this decision rule to yourself, you have no grounds on which to say that others shouldn’t use it as well. If the other player chooses to cooperate, I would be happy because my preferences would have been fulfilled more than they would have been had he defected, but I would also judge that he had acted suboptimally, i.e. in a way he shouldn’t have.
I’m not sure egoism qualifies, since egoism (like paperclip maximization) might not bear a sufficient family resemblance to the things we call ‘morality’. But that’s just a terminological issue.
I’d have no problem calling Clippy a consequentialist, but a polar bear would probably lack sufficient introspection. You have to have some inkling of what your values are to have morality. You’re right that it’s a terminology issue, and a difficult one at that.
‘It’s better for people to have preferences that cause them to do better things.’ But the egoist would mean by that ‘It better fits the goals of my form of egoism for people to have preferences that cause them to do things that make me personally happy’
Disclaimer: I use “pleasure” as an umbrella term for various forms of experiential goodness. Say there’s some utility cap in my brain that limits the amount of pleasure I can get from a single activity. One of these activities is helping other people, and the amount of pleasure I get from this activity is capped in such a way that I can only get under 50% of the maximum possible pleasure from altruism. Necessarily this will make me look for sources of pleasure elsewhere. What exactly does this make me? If I can’t call myself an egoist, then I’m at a loss here. Perhaps “egoism” is a reputation hit anyway and I should ditch the word, huh?
Actually, the reason why EA ideas appeal to me is that the pleasure I can get by using the money on myself seems to be already capped, I’m making much more money than I use, and I’m looking for other sources. Since I learned about fuzzies, being actually effective seems to be the only way to get any pleasure from this altruism thing.
Then your usage of ‘bad’ is very unusual.
Most people don’t do much introspection, so I would expect that. However you saying this surprises me, since I didn’t expect to be unusual in this crowd.
mistakes, acting against their own better judgment, regretting their decisions, making normative progress, etc.
These are all bad only in retrospect, and explicable by my having had insufficient information or different values compared to now (except for “normative progress”, which I don’t understand). Acting badly voluntarily would mean making a choice which I expect to have bad consequences. It might help your understanding to know which part of my decision process I usually identify with.
This brings up another terminological problem. See, I totally understand I better use the word “bad” in a way that other people understand me, but if I used it while I’m describing my own decision process, that would lead me to scold myself unnecessarily. I don’t think I voluntarily do anything bad in my brain, but it makes sense for other people to ascribe voluntary action to some of my mistakes, since they don’t really have access to my decision processes. I also have very different private and public meanings for the word “I”. In my private considerations, the role of “I” in my brain is very limited.
I just didn’t think anyone would think I was honestly saying something almost unsurpassably silly.
I probably should have just asked what you meant, since my brain came up with only the silly interpretation. I think the reason why I got angry at the murder example was the perceived social cost of my actions being associated with murder. Toe stubbing is trivially bad too, you know; badness scales. I made a mistake, but only in retrospect. I’ll make a different mistake next time.
What do you think is the best strategy for endorsing maximization as a good thing without seeming to endorse ‘you should feel horribly guilty and hate yourself if you haven’t 100% maximized your impact’? Or should we drop the idea that maximization is even a good thing?
When I first learned how little a life costs, my reaction wasn’t guilt, at least not for long. This led me to think “wow, apparently I care about people suffering much less than I previously thought, wonder why that is”, not “I must be mistaken about my values and should feel horrible guilt for not maximizing my actual values”.
As I previously described, motivation for altruism is purely positive for me, and I’m pretty sure that if I associated EA with guilt, that would make me ditch the idea altogether and look for sources of pleasure elsewhere. I get depressed easily, which makes any negative motivation very costly.
I’m not motivated by the idea of maximization in itself, but it helps my happiness to know how much my money can buy. One person’s idea of motivational can be another person’s idea of demotivational. I think we should try to identify our audience to maximize impact. As a default I’d still try to motivate people positively, rather than associate crappy feelings with the important ideas. Human brains are predictably irrational, and there’s a difference between saying you can save several lives in a month and be a superhero by donating, and saying you can be a serial killer by spending the money on yourself.
I don’t understand this values vs preferred values thing.
In Yvain’s liking/wanting/endorsing categorization, “preferred values” corresponds to any values that I approve of. Another way of saying it would be that there are modules in my brain which execute one set of behaviors, whereas another set of modules would prefer to be engaging in some other set of behaviors. Not really different from any situation where you end up doing something that you think that you shouldn’t.
If you approve of these values, why don’t you practice them? It seems to me that approving of a value means you want others to practice it, regardless of whether you want it for yourself.
It seems to me you’re looking for temporal consistency. My problem understanding you stems from the fact that I don’t expect my future self to wish I had been any more altruistic than I’m right now. I don’t think being conflicted makes much sense without considering temporal differences in preference, and I think Yvain’s descriptions fit this picture.
I guess you could frame it as a temporal inconsistency as well, since it does often lead to regret afterwards, but it’s more of an “I’m doing this thing even though I know it’s wrong” thing: not a conflict between one’s current and future self, but rather a conflict between the good of myself and the good of others.
Interesting. I wonder if we have some fundamental difference in perceived identity at play here. It makes no sense to me to have a narrative where I do things I don’t actually want to do.
Say I attach my identity to my whole body. There will be no conflict here, since whatever I do is the result of a resolved conflict hidden in the body, and therefore I must want to do whatever I’m doing.
Say I attach my identity to my brain. My brain can want things that my body cannot do, but whatever the brain tells the body to do, will be a result of a resolved conflict hidden inside the brain and I will tell my body to do whatever I want my body to do. Whatever conflict of preferences arises will be a confusion of identity between the brain and the body.
Say I attach my identity to a part of my brain, to this consciousness thing that seems to be in charge of some executive functions, probably residing in the frontal cortex. Whatever this part of the brain tells the rest of the brain will be a result of a resolved conflict hidden inside this part of the brain and again whatever I tell the rest of my brain to do will necessarily have to be what I want to tell it to do, but I can’t expect the rest of my brain to do something it cannot do. Whatever conflict arises will be a confusion of identity between this part and the rest of the brain.
I can think of several reasons why I’d want to assume a conflicted identity and almost all of them involve signalling and social convenience.
Say I attach my identity to my brain. My brain can want things that my body cannot do, but whatever the brain tells the body to do, will be a result of a resolved conflict hidden inside the brain and I will tell my body to do whatever I want my body to do.
I think the difference here is that, from the inside, it often doesn’t feel like my actions were the result of a resolved conflict. Well, in a sense they were, since otherwise I’d have been paralyzed with inaction. But when I’m considering some decision that I’m conflicted over, it very literally feels like there’s an actual struggle between different parts of my brain, and when I do reach a decision, the struggle usually isn’t resolved in the sense of one part making a decisive argument and the other part acknowledging that they were wrong. (Though that does happen sometimes.)
Rather it feels like one part managed to get the upper hand and could temporarily force the other part into accepting the decision that was made, but the conflict isn’t really resolved in any sense—if the circumstances were to change and I’d have to make the same decision again, the loser of this “round” might still end up winning the next one. Or the winner might get me started on the action but the loser might then make a comeback and block the action after all.
That’s also why it doesn’t seem right to talk about this as a conflict between current and future selves. That would seem to imply that I wanted thing X at time T, and some other thing Y at T+1. If you equated “wanting” with “the desire of the brain-faction that happens to be the strongest at the time when one’s brain is sampled”, then you could kind of frame it like a temporal conflict… but it feels like that description is losing information, since actually what happens is that I want both X and Y at both times: it’s just the relative strength of those wants that varies.
when I’m considering some decision that I’m conflicted over, it very literally feels like there’s an actual struggle between different parts of my brain
Ok. To me it most often feels like I’m observing that some parts of my brain struggle and that I’m there to tip the scales, so to speak. This doesn’t necessarily lead to a desirable outcome if my influence isn’t strong enough. I can’t say I feel conflicted about in what direction to tip the scales, but I assume this is just because I’m identifying with a part of my brain that can’t monitor its inner conflicts. I might have identified with several conflicting parts of my brain at once in the past, but don’t remember what it felt like, nor would I be able to tell you how this transformation might have happened.
Rather it feels like one part managed to get the upper hand and could temporarily force the other part into accepting the decision that was made, but the conflict isn’t really resolved in any sense
This sounds like tipping the scales. Are you identifying with several conflicting processes, or are you just expressing yourself in a socially convenient manner? If you’re X that’s trying to make process A win over process B in your brain, and process B wins in a way that leads to undesirable action, does it make any sense to say that you did something you didn’t want to do?
Your description of tipping the scale sounds about right, but I think that it only covers two of the three kinds of scenarios that I experience:
1. I can easily or semi-easily tip the scale in some direction, possibly with an expenditure of willpower. I would mostly not classify this as a struggle: instead I just make a decision.
2. I would like to tip the scale in some direction, but fail (and instead end up procrastinating or whatever), or succeed but only by a thin margin. I would classify this as a struggle.
3. I could tip the scale if I just decided what direction I wanted to tip it in, but I’m genuinely unsure of what direction I should tip it in.
If scenario #1 feels like an expenditure of willpower in order to override a short-term impulse in favor of a long-term goal, and #2 like a failed or barely successful attempt to do so, then #3 feels like trying to decide what the long-term goal should be. Putting it differently, #3 feels like a situation where the set of processes that do the tipping do not necessarily have any preferences of their own, but rather act as the “carriers” of a set of preferences that multiple competing lower-level systems are trying to install in them. (Actually, that description doesn’t feel quite right, but it’s the best I can manage right now.)
I now realize that I hadn’t previously clearly made the distinction between those different scenarios, and may have been conflating them to some extent. I’ll have to rethink what I’ve said here in light of that.
I think that I identify with each brain-faction that has managed to “install” “its” preferences in the scale-tipping system at some point. So if there is any short-term impulse that all the factions think should be overridden given the chance, then I don’t identify with that short-term impulse, but since e.g. both the negative utilitarian and deontological factions manage to take control at times, I identify with both to some extent.
It means different “modules” of your mind have different values, and on reflection you favor one module over the other.
Part of why this still sounds problematic is that we have a hard time unravelling the “superego” (the metaphorical mental module responsible for enforcing nonselfish/pro-social values) from full and complete moral cognition. Thus, many people believe they believe they should be selfless to the point of self-sacrificing, even though, if you cloned them and actually made the clone that selfless, they would not endorse the clone as being a superior version of themselves.
It feels to me like the general pro-cryo advocacy here would be a bit of a double standard, at least when compared to general memes of effective altruism, shutting up and multiplying, and saving the world. If I value my life equally to the lives of others, it seems pretty obvious that there’s no way by which the money spent on cryonics would be a better investment than spending it on general do-gooding.
I think the scale on which it is done is the main thing here. Currently, cryonics is performed so infrequently that there isn’t much infrastructure for it. So it is still fairly expensive compared to the amount of expected utility—probably close to the value implied by regulatory tradeoffs ($5 million per life). On a large, industrial scale I expect it to be far better value than anything Givewell is going to find.
This is a good argument, capable of convincing me of a pro-cryonics position, if and only if someone can follow this claim with evidence pointing to a high probability estimate that preservation and restoration will become possible within a reasonable time period.
If it so happens that cryopreservation fails to prevent information-theoretic death, then the value of your cryo-warehouses filled with corpses will amount to exactly $0 (unless you also preserve the organs for transplants).
This is a good argument, capable of convincing me of a pro-cryonics position, if and only if someone can follow this claim with evidence pointing to a high probability estimate that preservation and restoration will become possible within a reasonable time period.
At some point, you will have to specialize in cryobiology and neuroscience (with some information science in there too) in order to process the data. I can understand wanting to see the data for yourself, but expecting everyone to process it rationally and in depth before they get on board isn’t necessarily realistic for a large movement. Brian Wowk has written a lot of good papers on the challenges and mechanisms of cryopreservation, including cryoprotectant toxicity. Definitely worth reading up on. Even if you don’t decide to be pro-cryonics, you could use a lot of the information to support something related, like cryopreservation of organs.
If it so happens that cryopreservation fails to prevent information-theoretic death, then the value of your cryo-warehouses filled with corpses will amount to exactly $0 (unless you also preserve the organs for transplants).
Until you have enough information to know, with very high confidence, that information-theoretic death has happened in the best cases, you can’t really assign it all a $0 value in advance. You could perhaps assign a lower value than the cost of the project, but you would have to have enough information to do so justifiably. Ignorance cuts both ways here, and cryonics has traditionally been presented as an exercise in decision-making under conditions of uncertainty. I don’t see a reason that logic would change if there are millions of patients under consideration. (Although it does imply more people with an interest in resolving the question one way or another, if possible.)
I don’t quite agree that the value would be zero if it failed. It would probably displace various end-of-life medical and funeral options that are net-harmful, reduce religious fundamentalism, and increase investment in reanimation-relevant science (regenerative medicine, programmable nanodevices, etc). It would be interesting to see a comprehensive analysis of the positive and negative effects of cryonics becoming more popular. More organs for transplantation could be one effect worth accounting for, since it does not seem likely that we will need our original organs for reanimation. There would certainly be more pressure towards assisted suicide, so that could be positive or negative depending how you look at it.
If I value my life equally to the lives of others, it seems pretty obvious that there’s no way by which the money spent on cryonics would be a better investment than spending it on general do-gooding.
This just shifts the question to whether promoting cryonics is an effective form of general consequentialist do-gooding. There are a lot of factors to consider with regard to large-scale cryonics:
Effects on funding/enthusiasm for new technologies due to alignment of incentives.
Effects on mitigation of existential risks, long-term economic policies, and investments.
How much cheaper it gets when practiced on a large industrial scale.
How much more reliable it becomes when practiced on a large industrial scale.
Displacement of wasteful funeral practices.
Displacement of wasteful end-of-life medical practices.
Reduced religious fundamentalism, due to less belief in innate immortality.
Reduced luxury purchases due to altered time preferences.
Relative number of people who could be saved by cryonics but not by any other available technology.
There are some plausible negative effects to consider as well:
A larger industry has more opportunities for corruption and mistakes, so it would probably be more regulated on a larger scale, resulting in higher administrative costs and restrictions on experimentation.
People might be less concerned with preventing some health problems (while being more concerned with others, including traffic fatalities and heart disease) as the result of risk compensation.
The pressure to cure diseases in the short term could be reduced. Some patients with terminal cases might decide to die earlier than they otherwise would (which would turn out to be permanent if cryonics fails to work for them).
However, the costs aren’t likely to outweigh (or even significantly approach) the savings and benefits in my estimation. In many cases the apparent negatives (e.g. people checking out early, or reducing the overt pressure on scientists to cure cancer ASAP) could be a blessing in disguise (less suffering, less bad data). The regulation aspect probably actually benefits from cryonics being a larger and more visible industry, as the alternative is for regulations on the topic to be passed by non-sympathetic outside industries such as death care (funeral directors associations) and tissue banking (nontransplant anatomical donation organizations).
As it stands, LN2 costs of storage are fairly minimal (around $10/year per neuro patient, going by CI figures, or $1000 per patient assuming 1% interest on a long-term deposit), and can be dramatically reduced by larger scale storage spaces. Most of the money is going into standby, administration, equipment, and so forth, which are also likely to be a) scale friendly and b) already heavily invested in by the conventional medical community.
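(A quick sanity check of that endowment figure, as a minimal Python sketch; the only inputs are the $10/year and 1% interest assumptions quoted above, and the variable names are mine.)

```python
# Back-of-the-envelope check of the storage endowment mentioned above.
annual_ln2_cost = 10.0   # assumed $/year of LN2 per neuro patient, as quoted above
real_interest = 0.01     # assumed 1% long-term interest rate

# A deposit paying out `annual_ln2_cost` indefinitely needs a principal of cost / rate.
endowment_needed = annual_ln2_cost / real_interest
print(f"Endowment per neuro patient: ${endowment_needed:,.0f}")  # -> $1,000
```

So the $1000-per-patient figure is just the perpetuity value of $10/year at 1%; cheaper bulk storage or a higher interest rate would shrink it proportionally.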
There’s also the long-term financial services aspect. A large chunk is going into long-term savings / low-risk investment. My understanding is that this promotes economic growth.
The funds set aside for cryonics reanimation will eventually go to medical research and infrastructure to reanimate patients. This could take more than one form. Programmed nanorepair and/or uploading are the currently expected forms for today’s patients, but that expectation does not necessarily hold for all future forms of cryonics. We might, at some point in the next few decades, reduce the brain damage factor to a point where biologically based regenerative techniques (tissue printing, stem cells, synthetic microbes, etc.) are plausible enough on their own. These technologies, or at least the basic science needed to achieve them, would obviously have uses outside the domain of cryonics.
So the direct and indirect results of cryonics seem to me to be good enough that a non-hypocritical EA might plausibly think it is a good idea to promote by whatever means they can. Signing up for it oneself might be useful to boost the credibility of discussing it with friends, especially if you have a social group that includes wealthy people who might donate to cryonics research or assist the transition to a larger infrastructure down the road somewhere. The question is whether this can beat something less expensive like an Adwords campaign of equivalent ongoing cost (say $80/month).
Consistency is a good thing, but it can be outweighed by other considerations. If my choices are between consistently giving the answer ‘2 + 2 = 5’ on a test or sometimes giving ‘2 + 2 = 5’ and other times ′ 2 + 2 = 4′, the latter is probably preferable. Kaj’s argument is that if you core goal is EA, then spending hundreds of thousands of dollars on cryonics or heart surgery is the normatively wrong answer. Getting the wrong answer more often is worse than getting it less often, even when the price is a bit of inconsistency or doing-the-right-thing-for-the-wrong-reasons. When large numbers of lives are at stake, feeling satisfied with how cohesive your personal narrative or code of conduct is is mostly only important to the extent it serves the EA goal.
If you think saving non-human animals is the most important thing you could be doing, then it may be that you should become a vegan. But it’s certainly not the case that if you find it too difficult to become a vegan, you should therefore stop trying to promote animal rights. Your original goal should still matter (if it ever mattered in the first place) regardless of how awkward it is for you to explain and justify your behavioral inconsistency to your peers.
I endorse this summary.
While I don’t think that there’s anything wrong with preferring to be consistent about one’s selfishness, I think it’s just that: a preference.
The common argument seems to be that you should be consistent about your preferences because that way you’ll maximize your expected utility. But that’s tautological: expected utility maximization only makes sense if you have preferences that obey the von Neumann-Morgenstern axioms, and you furthermore have a meta-preference for maximizing the satisfaction of your preferences in the sense defined by the math of the axioms. (I’ve written a partial post about this, which I can try to finish if people are interested.)
For some cases, I do have such meta-preferences: I am interested in the maximization of my altruistic preferences. But I’m not that interested in the maximization of my other preferences. Another way of saying this would be that it is the altruistic faction in my brain which controls the verbal/explicit long-term planning and tends to have goals that would be ordinarily termed as “preferences”, while the egoist faction is more motivated by just doing whatever feels good at the moment and isn’t that interested in the long-term consequences.
Another way of putting this: If you divide the things you do between “selfish” and “altruistic” things, then it seems to make sense to sign up for cryonics as an efficient part of the “selfish” component. But this division does not carve at the joints, and it is more realistic to the way the brain works to slice the things you do between “Near mode decisions” and “Far mode decisions”. Then effective altruism wins over cryonics under Far considerations, and neither is on the radar under Near ones.
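(To make the point above about the von Neumann-Morgenstern machinery concrete, here is a minimal sketch; the lotteries and utility numbers are invented purely for illustration.)

```python
# Toy expected-utility comparison between two lotteries, each a list of
# (probability, utility) outcomes. All numbers are illustrative assumptions.
lottery_a = [(0.9, 10.0), (0.1, 0.0)]
lottery_b = [(0.5, 25.0), (0.5, 0.0)]

def expected_utility(lottery):
    # Standard expectation: sum of probability * utility over the outcomes.
    return sum(p * u for p, u in lottery)

# Expected utility ranks B above A (12.5 vs 9.0), but that ranking is only
# action-guiding if your preferences satisfy the vNM axioms and you also have
# the meta-preference to maximize this particular function.
print(expected_utility(lottery_a), expected_utility(lottery_b))
```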
A huge number of people save money for a retirement that won’t start for over a decade. For them, both retirement planning and cryonics fall under the selfish, far mode.
That is true. On the other hand, saving for retirement is a common or even default thing to do in our society. If it wasn’t, then I suspect many of those who currently do it wouldn’t do it for similar reasons to those why they don’t sign up for cryonics.
I suspect most people’s reasons for not signing up for cryonics amount to “I don’t think it has a big enough chance of working and paying money for a small chance of working amounts to Pascal’s Mugging.” I don’t see how that would apply to retirement—would people in such a society seriously think they have only a very small chance of surviving until retirement age?
0.23% is not a significant chance.
(Disclaimer: I absolutely promise that I am not evil.)
Question: why the hell not? My brain processed this kind of question for the first time around fourth grade, when wanting special privileges to go on a field trip with the other kids despite having gotten in trouble. The answer I came up with then is the one I still use now: “why me? Because of Kant’s Categorical Imperative” (that is, I didn’t want to live in a world where nobody went on the field trip, therefore I should get to go on it—though this wasn’t exactly clear thinking regarding the problem I really had at the time!). I would not want to live in a world where everyone kept their own and everyone else’s lifestyle to an absolute minimum in order to act with maximal altruism. Quite to the contrary: I want everyone to have as awesome a life as it is physically feasible for them to have!
I also do give to charity, do pay my taxes, and do support state-run social-welfare programs. So I’m not advocating total selfishness. I’m just proposing a heuristic: before advocating a certain level of altruism, check whether you’re ok with that level of altruism becoming a Categorical Imperative, such that the Altruism Fairy brainwashes everyone into that level precisely.
In which case, yes, one should value one’s own life over charity levels. After all, it’s exactly what the charity recipients will do!
(Again, disclaimer: I swear I’m not evil.)
I think that the argument you’re going for here (though I’m not entirely sure, so do correct me if I’m misinterpreting you) is “if everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable, and thus a dedication to full altruism that makes you miserable is counterproductive to being altruistic”.
And I agree! I think every altruist should take care of themselves first—for various reasons, including the one you mentioned, and also the fact that miserable people aren’t usually very effective at helping others, and because you can inspire more people to become altruistic if they see that it’s possible to have an awesome time while being an altruist.
But of course, “I should invest in myself because having an awesome life lets me help others more effectively” is still completely compatible with the claim of “I shouldn’t place more intrinsic value on myself than on others”. It just means you’re not being short-sighted about it.
More like, “If everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable, therefore total altruism is an incoherent value insofar as you expect anyone (including yourself) to ever actually follow it to its logical conclusion, therefore you shouldn’t follow it in the first place.”
Or, put simply, “Your supposed all-altruism is self-contradictory in the limit.” Hence my having to put a disclaimer saying I’m not evil, since that’s one of the most evil-villain-y statements I’ve ever made.
Of course, there are complications. For one thing, most people don’t have the self-destructive messiah complex necessary for total altruism, so you can’t apply first-level superrationality (ie: the Categorical Imperative) as including everyone. What I do endorse doing is acting with a high-enough level of altruism to make up for the people who don’t act with any altruism while also engaging in some delta of actual non-superrational altruism.
How to figure out what level of altruistic action that implies, I have no idea. But I think it’s better to be honest about the logically necessary level of selfishness than to pretend you’re being totally altruistic but rationalize reasons to take care of yourself anyway.
Sorry, I don’t follow. If the logical result of accepting full misery to oneself would be everyone being miserable, why wouldn’t the altruists just reason this out and not accept full misery to themselves? “Valuing everyone the same as yourself” doesn’t mean you’d have to let others treat you any way they like, it just means you’d in principle be ready for it, if it was necessary.
(I think we’re just debating semantics rather than disagreeing now, do you agree?)
I think we have slightly different values, but are coming to identical practical conclusions, so we’re agreeing violently.
EDIT: Besides, I totally get warm fuzzies from being nice to people, so it’s not like I don’t have a “selfish” motivation towards a higher level of altruism, anyway. SWEAR I’M NOT EVIL.
You said you’d prefer everyone to live awesome lives, I’m not sure how that could be construed as evil. :)
Serious answer: Even if I don’t endorse it, I do feel a pang of guilt/envy/low-status at being less than 100% a self-impoverishing Effective Altruist, which has been coming out as an urge to declare myself not-evil, even by comparison.
Joke answer: eyes flash white, sips tea. SOON.
Okay, in that case you should stop feeling those negative emotions right now. :) Nobody here is a 100% self-impoverishing EA, and we ended up agreeing that it wouldn’t even be a useful goal to have, so go indulge yourself in something not-at-all-useful-nor-altruistic and do feel good about it. :)
How confident of this are we?
I mean, there are many tasks which can lead to my happiness. If I perform a large subset of those tasks for my own benefit, they lead to a certain happiness-level for me… call that H1. If I perform a small subset of those tasks for everyone’s benefit, they lead to a different happiness-level, H2, for everyone including me. H2 is, of course, much lower than H1… in fact, H2 is indistinguishable from zero, really, unless I’m some kind of superstar. (I’m not aggregating across people, here, I’m just measuring how happy I am personally.)
So far, so good.
But if everyone else is also performing a small subset of those tasks for everyone’s benefit, then my happiness is N*H2. H2 is negligible, but N is large. Is (N*H2) > H1?
I really have no idea. On the face of it, it seems implausible. On the other hand, comparative advantage is a powerful force. We’ve discovered that when it comes to producing goods and services, for example, having one person performing a single task for everyone does much better than having everyone do everything for themselves.
Perhaps the same is true for producing happiness?
Which is not necessarily an argument for altruism in the real world, but in this hypothetical world where everyone acts with maximal altruism, maybe the end result is everyone is having a much more awesome life… they’re simply having it thanks to the efforts of a huge community, rather than entirely due to their own efforts.
Then again, that sounds like a pretty good description of the real world I live in, also.
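(A toy rendering of the N*H2 versus H1 question; every number below is an assumption of mine, chosen only to show that the answer hinges on how much specialization multiplies each person’s output.)

```python
# Illustrative only: does N * H2 exceed H1? All numbers are made up.
H1 = 100.0            # happiness from spending all my effort on myself
N = 10_000            # people each spreading their effort across everyone
specialization = 50   # assumed productivity multiplier from division of labour

# If I split the same effort across N people, each share is H1 / N,
# amplified by whatever specialization buys.
H2 = (H1 / N) * specialization
total_from_others = N * H2

print(H2, total_from_others, total_from_others > H1)
# With no specialization (multiplier = 1) the comparison is exactly break-even.
```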
I think this is why it feels squicky trying to assign a monetary value to my life; part of me thinks it’s selfish to assign any more value to my life than Givewell’s stated cost to save a stranger’s life ($1700-ish??) But I know I value it more than that. I wouldn’t risk my life for a paycheck.
Do you drive to work?
I bike, which might be worse but also might be better; depends how much the added lifespan from physical fitness trades off against the risk of an accident. And the risk is very likely less than 1/1000 given the years that I’ve been biking accident-free, so there’s a multiplication there.
I rather suspect it depends primarily on where you bike. Biking through streets of Manhattan has different risk than biking on rural Wyoming roads.
Driving under the same conditions has similar risk disparity.
I rather doubt that—do you have data?
I seem to remember the answer being that cycling is more dangerous per mile than driving, but that the increase in physical fitness more than compensates in all-cause mortality terms. The first paper I found seems to point to the same conclusion.
I don’t know how that would be adjusted in someone that already has fitness habits. It probably also depends on how well developed the cycling infrastructure in your town is, but I’ve never seen any actual data on that either.
In my experience bicycling is much safer. I have been cycling more or less every day since I was at least 8, and have never been in a life-threatening accident. However, while traveling by car, I have been in 2 or 3 potentially life-threatening crashes. But this will be very dependent on location, culture, and personal variables.
Do you know of a safer way to commute that lets you keep the same range of possible jobs?
If you got a lethal disease with a very expensive treatment, and you could afford it, would you refuse the treatment? What would the threshold price be? Does this idea feel as squicky as spending on cryonics?
Depends: has the treatment been proven to work before?
(Yes, I’ve heard the probability calculations. I don’t make medical decisions based on plausibility figures when it has simply never been seen to work before, even in animal models.)
Part of shutting up and multiplying is multiplying through the probability of a payoff with the value of the payoff, and then treating it as a guaranteed gain of that much utility. This is a basic property of rational utility functions.
(I think. People who know what they’re talking about, feel free to correct me)
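(For concreteness, a minimal sketch of the “multiply through” move being described; the probability, payoff, and cost below are placeholders I made up, not anyone’s actual estimates.)

```python
# Expected-value treatment of an uncertain payoff, as described above.
p_works = 0.05            # assumed probability that the payoff materializes
payoff_value = 1_000_000  # assumed dollar value placed on the payoff
annual_cost = 80 * 12     # assumed ~$80/month of membership + insurance

expected_benefit = p_works * payoff_value  # treat as a guaranteed gain of this much
print(expected_benefit, expected_benefit > annual_cost)
```

Whether that move is legitimate when the probability itself is in dispute is exactly what the reply below pushes back on.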
You are correct regarding expected-utility calculations, but I make an epistemic separation between plausibilities and probabilities. Plausible means something could happen without contradicting the other things I know about reality. Probable means there is actually evidence something will happen. Expected value deals in probabilities, not plausibilities.
Now, given that cryonics has not been seen to work on, say, rats, I don’t see why I should expect it to already be working on humans. I am willing to reevaluate based on any evidence someone can present to me.
Of course, then there’s the question of what happens on the other side, so to speak, of who is restoring your preserved self and what they’re doing with you. Generally, every answer I’ve heard to that question made my skin crawl.
I bet you would. Lots of jobs have components (such as extra stress, less physical activity, or living in a dangerous or dirty city) that reduce life expectancy. Unless you pick the job which maximizes your life span, you would effectively be risking your life for a paycheck. Tradeoffs are impossible to escape, even if you don’t explicitly think about them.
In context, it seems uncharitable to read “risk my life” to include any risk small enough that taking it would still be consistent with valuing one’s own life far above $1700.
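(To put a number on that: even a very high valuation of your own life charges only a small expected cost against a tiny commuting risk, so accepting such risks for a paycheck is perfectly consistent with valuing your life far above $1700. Both figures below are assumptions for illustration.)

```python
# Expected annual cost of a small risk under a high value-of-life assumption.
value_of_own_life = 10_000_000              # assumed $, far above ~$1700
annual_commute_fatality_risk = 1 / 100_000  # assumed tiny yearly risk

expected_annual_cost = value_of_own_life * annual_commute_fatality_risk
print(expected_annual_cost)  # $100/year, easily outweighed by a salary
```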
Remember, your life has instrumental value others don’t; if you risk your life for a paycheck, you’re risking all future paychecks as well as your own life-value. The same applies to stressing yourself out obsessively working multiple jobs, robbing banks, selling your redundant organs … even simply attempting to spend all your money on charity and the cheapest of foods tends to be a fairly bad suggestion for the average human (although if you think you can pull it off, great!)
I imagine that a lot of people on Less Wrong get off on having someone tell them “with a strong tone of moral obligation” that death can be defeated and that they simply must invest their money in securing their own immortality. Even if it isn’t a valid moral argument, per se, phrasing it as one makes cryonics buyers feel better about their choice and increases the number of warm fuzzies they get from the thought that some day they’ll wake up in the future, alive and healthy with everyone congratulating them on being so very brave and clever and daring to escape death like that.
Just asking, were you trying to make that sound awful and smug? Because that honestly sounds like a future I don’t want to wake up in.
I want to wake up in the future where people have genuine compassion for the past, and are happy to welcome the “formerly dead” to a grand new life, hopefully even including their friends and loved ones who also made it successfully to “the Future”. If the post-cryonic psychological counsellors of the future woke me up with, “Congratulations, you made the right business decision!”, then I would infer that things had gone horribly wrong.
Lost in the wilderness, I think we should go North; you, South. If I find help, but learn that you died, my first thought will not be “neener neener told you so”.
Interesting...
Is it possible cryonic wakers might be treated very poorly? Perhaps stigmatized?
I’m very ignorant of what all is involved in either “end” of cryonics, but what if, say, the cost of resurrecting the frozen person is prohibitively high and future people lobby to stop their waking up? And even the ones who do wake up are treated like pariahs?
It might play out like the immigration situation in the US: A nation, founded by immigrants, that is now composed of a big chunk of citizens who hate immigrants.
I can already hear the arguments now…
“They won’t know if we don’t wake them up. Besides every one we wake costs us X resources which damages Y lives by Z%.”
How is that any different from saying “a nation, founded by slaveowners, that is now composed of a big chunk of citizens who hate slaveowners”? Certainly the fact that your ancestors benefited from being slaveowners is no reason why you should support slaveowners now.
Yep.
While genuine compassion is probably the ideal emotion for a post-cryonic counselor to actually show, it’s the anticipation of their currently ridiculed beliefs being validated, with a side order of justified smugness that gets people going in the here and now. There’s nothing wrong with that: “Everyone who said I was stupid is wrong and gets forced to admit it.” is probably one of the top ten most common fantasies and there’s nothing wrong with spending your leisure budget on indulging a fantasy. Especially if it has real world benefits too.
That’s… actually kinda sad, and I think I’m going to go feed my brain some warm fuzzies to counter it.
Trying to live forever out of spite instead of living well in the here and now that’s available? Silly humans.
Don’t worry, poiuyt is making all of this up. I don’t personally know of anyone to whom this imaginary scenario applies. The most common sentiment about cryonics is “God dammit I have to stop procrastinating”, hence the enjoinders are welcome; as for their origin point, well, have you read HPMOR up to Ch. 96?
I feel that I am being misunderstood: I do not suggest that people sign up for cryonics out of spite. I imagine that almost everyone signed up for cryonics does so because they actually believe it will work. That is as it should be.
I am only pointing out that being told that I am stupid for signing up for cryonics is disheartening. Even if it is not a rational argument against cryonics, the disapproval of others still affects me. I know this because my friends and family make it a point to regularly inform me of the fact that cryonics is “a cult”, that I am being “scammed out of my money” by Alcor and that even if it did work, I am “evil and wrong” for wanting it. Being told those things fills me with doubts and saps my willpower. Hearing someone on the pro-cryonics side of things reminding me of my reasons for signing up is reassuring. It restores the willpower I lose hearing those around me insulting my belief. Hearing that cryonics is good and I am good for signing up isn’t evidence that cryonics will work. Hearing that non-cryonicists will “regret” their choice certainly isn’t evidence that cryonics is the most effective way to save lives. But it is what I need to hear in order to not cave in to peer pressure and cancel my policy.
I get my beliefs from the evidence, but I’ll take my motivation from wherever I can find it.
Eliezer, I have been a frequent and enthusiastic participant on /r/hpmor for years before I decided to buck up and make a LessWrong account.
I don’t recall someone answering my question in the other place I posted it, so I might as well ask you (since you would know): provided I am unwilling to believe current cryonic techniques actually work (even given a Friendly superintelligence that wants to bring people back), where can I be putting money towards other means of preserving people or life-extension in general?
Gwern had a posting once on something called “brain plastination”, which supposedly works “better” in some sense than freezing in liquid nitrogen, even though that still relies on em’ing you to bring you back, which frankly I find frightening as all hell. Is there active research into that? Into improved cryonics techniques?
Or should I just donate to anti-aging research on grounds that keeping people alive and healthy for longer before they die is a safer bet than, you know, finding ways to preserve the dead such that they can be brought back to life later?
The Brain Preservation Foundation may be what you’re looking for.
There’s good and bad spite. Good spite is something like, “They call me mad! But I was right all along. Muahahaha!” and feeling proud and happy that you made the right choice despite opposition from others. Bad spite is something like, “I was right and they were wrong, and now they’re suffering for their mistakes. Serves them right”. One is accomplishment, the other is schadenfreude.
Yes, it is a great psychological coping mechanism. Death is such a deeply personal topic that it would be folly to assume fuzzies, or the avoidance of frighties, didn’t factor in.
However, such is the case with any measure or intervention explicitly relating to lifespan extension. So while extra guarding against motivated cognition is in order when dealing with one’s personal future non-existence and the postponing thereof, saying “you’re doing it because of the warm fuzzies!” isn’t sufficient rejection of death escapism.
The cryonics buyer may well answer “well, yes, that, and also, you know, the whole ‘potential future reanimation’ part”. You still have to engage with the object level.
Should a monk who has taken vows have a sin budget, because the flesh is weak?
You seem conflicted, believing you should not value your own life over others’, but continuing to do so; then justifying yielding to temptation on the grounds that you are tempted.
Of course it is. Has it ever been presented as anything else, as “Escape death so you can do more for other people”? Support for cryonics is for the sake of everyone, but signing up to it is for oneself alone.
If that helps them achieve their vows overall.
I did try valuing the lives of others equally before. It only succeeded in making me feel miserable and preventing me from getting any good done. Tried that approach, doesn’t work. Better to compromise with the egoist faction and achieve some good, rather than try killing it with fire and achieve nothing.
Once people start saying things like “It really is hard to find a clearer example of an avoidable Holocaust that you can personally do something substantial about now” or “If you don’t sign up your kids for cryonics then you are a lousy parent”, it’s hard to avoid reading a moral tone into them.
The opportunity for self-serving application of this principle casts a shadow over all applications. I believe this hypothetical monk’s spiritual guide would have little truck with such excuses, rest and food, both in strict moderation, being all the body requires. (I have recently been reading the Sayings of the Desert Fathers and St John Climacus’ “Ladder of Divine Ascent”, works from the first few centuries of Christianity, and the rigours of the lives described there are quite extraordinary.)
“It’s not me that wants this, it’s this other thing I share this body with.” Personally, that sounds to me like thinking gone wrong, whether you yield to or suppress this imaginary person. You appear to be identifying with the altruist faction when you write all this, but is that really the altruist faction speaking, or just the egoist faction pretending not to be? Recognising a conflict should be a first step towards resolving it.
These are moral arguments for supporting cryonics, rather than for signing up oneself. BTW, if it’s sinfully self-indulgent to sign up oneself, how can you persuade anyone else to? Does a monk preach “eat, drink, and be merry”?
Finally, when I look at the world, I see almost no-one who values others above themselves. What, then, will the CEV of humanity have to say on the subject?
[…]
I’m confused over what exactly your position is. The first bit I quoted seems to imply that you think that one should sacrifice everything in favor of altruism, whereas the second excerpt seems like a criticism of that position.
My position is that (1) the universal practice of valuing oneself over others is right and proper (and I expect others to rightly and properly value themselves over me, it being up to me to earn any above-baseline favour I may receive), (2) there is room for discussion about what base level of compassion one should have towards distant strangers (I certainly don’t put it at zero), and (3) I take the injunction to love one’s neighbour as oneself as a corrective to a too low level of (2) rather than as a literal requirement, a practical rule of thumb for debiasing rather than a moral axiom. Perfect altruism is not even what I would want to want.
I’m drawing out what I see as the implications of holding (which I don’t) that we ought to be perfectly altruistic, while finding (as I do) that in practice it is impossible. It leads, as you have found, to uneasy compromises guiltily taken.
I did say right in my original comment (emphasis added):
I will attempt a resolution: other people are as important as me, in principle, since I am not objectively anything special, but I should concentrate my efforts on myself and those close to me, because I understand my and their needs better, and can therefore be more effective.
I don’t think that’s a sufficient or effective compromise. If I’m given a choice between saving the life of my child, or the lives of a thousand other children, I will always save my child. And I will only feel guilt to the extent that I was unable to come up with a third option that saves everybody.
I don’t do it for some indirect reason such as that I understand my children’s needs better or such. I do it because I value my own child’s life more, plain and simple.
Important to whom?
You might as well have asked: special to whom? Even if there is no objective importance or specialness anywhere, it still follows that I have no objective importance or specialness.
For the record, you do have a limited supply of willpower. I’m guessing those monks either had extraordinary willpower reserves or nonstandard worldviews that made abstinence actually easier than sin.
It seems they exercise that willpower muscle very explicitly for hours every day. Abstinence should actually be pretty easy, considering you have very little else to drain your willpower with.
If you think so.
Looking into your link now, but it was my understanding that the effect was weaker if the participant didn’t believe in it, not nonexistent (i.e. disbelieving in ego depletion has a placebo effect.)
Wikipedia, Font Of All Knowledge, concurs:
ETA: It seems the Wikipedia citation is to a replication attempt of your link. They found the effect was real, but it only lessened ego depletion: subjects who were told they had unlimited willpower still suffered ego depletion, just less strongly. So yup, placebo.
I’m not sure the word “placebo” makes sense when you are discussing purely psychological phenomena. Obviously any effects will be related to psychology; it’s not like they gave them a pill.
I … think it’s supposed to be regulated at least partially by glucose levels? So in some of the experiments, they were giving them sugar pills, or sugar water or something? I’m afraid this isn’t actually my field :(
But of course, no phenomenon is purely psychological (unless the patient is a ghost.) For example, I expect antidepressant medication is susceptible to the placebo effect.
See here.
If it isn’t, you’re doing something wrong.
ETA: By which I don’t mean that it is easy to do it right. Practicing anything involves a lot of doing it wrong while learning to do it right.
It seems to me that, even valuing your own life and the lives of others equally, it’s not necessarily inconsistent to pay much more for cryonics than it would cost to save a life by normal altruist means. Cryonics could save your life, and malaria nets could save somebody else’s life, but these two life-savings are not equal. If you’re willing to pay more to save a 5-year-old than an 85-year-old, then for some possible values of cryonics effectiveness, expectation of life quality post-resuscitation, and actual cost ratios, shutting up and multiplying could still favor cryonics.
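For concreteness, here is a minimal sketch of that multiplication in Python. Every number in it is a made-up placeholder (probability of success, QALY figures, costs), not an estimate anyone in this thread has endorsed:

```python
# Toy "shut up and multiply" comparison: cryonics for yourself vs. malaria nets.
# Every number below is a hypothetical placeholder, not a real estimate.

p_cryonics_works = 0.05        # assumed probability of successful resuscitation
qalys_if_revived = 500         # assumed extra quality-adjusted life years if it works
cryonics_cost = 80 * 12 * 30   # assumed lifetime cost: $80/month over 30 years

qalys_per_net_life = 35        # assumed QALYs gained per life saved via bednets
cost_per_net_life = 5000       # assumed cost to save one life via bednets

cryo_qalys_per_dollar = (p_cryonics_works * qalys_if_revived) / cryonics_cost
nets_qalys_per_dollar = qalys_per_net_life / cost_per_net_life

print(f"cryonics: {cryo_qalys_per_dollar:.5f} QALYs per dollar")
print(f"bednets:  {nets_qalys_per_dollar:.5f} QALYs per dollar")
# Which option wins depends entirely on the placeholder values plugged in above.
```

The point is only that the comparison is sensitive to a handful of uncertain parameters, not that any particular set of values is the right one.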
If this argument carries, it would also mean that you should be spending money on buying cryonics for other people, in preference to any other form of altruism. But in practice, you might have a hard time finding people who would be willing to sign up for cryonics and aren’t already willing/able to pay for it themselves, so you’d probably have to default back to regular altruism.
If you do have opportunities to buy cryonics for other people, and you value all lives equally, then you’ve still got the problem of whether you should sign yourself up rather than somebody else. But multiplying doesn’t say you can’t save yourself first there, just that you have no obligation to do so.
Edit: Since you don’t in terms of your revealed preferences, are you aspiring to actually reach such a state? Would an equal valuation of your life versus a random other life (say, in terms of QALYs) be a desirable Schelling point, or is “more altruistic” always preferable even at that point (race to the bottom)?
Depends on which part of my brain you ask. The altruistic faction does aspire to it, but the purely egoist faction doesn’t want to be eradicated, and is (at least currently) powerful enough to block attempts to eradicate it entirely. The altruist faction is also not completely united, as different parts of my brain have differing opinions on which ethical system is best, so e.g. my positive utilitarian and deontological groups might join the egoist faction in blocking moves that led to the installation of values that were purely negative utilitarian.
Have you read the second paragraph of the comment you’re replying to?
Clarified in grandparent.
I don’t understand this values vs preferred values thing. It sounds like “if I get a chance to go against my actual values in favor of some fictional values, I’ll take it” which seems like a painful strategy. If you get to change your values in some direction permanently, it might work and I would understand why you’d want to change your cognition so that altruism felt better, to make your values more consistent.
This isn’t really different from any other situation where people wish they had a different characteristic than they do. Such preferences can be healthy and benign in the case of other mental states, e.g., preferring to acquire more accurate beliefs. I don’t see any reason to think they can’t be healthy and benign in the case of preferring to change one’s preferences (e.g., to make them form a more consistent system, or to subordinate them to reflective and long-term preferences).
As I noted to Chris above, consistency isn’t necessarily the highest goal here. The best reason to change your values so that altruism feels better is because it enhances altruism, not because it enhances consistency.
I disagree. In most cases like this people wish they were more empathetic to their future selves, which isn’t relevant in the case of tricking yourself to do radical altruism, if your future self won’t value it more than your current self.
This argument depends entirely on how much you value altruism in the first place, which makes it not very appealing to me.
I don’t see the relevance. In prudential cases (e.g., getting yourself to go on a diet), the goal isn’t to feel more empathy toward your future self. The goal is to get healthy; feeling more empathy toward your future self may be a useful means to that end, but it’s not the only possible one. Similarly, in moral cases (e.g., getting yourself to donate to GiveWell), the goal isn’t to feel more empathy toward strangers. The goal is to help strangers suffer and die less.
Suppose you see a child drowning in your neighbor’s pool, and you can save the child without incurring risk. But, a twist: You have a fear of water.
Kaj and I aren’t saying: If you’re completely indifferent to the suffering of others, then there exists an argument so powerful that it can physically compel you to save the child. If that’s your precondition for an interesting or compelling moral argument, then you’re bound to be disappointed.
Kaj and I are saying: If you care to some extent about the suffering of others, then it makes sense for you to wish that you weren’t averse to water, because your preference not to be in the water is getting in the way of other preferences that you much more strongly prefer to hold. This is true even if you don’t care at all about your aversion to bodies of water in other contexts (e.g., you aren’t pining to join any swim teams). For the same reason, it can make sense to wish that you weren’t selfish enough to squander money on bone marrow transplants for yourself, even though you are that selfish.
Sorry, I used empathy a bit loosely. Anyways, the goal is to generate utility for my future self. Empathy is one mechanism for that, and there are others. The only reason to lose weight and get healthy at least for me is that I know for sure my future self will appreciate that. Otherwise I would just binge to satisfy my current self.
What I’m saying is that if the child was random and I had a high risk of dying when trying to save them, then there’s no argument that would make me take that risk, although I’m probably much more altruistic than average already. If I had an irrational aversion to water that actually reflected none of my values, then of course I’d like to get rid of that.
It seems to me more like you’re saying that if I have even an inkling of altruism in me then I should make it a core value that overrides everything else.
I really don’t understand. Either you are that selfish, or you aren’t. I’m that selfish, but also happily donate money. There’s no argument that could change that. I think the human ability to change core values is very limited, much more limited than the human ability to lose weight.
No. There are also important things that my present self desires be true of my future self, to some extent independently of what my future self wants. For instance, I don’t want to take a pill that will turn me into a murderer who loves that he’s a murderer, even though if I took such a pill I’d be happy I did.
If your risk of dying is high enough, then you shouldn’t try to save the child, since if you’re sure to die the expected value may well be negative. Still, I don’t see how this is relevant to any claim that anyone else on this thread (or in the OP) is making. ‘My altruism is limited, and I’m perfectly OK with how limited it is and wouldn’t take a pill to become more altruistic if one were freely available’ is a coherent position, though it’s not one I happen to find myself in.
Then you understand the thing you were confused about initially: “I don’t understand this values vs preferred values thing.” Whether you call hydrophobia a ‘value’ or not, it’s clearly a preference; what Kaj and I are talking about is privileging some preferences over others, having meta-preferences, etc. This is pretty ordinary, I think.
Well, of course you should; when I say the word ‘should’, I’m building in my (conception of) morality, which is vaguely utilitarian and therefore is about maximizing, not satisficing, human well-being. For me to say that you should become more moral is like my saying that you shouldn’t murder people. If you’re inclined to murder people, then it’s unlikely that my saying ‘please don’t do that, it would be a breach of your moral obligations’ is going to have a large effect in dissuading you. Yet, all the same, it is bad to kill people, by the facts on the ground and the meaning of ‘bad’ (and of ‘kill’, and of ‘to’...). And it’s bad to strongly desire to kill people; and it’s bad to be satisfied with a strong desire to kill people; etc. Acts and their consequences can be judged morally even when the actors don’t themselves adhere to the moral system being used for judging.
People aren’t any level of selfish consistently; they exhibit more selfishness in some situations than others. Kaj’s argument is that if I prize being altruistic over being egoistic, then it’s reasonable for me to put no effort into eliminating my aversion to cryonics, even though signing up for cryonics would exhibit no more egoism than the amount of egoism revealed in a lot of my other behaviors.
‘You ate those seventeen pancakes, therefore you should eat this muffin’ shouldn’t hold sway as an argument against someone who wants to go on a diet. For the same reason, ‘You would spend thousands of dollars on heart surgery if you needed it to live, therefore you should spend comparable amounts of money on cryonics to get a chance at continued life’ shouldn’t hold sway as an argument against someone who wants above all else to optimize for the happiness of the whole human species. (And who therefore wants to want to optimize for everyone’s aggregate happiness.)
I’d love to see someone try to pick units with which to compare those two values. :)
You should be more careful when thinking of examples and judging people explicitly. A true utilitarian would probably not want to make EA look as bad as you just did there, and would also understand that allies are useful to have even if their values aren’t in perfect alignment with yours. Because of that paragraph, it’s pretty difficult for me to look at anything else you said rationally.
Here’s some discussion by another person on why the social pressure applied by some EA people might be damaging to the movement.
I’m not trying to browbeat you into changing your values. (Your own self-descriptions make it sound like that would be a waste of time, and I’m really more into the Socratic approach than the Crusader approach.) I’m making two points about the structure of utilitarian reasoning:
1. ‘It’s better for people to have preferences that cause them to do better things’ is nearly a tautology for consequentialists, because the goodness of things that aren’t intrinsically good is always a function of their effects. It’s not a bold or interesting claim; I could equally well have said ‘it’s good for polar bears to have preferences that cause them to do good things’. Ditto for Clippy. If any voluntary behavior can be good or bad, then the volitions causing such behavior can also be good or bad.
2. ‘Should’ can’t be relativized to the preferences of the person being morally judged, else you will be unable to express the idea that people are capable of voluntarily doing bad things.
Do you take something about 1 or 2 to be unduly aggressive or dismissive? Maybe it would help if you said more about what your own views on these questions are.
I’ll also say (equally non-facetiously): I don’t endorse making yourself miserable with guilt, forbidding yourself to go to weddings, or obsessing over the fact that you aren’t exactly 100% the person you wish you were. Those aren’t good for personal or altruistic goals. (And I think both of those matter, even if I think altruistic goals matter more.) I don’t want to lie to you about my ideals in order to be compassionate and tolerant of the fact that no one, least of all myself, lives up to them.
It would rather defeat the purpose of even having ideals if expressing or thinking about them made people less likely to achieve them, so I do hope we can find ways to live with the fact that our everyday moral heuristics don’t have to be (indeed, as a matter of psychological realism, cannot be) the same as our rock-bottom moral algorithm.
Consequentialism makes no sense without a system that judges which consequences are good. By the way, I don’t understand why consequentialism and egoism would be mutually exclusive, which you seem to imply by conflating consequentialism and utilitarianism.
I don’t think I voluntarily do bad things according to my values, ever. I also don’t understand why other people would voluntarily do bad things according to their own values. My values change though, and I might think I did something bad in the past.
Other people do bad things according to my values, but if their actions are truly voluntary and I can’t point out a relevant contradiction in their thinking, saying they should do something else is useless, and working to restrict their behavior by other means would be more effective. Connotatively comparing them to murderers and completely ignoring that values have a spectrum would be one of the least effective strategies that come to mind.
No.
To me that seems like you’re ignoring what’s normally persuasive to people out of plain stubbornness. The reason I’m bringing this up is because I have altruistic goals too, and I find such talk damaging to them.
Having ideals is fine if you make it absolutely clear that’s all that they are. If thinking about them in a certain way motivates you, then great, but if it just makes some people pissed off then it would make sense to be more careful about what you say. Consider also that some people might have laxer ideals than you do, and still do more good according to your values. Ideals don’t make or break a good person.
I’m not conflating the two. There are non-utilitarian moral consequentialisms. I’m not sure egoism qualifies, since egoism (like paperclip maximization) might not bear a sufficient family resemblance to the things we call ‘morality’. But that’s just a terminological issue.
If an egoist did choose to adopt moral terminology like ‘ought’ and ‘good’, and to cash those terms out using egoism, then the egoist would agree with my claim ‘It’s better for people to have preferences that cause them to do better things.’ But the egoist would mean by that ‘It better fits the goals of my form of egoism for people to have preferences that cause them to do things that make me personally happy’, whereas what I mean by the sentence is something more like ‘It better fits the goals of my form of altruism for people to have preferences that cause them to do things that improve the psychological welfare and preference-satisfaction of all agents’.
Interesting! Then your usage of ‘bad’ is very unusual. (Or your preferences and general psychological makeup is very unusual.) Most people think themselves capable of making voluntary mistakes, acting against their own better judgment, regretting their decisions, making normative progress, etc.
Sorry, I don’t think I was clear about why I drew this comparison. ‘Murder’ just means ‘bad killing’. It’s trivial to say that murder is bad. I was saying that it’s nearly as trivial to say that preferences that lead to bad outcomes are bad. But it would be bizarre for anyone to suggest that every suboptimal decision is as bad as murder! I clearly should have been more careful in picking my comparison, but I just didn’t think anyone would think I was honestly saying something almost unsurpassably silly.
What do you think is the best strategy for endorsing maximization as a good thing without seeming to endorse ‘you should feel horribly guilty and hate yourself if you haven’t 100% maximized your impact’? Or should we drop the idea that maximization is even a good thing?
I don’t know what you mean by ‘that’s all they are’. Core preferences, ideals, values, goals… I’m using all these terms to pick out pretty much the same thing. I’m not using ‘ideal’ in any sense in which ideals are mere. They’re an encoding of the most important things in human life, by reference to optima.
Egoism is usually not the claim that everyone should act in the egoist’s self-interest, but that everyone should act in their own self-interest, i.e. “It better fits the goal of my egoism for people to have preferences that cause them to do things that make them happy”.
That’s true in the philosophical literature. But consequentialist egoism is a complicated, confusing, very hard to justify, and very hard to motivate view, since when I say ‘I endorse egoism’ in that sense I’m really endorsing two contradictory goals, not a single goal: (1) An overarching goal to have my personal desires met; (2) An overarching goal that every person act in whatever way ey expects to meet eir desires. The former ‘goal’ is the truer one, in that it’s the one that actually guides my actions to the extent I’m a ‘good’ egoist; the latter goal is a weird hanger-on that doesn’t seem to be action-guiding. If the two goals come in conflict, then the really important and valuable bit (from my perspective, as a hypothetical egoist) is that people satisfy my values, not that they satisfy their own; possibly the two goals don’t come into conflict that often, but it’s clear which one is more important when they do.
This is also useful because it sets up a starker contrast with utilitarianism; moral egoism as the SEP talks about it is a lot closer to descriptive egoism, and could well arise from utilitarianism plus a confused view of human psychology.
The two goals don’t conflict, or, more precisely, (2) isn’t a goal, it’s a decision rule. There is no conflict in having the goal of having your personal desires met and believing that the correct decision rule is to do whatever maximizes the fulfillment of one’s own desires. It’s similar to how in the prisoner’s dilemma, each prisoner wants the other to cooperate, but doesn’t believe that the other prisoner should cooperate.
I think it depends on what’s meant by ‘correct decision rule’. Suppose I came up to you and said that intuitionistic mathematics is ‘correct’, and conventional mathematics is ‘incorrect’; but not in virtue of correspondence to any non-physical mathematical facts; and conventional mathematics is what I want people to use; and using conventional mathematics, and treating it as correct, furthers other everyone else’s goals more too; and there is no deeper underlying rule that rationally commits anyone to saying that intuitionistic mathematics is correct. What then is the content of saying that intuitionistic mathematics is right and conventional is wrong?
I don’t think the other player will cooperate, if I think the other player is best modeled as a rational agent. I don’t know what it means to add to that that the other player ‘shouldn’t’ cooperate. If I get into a PD with a non-sentient Paperclip Maximizer, I might predict that it will defect, but there’s no normative demand that it do so. I don’t think that it should maximize paperclips, and if a bolt of lightning suddenly melted part of its brain and made it better at helping humans than at making paperclips, I wouldn’t conclude that this was a bad or wrong or ‘incorrect’ thing, though it might be a thing that makes my mental model of the erstwhile paperclipper more complicated.
Sorry, I don’t know much about the philosophy of mathematics, so your analogy goes over my head.
It means that it is optimal for the other player to defect, from the other player’s point of view, if they’re following the same decision rule that you’re following. Given that you’ve endorsed this decision rule to yourself, you have no grounds on which to say that others shouldn’t use it as well. If the other player chooses to cooperate, I would be happy because my preferences would have been fulfilled more than they would have been had he defected, but I would also judge that he had acted suboptimally, i.e. in a way he shouldn’t have.
It seems various things are meant by egoism.
Begins with “Egoism can be a descriptive or a normative position.”
It’s also a common attack term :-/
I better stop using it. In fact, I better stop using any label for my value system.
I’d have no problem calling Clippy a consequentialist, but a polar bear would probably lack sufficient introspection. You have to have some inkling of what your values are to have morality. You’re right that it’s a terminology issue, and a difficult one at that.
Disclaimer: I use “pleasure” as an umbrella term for various forms of experiential goodness. Say there’s some utility cap in my brain that limits the amount of pleasure I can get from a single activity. One of these activities is helping other people, and the amount of pleasure I get from this activity is capped in a way that I can only get under 50% of the maximum possible pleasure from altruism. Necessarily this will make me look for sources of pleasure elsewhere. What exactly does this make me? If I can’t call myself an egoist, then I’m at a loss here. Perhaps “egoism” is a reputation hit anyway and I should ditch the word, huh?
Actually, the reason why EA ideas appeal to me is that the pleasure I can get by using the money on myself seems to be already capped, I’m making much more money than I use, and I’m looking for other sources. Since I learned about fuzzies, being actually effective seems to be the only way to get any pleasure from this altruism thing.
Most people don’t do much introspection, so I would expect that. However, you saying this surprises me, since I didn’t expect to be unusual in this crowd.
These are all bad only in retrospect, and explicable by having insufficient information or different values compared to now, except for “normative progress”, which I don’t understand. Acting bad voluntarily would mean I make a choice which I expect to have bad consequences. It might help your understanding to know what part of my decision process I usually identify with.
This brings up another terminological problem. See, I totally understand that I’d better use the word “bad” in a way that other people understand, but if I used it while describing my own decision process, that would lead me to scold myself unnecessarily. I don’t think I voluntarily do anything bad in my brain, but it makes sense for other people to ascribe voluntary action to some of my mistakes, since they don’t really have access to my decision processes. I also have very different private and public meanings for the word “I”. In my private considerations, the role of “I” in my brain is very limited.
I probably should have just asked what you meant, since my brain came up with only the silly interpretation. I think the reason why I got angry at the murder example was the perceived social cost of my actions being associated with murder. Toe stubbing is trivially bad too, you know; badness scales. I made a mistake, but only in retrospect. I’ll make a different mistake next time.
When I first learned how little a life costs, my reaction wasn’t guilt, at least not for long. This led me to think “wow, apparently I care about people suffering much less than I previously thought, wonder why that is”, not “I must be mistaken about my values and should feel horrible guilt for not maximizing my actual values”.
As I previously described, motivation for altruism is purely positive for me, and I’m pretty sure that if I associated EA with guilt, that would make me ditch the idea altogether and look for sources of pleasure elsewhere. I get depressed easily, which makes any negative motivation very costly.
I’m not motivated by the idea of maximization in itself, but it helps my happiness to know how much my money can buy. Your idea of motivational can be another person’s idea of demotivational. I think we should try to identify our audience to maximize impact. As a default I’d still try to motivate people positively, not to associate crappy feelings with the important ideas. Human brains are predictably irrational, and there’s a difference between saying that you can save several lives in a month and be a superhero by donating, and saying that you can be a serial killer by spending the money on yourself.
In Yvain’s liking/wanting/endorsing categorization, “preferred values” corresponds to any values that I approve of. Another way of saying it would be that there are modules in my brain which execute one set of behaviors, whereas another set of modules would prefer to be engaging in some other set of behaviors. Not really different from any situation where you end up doing something that you think that you shouldn’t.
If you approve of these values, why don’t you practice them? It seems to me that approving of a value means you want others to practice it, regardless of whether you want it for yourself.
Did I say I don’t? I’m not signed up for cryonics, for instance.
I mean valuing people equally.
Yes, that’s what my above comment was a reference to. I do my best to practice it as well as I can.
It seems to me you’re looking for temporal consistency. My problem understanding you stems from the fact that I don’t expect my future self to wish I had been any more altruistic than I’m right now. I don’t think being conflicted makes much sense without considering temporal differences in preference, and I think Yvain’s descriptions fit this picture.
I guess you could frame it as a temporal inconsistency as well, since it does often lead to regret afterwards, but it’s more of an “I’m doing this thing even though I know it’s wrong” thing: not a conflict between one’s current and future self, but rather a conflict between the good of myself and the good of others.
Interesting. I wonder if we have some fundamental difference in perceived identity at play here. It makes no sense to me to have a narrative where I do things I don’t actually want to do.
Say I attach my identity to my whole body. There will be no conflict here, since whatever I do is the result of a resolved conflict hidden in the body, and therefore I must want to do whatever I’m doing.
Say I attach my identity to my brain. My brain can want things that my body cannot do, but whatever the brain tells the body to do will be a result of a resolved conflict hidden inside the brain, and I will tell my body to do whatever I want my body to do. Whatever conflict of preferences arises will be a confusion of identity between the brain and the body.
Say I attach my identity to a part of my brain, to this consciousness thing that seems to be in charge of some executive functions, probably residing in the frontal cortex. Whatever this part of the brain tells the rest of the brain will be a result of a resolved conflict hidden inside this part of the brain, and again, whatever I tell the rest of my brain to do will necessarily have to be what I want to tell it to do; but I can’t expect the rest of my brain to do something it cannot do. Whatever conflict arises will be a confusion of identity between this part and the rest of the brain.
I can think of several reasons why I’d want to assume a conflicted identity and almost all of them involve signalling and social convenience.
I think the difference here is that, from the inside, it often doesn’t feel like my actions were the result of a resolved conflict. Well, in a sense they were, since otherwise I’d have been paralyzed with inaction. But when I’m considering some decision that I’m conflicted over, it very literally feels like there’s an actual struggle between different parts of my brain, and when I do reach a decision, the struggle usually isn’t resolved in the sense of one part making a decisive argument and the other part acknowledging that they were wrong. (Though that does happen sometimes.)
Rather it feels like one part managed to get the upper hand and could temporarily force the other part into accepting the decision that was made, but the conflict isn’t really resolved in any sense—if the circumstances were to change and I’d have to make the same decision again, the loser of this “round” might still end up winning the next one. Or the winner might get me started on the action but the loser might then make a comeback and block the action after all.
That’s also why it doesn’t seem right to talk about this as a conflict between current and future selves. That would seem to imply that I wanted thing X at time T, and some other thing Y at T+1. If you equated “wanting” with “the desire of the brain-faction that happens to be the strongest at the time when one’s brain is sampled”, then you could kind of frame it like a temporal conflict… but it feels like that description is losing information, since actually what happens is that I want both X and Y at both times: it’s just the relative strength of those wants that varies.
Ok. To me it most often feels like I’m observing that some parts of my brain struggle and that I’m there to tip the scales, so to speak. This doesn’t necessarily lead to a desirable outcome if my influence isn’t strong enough. I can’t say I feel conflicted about in what direction to tip the scales, but I assume this is just because I’m identifying with a part of my brain that can’t monitor its inner conflicts. I might have identified with several conflicting parts of my brain at once in the past, but don’t remember what it felt like, nor would I be able to tell you how this transformation might have happened.
This sounds like tipping the scales. Are you identifying with several conflicting processes, or are you just expressing yourself in a socially convenient manner? If you’re X, trying to make process A win over process B in your brain, and process B wins in a way that leads to undesirable action, does it make any sense to say that you did something you didn’t want to do?
Your description of tipping the scale sounds about right, but I think that it only covers two of the three kinds of scenarios that I experience:
1. I can easily or semi-easily tip the scale in some direction, possibly with an expenditure of willpower. I would mostly not classify this as a struggle: instead I just make a decision.
2. I would like to tip the scale in some direction, but fail (and instead end up procrastinating or whatever), or succeed but only by a thin margin. I would classify this as a struggle.
3. I could tip the scale if I just decided what direction I wanted to tip it in, but I’m genuinely unsure of what direction I should tip it in. If scenario #1 feels like an expenditure of willpower in order to override a short-term impulse in favor of a long-term goal, and #2 like a failed or barely successful attempt to do so, then #3 feels like trying to decide what the long-term goal should be. Putting it differently, #3 feels like a situation where the set of processes that do the tipping do not necessarily have any preferences of their own, but rather act as the “carriers” of a set of preferences that multiple competing lower-level systems are trying to install in them. (Actually, that description doesn’t feel quite right, but it’s the best I can manage right now.)
I now realize that I hadn’t previously clearly made the distinction between those different scenarios, and may have been conflating them to some extent. I’ll have to rethink what I’ve said here in light of that.
I think that I identify with each brain-faction that has managed to “install” “its” preferences in the scale-tipping system at some point. So if there is any short-term impulse that all the factions think should be overridden given the chance, then I don’t identify with that short-term impulse, but since e.g. both the negative utilitarian and deontological factions manage to take control at times, I identify with both to some extent.
It means different “modules” of your mind have different values, and on reflection you favor one module over the other.
Part of why this still sounds problematic is that we have a hard time unravelling the “superego” (the metaphorical mental module responsible for enforcing nonselfish/pro-social values) from full and complete moral cognition. Thus, many people believe they believe they should be selfless to the point of self-sacrificing, even though, if you cloned them and actually made the clone that selfless, they would not endorse the clone as being a superior version of themselves.
I don’t remember any non-crazy cryonics advocate ever saying otherwise.
I think the scale on which it is done is the main thing here. Currently, cryonics is performed so infrequently that there isn’t much infrastructure for it. So it is still fairly expensive compared to the amount of expected utility: probably close to the value implied by regulatory tradeoffs ($5 million per life). On a large, industrial scale I expect it to be far better value than anything GiveWell is going to find.
This is a good argument, capable of convincing me of a pro-cryonics position, if and only if someone can follow this claim with evidence pointing to a high probability estimate that preservation and restoration will become possible within a reasonable time period.
If it so happens that cryopreservation fails to prevent information-theoretic death, then the value of your cryo-storage facilities filled with corpses will amount to exactly $0 (unless you also preserve the organs for transplants).
At some point, you will have to specialize in cryobiology and neuroscience (with some information science in there too) in order to process the data. I can understand wanting to see the data for yourself, but expecting everyone to process it rationally and in depth before they get on board isn’t necessarily realistic for a large movement. Brian Wowk has written a lot of good papers on the challenges and mechanisms of cryopreservation, including cryoprotectant toxicity. Definitely worth reading up on. Even if you don’t decide to be pro-cryonics, you could use a lot of the information to support something related, like cryopreservation of organs.
Until you have enough information to know, with very high confidence, that information-theoretic death has happened in the best cases, you can’t really assign it all a $0 value in advance. You could perhaps assign a lower value than the cost of the project, but you would have to have enough information to do so justifiably. Ignorance cuts both ways here, and cryonics has traditionally been presented as an exercise in decision-making under conditions of uncertainty. I don’t see a reason that logic would change if there are millions of patients under consideration. (Although it does imply more people with an interest in resolving the question one way or another, if possible.)
I don’t quite agree that the value would be zero if it failed. It would probably displace various end-of-life medical and funeral options that are net-harmful, reduce religious fundamentalism, and increase investment in reanimation-relevant science (regenerative medicine, programmable nanodevices, etc). It would be interesting to see a comprehensive analysis of the positive and negative effects of cryonics becoming more popular. More organs for transplantation could be one effect worth accounting for, since it does not seem likely that we will need our original organs for reanimation. There would certainly be more pressure towards assisted suicide, so that could be positive or negative depending how you look at it.
This just shifts the question to whether promoting cryonics is an effective form of general consequentialist do-gooding. There are a lot of factors to consider regarding large-scale cryonics:
Effects on funding/enthusiasm for new technologies due to alignment of incentives.
Effects on mitigation of existential risks, long-term economic policies, and investments.
How much cheaper it gets when practiced on a large industrial scale.
How much more reliable it becomes when practiced on a large industrial scale.
Displacement of wasteful funeral practices.
Displacement of wasteful end-of-life medical practices.
Reduced religious fundamentalism, due to less belief in innate immortality.
Reduced luxury purchases due to altered time preferences.
Relative number of people who could be saved by cryonics but not by any other available technology.
There are some plausible negative effects to consider as well:
A larger industry has more opportunities for corruption and mistakes, so it would probably be more regulated on a larger scale, resulting in higher administrative costs and restrictions on experimentation.
People might be less concerned with preventing some health problems (while being more concerned with others, including traffic fatalities and heart disease) as the result of risk compensation.
The pressure to cure diseases in the short term could be reduced. Some patients with terminal cases might decide to die earlier than they otherwise would (which would turn out to be permanent if cryonics fails to work for them).
However, the costs aren’t likely to outweigh (or even significantly approach) the savings and benefits in my estimation. In many cases the apparent negatives (e.g. people checking out early, or reducing the overt pressure on scientists to cure cancer ASAP) could be a blessing in disguise (less suffering, less bad data). The regulation aspect probably actually benefits from cryonics being a larger and more visible industry, as the alternative is for regulations on the topic to be passed by non-sympathetic outside industries such as death care (funeral directors associations) and tissue banking (nontransplant anatomical donation organizations).
As it stands, LN2 costs of storage are fairly minimal (around $10/year per neuro patient, going by CI figures, or $1000 per patient assuming 1% interest on a long-term deposit), and can be dramatically reduced by larger scale storage spaces. Most of the money is going into standby, administration, equipment, and so forth, which are also likely to be a) scale friendly and b) already heavily invested in by the conventional medical community.
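A quick sanity check of that endowment arithmetic, assuming (as the figures above do) a $1000 one-time deposit per neuro patient earning roughly 1% per year:

```python
# Back-of-the-envelope check of the storage endowment figures quoted above.
principal = 1000       # one-time long-term deposit per neuro patient (from the comment)
interest_rate = 0.01   # assumed 1% annual return

annual_yield = principal * interest_rate
print(f"annual yield: ${annual_yield:.2f} per patient")  # ~$10/year, matching the quoted LN2 cost
```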
There’s also the long-term financial services aspect. A large chunk is going into long-term savings / low-risk investment. My understanding is that this promotes economic growth.
The funds set aside for cryonics reanimation will eventually go to medical research and infrastructure to reanimate patients. This could take more than one form. Programmed nanorepair and/or uploading are the currently expected forms for today’s patients, but that expectation does not necessarily hold for all future forms of cryonics. We might, at some point in the next few decades, reduce the brain damage factor to a point where biologically based regenerative techniques (tissue printing, stem cells, synthetic microbes, etc.) are plausible enough on their own. These technologies, or at least the basic science needed to achieve them, would obviously have uses outside the domain of cryonics.
So the direct and indirect results of cryonics seem to me to be good enough that a non-hypocritical EA might plausibly think it is a good idea to promote by whatever means they can. Signing up for it oneself might be useful to boost the credibility of discussing it with friends, especially if you have a social group that includes wealthy people who might donate to cryonics research or assist the transition to a larger infrastructure down the road somewhere. The question is whether this can beat something less expensive like an Adwords campaign of equivalent ongoing cost (say $80/month).
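To make that last comparison concrete, one could count expected sign-ups per dollar under explicit assumptions. The sketch below does this with entirely made-up placeholder parameters (click cost, conversion rate, personal influence), not real campaign data:

```python
# Toy comparison: an Adwords campaign vs. the credibility boost of signing up oneself,
# at roughly the same $80/month budget. All parameters are hypothetical placeholders.

monthly_budget = 80.0

# Adwords route (assumed figures)
cost_per_click = 1.00          # assumed average cost per click
signups_per_click = 0.001      # assumed fraction of clicks leading to a sign-up
adwords_signups = (monthly_budget / cost_per_click) * signups_per_click

# Personal sign-up route (assumed figures)
extra_signups_per_year = 0.5   # assumed extra sign-ups per year from added credibility
personal_signups = extra_signups_per_year / 12

print(f"Adwords:  {adwords_signups:.3f} expected sign-ups per month")
print(f"Personal: {personal_signups:.3f} expected sign-ups per month")
# As with any such estimate, the conclusion is driven entirely by the assumptions.
```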