“then rationality doesn’t demand that I stop spending money on myself in order to be good.” Well, yes, because whether you’re “being” good is somewhat irrelevant. The objective conditions of the world don’t change based on what you’re “being” ontologically; reality is affected by what you do.
My terminal goals involve the alleviation of suffering, with the minimization of bad habits being an instrumental goal. It so happens that spending money on cryonics is unlikely to be the best way to advance that goal (or so it appears; no strong arguments have been made in its favor as of today, which is what I initially asked for).
“you’ve ended up considering normal human behavior to be bad and you have a standard which no person can meet (including yourself).” Normality is not a terminal value of mine, and I doubt it is one of yours. Having an impossible-to-reach goal would be absurd IF success/failure were binary. But it really isn’t. There is so much suffering in the world that being halfway, or even a tenth of the way, successful still means a large reduction in suffering.
“LW tries to get people to support MIRI based on rationality, multiplying utility, and ignoring warm fuzzies. Someone who believes all of that, but doesn’t believe the part about the AI being a danger, would end up in EA, so in practice LW is associated with EA.” Your argument is of the form: A, B, and C result in X, but A and B without C result in Y, so “in practice” X and Y are associated. But this is bizarre when a lot of different things can result in Y, things at best tangentially related to A and B, and completely independent of the truth of C. Plain ol’ egalitarianism comes to mind, as do Rawls and libertarian theology.
I will ignore the ad hominem.
You still have not addressed the point that adopting new behaviors is qualitatively different, psychologically, from getting rid of old ones. And from an ethical, non-egotistical perspective, this difference is quite significant.
“You just said that you doubt that “more than a third” of EAs identify as LW-rationalist. Even aside from the fact that you can be one without identifying as one, one third shows a huge influence. I wouldn’t find that one third of vegetarians are LW-rationalists, or 1⁄3 of atheists, for instance, even though those are popular positions here.” I feel like you’re making a pretty elementary subset error there…
“The very fact that you’re asking how to reconcile cryonics with EA shows that cryonics is not in the category of psychologically easy to give up things. Otherwise you’d just avoid cryonics immediately.” No, I currently see no inside-view need to go for cryonics, emotionally or otherwise. Enough people I respect have signed up for cryonics that my outside view was that they knew something I did not. That does not appear to be the case, and I see no reason to consider this further, at least until I grow substantially older or sicker. Nor do I see a need to continue this conversation.
Happy New Year.