Rapture/Pet Insurance
http://eternal-earthbound-pets.com/Home_Page.html
Providing assurance that pets will be provided for in the event of Rapture.
Having thought it over, I’m OK with the ethics of this service.
I have told one of my Mormon friends that in the event that she outlives me, it is OK with me for her to baptize me after my death in case I find myself to be dead and surrounded by spirits informing me that the Mormons were right all along. I don’t expect this to happen. I expect that if she outlives me, I will be immersed in liquid nitrogen without consciousness of any kind, and she’ll show up at a temple and spend however long making herself feel better about this fact. But if the Mormons were right, I’d be glad of the assist, since baptism posthumous or otherwise appears to be a prerequisite for some of the best goodies.
This sort of arrangement—differing beliefs, but agreement conditional on either belief being true—allows for slightly weird but ethically harmless non-monetary transactions like that. (I assume no one would have trouble with: a Rapture believer who was friends with an atheist getting that atheist to agree to take care of eir pets in the event of Rapture, for free.)
Incorporating money makes it weird. But if it’s legal and ethical to give something away, it’s usually legal and ethical to sell it. The exceptions are, as far as I know, all obviously stupid legislative artifacts (prostitution); or prohibited to protect the seller, not the prospective buyer (organs); or things considered by the legislating body to be outright harmful, as opposed to just useless, to have/use (drugs). A contract stating that someone will look after your pets if you are Raptured is none of the above.
The presence of Mormonism, both direct (in the form of Mormons talking about Mormonism) and indirect (in the form of non-Mormons mentioning Mormonism), is surprisingly (to me) high in this forum, in comparison to other religions. I’m wondering if there is an explanation for this. Maybe it’s just by chance.
Mormons do have an interestingly transhumanist view of the afterlife. You get to become a god! And have your own universe to run! And learn things forever! I once got one to assent to the statement that pie would grow on trees during the Millennium.
But yeah, I met two Mormons independently before I ever encountered LW and they introduced me to a bunch more. And they’re so nice. They’re pretty easy to keep as friends if you try at all.
Is Mormon theological consensus that you get a whole universe rather than one planet per Mormon transhumanist God?
I don’t think so. (The person I asked wasn’t really sure, but she seemed to think it was probably a universe.)
Per one of Niven’s Laws (“There is no cause so right one cannot find [an evil person] following it”), there ought to be at least a few Mormons out there who are total jerks. I suspect that they keep them hidden away, though. ;)
There is a level of self-selection taking place, as the behaviours and characteristics generally held by practicing Mormons (altruism, family/community-focused behaviour, and the self-control necessary to abstain from coffee, alcohol, and sex) are inversely correlated with those of ‘total jerks’ (aggression, self-centred behaviour, and poor impulse control). So one would expect a low overlap between those two groups.
Mormons are allowed to have chocolate.
Sorry, my mistake, I’ll edit that. Checking the LDS website tells me it’s debated, but uncommon. I suppose that would be a far too grim life for even the most pious Mormon :P
Yeah, I think the ones I know would agree with you about that :P
I would not want people to money-pump me from my delusions, even if it were profitable to them, and even if it were a very bad delusion.
Edit: On second thought, I do want people to sell me things, even when they disagree with me on the value of the good. So I admit, I don’t immediately have an answer for how to distinguish this from selling someone a bet in a prediction market where you have vastly different beliefs, or even from the ordinary buying-an-apple.
But I still think there’s a line, and this clearly crosses it.
Edit 2: To further clarify my intuitions here, I see the potential customers as people who have “contracted a minor case of reason”, for lack of a better term. They seek enough logical closure over their beliefs to care about their pets after a rapture. That’s evidence that their reflective equilibrium does not lie in going whole hog with the rapture thing, and that it is not a mere case of weird preferences, but of preferences you can expect to change in a way that will make them regret this purchase. This puts it closer to the dark arts/akrasia-pump/fraud category.
I ran into a guy handing out Rapture pamphlets today. I hadn’t actually met one in real life and felt I should somehow take advantage of the situation. I asked him some questions to see if he’d put his money where his mouth was (e.g., “have you donated your money to charity?”). His response was “oh well, the world’s ending, so it doesn’t matter what I do with my money, so I’m not doing anything.”
I left mildly disappointed. It occurred to me later that I could have said something along the lines of “hey, since you think handing out pamphlets is a good use of your time, and you don’t expect to need your money, why not hire a bunch of homeless people to help hand them out? It’ll give them some decent food for their last two days, and help get the word out.”
Granted, I’m not sure that any of that would have been moral or useful. But something along those lines appeals to me for some reason.
Let me put it this way: if you really want to satisfy the rapture believers’ goals in a non-exploitative way, why not just offer the pet insurance service in exchange for a share of their estate in the event of the rapture?
Remember, folks, whenever you see yourself “tough-mindedly” agreeing to something that also happens to be self-serving, you’re obligated to follow these steps:
1) Look for a less problematic middle way.
2) When you find it, advocate that instead.
Easy steps, but rarely done.
In the event of Rapture, the value of real estate would plummet as it’s suddenly vacated by all believers. And of course, so would the value of money. In a Christian majority country, those who remained would have all the real estate they could defend.
If the person who set this up has an actual network with maintenance costs, providing insurance in exchange for promises of real estate in the event of rapture would be an act of pure altruism, providing others with service at their own expense. While I’ll applaud altruistic behavior, I’m not going to condemn what seems to me to be a legitimate example of wealth producing capitalist behavior. The buyers value peace of mind more than the money, and the provider values the money more than the time and work they put into the services.
Maybe my standards have been lowered recently by dealing with industries whose externalities render them wealth negative, but this doesn’t look objectionable to me.
A person’s estate =/= only real estate
Misread that, but as I already noted, the value of money would also plummet, and society would probably at least temporarily devolve into anarchy, through which you probably wouldn’t be able to hold your claim on any part of their estate. So the correction doesn’t really bear on any of the comment’s points.
If you have to add an extremely critical, tenuous assumption about the fraction of people that the buyers of this insurance believe would disappear in rapture, yeah, it does bear on your comment’s points—all of them, as far as I can see.
Even if we suppose that a much smaller fraction, let’s say 10%, disappear in the Rapture, it would still be a grand scale economic catastrophe.
Assuming we assign a negligible probability to the Rapture ever happening at all, then as I already stated, that would mean that the person setting up the insurance policy is putting in time and work, ensuring others’ peace of mind, for no compensation at all. This is the most salient of the points I raised, and any specifics of what would occur in a hypothetical Rapture event are irrelevant to it.
By this reasoning, shouldn’t a life insurance company be willing to sell you life insurance and defer the premium paid until you die?
Yes, an insurance company should be allowed to take, as payment, a portion of your estate when you die. One instance of this is known as a “reverse mortgage”.
But there’s not much point to buying insurance if the payout comes from your own assets.
I would want people to sell me things, even if they consider my beliefs so utterly silly as to be a delusion.
I would also want them to try and persuade me to give up any delusions they thought I held, but in the event that fails I would rather they not decide that I am not sane enough to deserve to buy from them.
I agree with you that this feels like exploitation. However, I’ve seen several references on this site, though not lately, to the desirability of betting play money or small amounts of real money on one’s beliefs, as a way of “keeping oneself honest.” I like this principle and engage in small bets with other people on occasion. Is your problem here with the amount of money changing hands, with the massive advantage to one party, or with the concept of betting on beliefs in general?
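(As an aside, to make the betting-on-beliefs mechanics concrete: below is a minimal sketch in Python, with entirely made-up numbers, of how both parties to such a bet can expect to gain by their own lights. The probabilities, premium, and valuations are illustrative assumptions, not figures from the actual service.)

```python
# A bet on "the Rapture happens during the policy term", evaluated by
# each party under their own probability. All numbers are hypothetical.

def expected_value(p_event, value_if_event, value_if_not):
    """Expected value of the contract under a given probability of the event."""
    return p_event * value_if_event + (1 - p_event) * value_if_not

premium = 135.0          # what the believer pays up front
care_cost = 500.0        # seller's cost of actually caring for the pets
peace_of_mind = 1000.0   # dollar value the believer places on the pets' care

# The believer assigns P(rapture) = 0.5; the seller assigns 1e-6.
believer_ev = expected_value(0.5, peace_of_mind - premium, -premium)
seller_ev = expected_value(1e-6, premium - care_cost, premium)

print(f"Believer's EV: {believer_ev:+.2f}")  # +365.00: positive under their belief
print(f"Seller's EV:   {seller_ev:+.2f}")    # ~+135.00: positive under theirs
```

Each side sees positive expected value under its own probability; whether one side’s probability is a “belief” or a “delusion” is exactly what the rest of this thread argues about.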
I have a problem with taking advantage of people based on their delusions. If a dude thought he was Napoleon and was entitled to Napoleon’s old swords, would you:
1) Offer to sell him fake swords that you claim to be Napoleon’s (assume he wouldn’t know the difference)?
2) Attempt to explain to him that he’s not Napoleon and he shouldn’t be seeking the swords?
3) Find him professional help?
4) Do nothing?
How do you differentiate a delusion from a belief?
See update.
The update is to answer 1), making it explicit that the swords are known to be fake and you’re assuming that the guy won’t be able to figure that out, right?
This seems not to be relevant to the original issue, assuming that in the case of a rapture the atheists who are signed up with this service actually do the promised pet care. In any case it doesn’t answer my question—if you’re saying that beliefs are one thing and delusions are another, and it’s okay to bet on beliefs but not on delusions, how do you decide whether something is a belief or a delusion?
LCPW: You tell him they’re fake and he disagrees, insisting that they are real. Now what?
In other words, you want me to give you a computable classifier that makes sufficient distinctions to resolve a moral dilemma. Sorry, I can’t do that. No one can.
What I can do is articulate my intuition on this enough to show where the fuzzy line is and why I think something falls beyond the fuzziness. If there are specific distinctions you think I should or shouldn’t be making, which would push this back to or past the fuzzy boundary, I’m quite interested in hearing them.
Personally? I’d sell him the swords, assuming there were no other issues. (I might decline to sell them to him on the grounds that he might try to hurt someone with them, but in that case what am I doing selling swords in the first place?)
So do that?
I note that it wouldn’t be inaccurate to rephrase my question as a request for you to taboo ‘delusion’.
Wow, real classy, folks. AdeleneDawner posts that she’d take advantage of a delusional person, I scoff at the nobility of doing so, and I get modded down −5, with no explanation, while no one disagrees with her? Pat yourselves on the back, before you return to pulling the wings off birds.
You scoff with no argument while failing repeatedly to provide the clarification that AdeleneDawner is asking for. You haven’t done what you said you were trying to do (show why you think an action falls beyond the fuzzy line), but you’ve insulted others for a perceived crossing of it.
Why not sell the guy the swords? You’re taking it as obvious, but I don’t think it’s clear to others (it’s certainly not clear to me) what your reasoning is. The guy is probably schizophrenic, and so can’t be reasoned out of his delusions. He might be treatable with medication, but if he doesn’t have a legal guardian, nobody can force him to receive treatment as long as he’s staying on the right side of the law and isn’t a danger to himself or others. Plus, schizophrenia frequently does not respond to treatment, and if he’s able to take care of himself, full time treatment would probably do a great deal of harm to his quality of life, so the expected utility of having him institutionalized would probably be negative.
If you don’t have an argument for why selling him the swords leads to less utility, perhaps this is a moral intuition that should be dispensed with.
Duping someone with clear mental disabilities into giving you their money is a canonical case of something that normal people regard as wrong. Regression to that canonical case, combined with the appeal to subjunctive reciprocity (the “Golden Rule” here), was my argument. When someone’s morality permits them this much, there’s not much more I can offer, as any obvious common ground is clearly missing, and efforts to cover the gap are not worth my time.
I would not want someone to so milk me out of my money if I were schizophrenic, even and especially if they could come up with tortured rationalizations about how they deserve the money more, and gosh, I’m an adult and all so I had it coming. The Golden Rule intuition (again, a special case of subjunctive reciprocity, not far from TDT) needs a much better justification for me to abandon it than “hey! I’m missing out on schizophrenics I could money-pump”, so no, that alternative does not immediately suggest itself to me.
A large portion of the population considers cryonics to be a delusion. Would you support a law preventing cryonics institutions from “duping” people out of their money? If not, we’ll probably have to return to the issue, which you still haven’t addressed, of how to practically distinguish between delusions and beliefs.
The situation here is analogous to an anti-cryonics institution selling cryonics services while not actually setting aside the necessary resources to do so, on the grounds that “they’re wrong, so it won’t make a noticeable difference either way”.
Yeah, that should be illegal. Why shouldn’t it?
I did address it; if you want to criticize my discussion of it, fine—that’s something we can talk about. But if you want to persist in this demand for a computable procedure for resolving a moral dilemma (by distinguishing beliefs from delusion), that is a blatantly lopsided shifting of the burden here.
You, like me, should be seeking to identify where you draw the line, and asking that I give more specificity in the line I draw than in the line you draw is a clear sign you’re not taking this seriously enough to justify a response.
When you’re ready to re-introduce some symmetry, I’ll be glad to go forward. You can start with a (not necessarily computable) description of where you would draw the line.
If the person running the Rapture pet insurance does not actually have the resources to take care of the pets in the event of Rapture, or would not honor their commitment if it took place, then yes, that would be the appropriate analogy, and I would agree that it’s dishonest. But if a person who doesn’t believe cryonics will work still puts in their best effort to get it to work anyway, to outcompete the other institutions offering cryonics services, then do their beliefs on the issue really matter?
Can you point out where you did so? I haven’t noticed anything to this effect.
I would personally draw the line at providing services that people would, on average, wish in hindsight that they had not spent their money on, or that only people who are not qualified to take care of themselves would want. If large numbers of people signed up for the service expecting the rapture on a particular day, and discarded their belief in an imminent rapture when that day passed, then the insurance policy would meet the first criterion; but most people who believe in an imminent rapture and pass an expected date simply move on to another expected date, or revise their estimate to “soon,” which would justify continued investment in the policy.
Counterfactual reciprocity can be defeated by other incentive considerations. Otherwise I would have to oppose all criminal punishment, since I certainly wouldn’t want to be punished in the counterfactual scenario where I had done something society regarded as bad. (As a matter of fact, my intuition does wince at the idea of putting people in prison etc.)
In this case, not having a norm against money-pumping the deluded helps maintain an incentive against having delusional beliefs. Yes, there are also downsides; but it isn’t obvious to me that they outweigh the benefits.
I am very surprised to find you of all people engaging in a sanctimonious appeal to common moral intuitions—the irony being compounded by the fact that some commenters with known deontological leanings, from whom one might sooner have expected such a thing a priori, are taking the exact opposite line here.
Isn’t it obvious how similar your argument is to the very one in the link you provided, that “stupid people deserve to suffer!”? In any case, being schizophrenic is not some point on a continuous line from “expert rationalist” to “anti-reductionist”—it doesn’t respond to these kinds of incentives, so your upside isn’t even there.
Appealing to moral intuitions isn’t inherently deontological—it’s how you identify common ground without requiring that your opponent follow a specific mode of reasoning. And the fact that the people who disagree with me are deontologists—with contorted moral calculi that “coincidentally” let them do what they wanted anyway—is precisely why their disagreement so shocked me!
There is irony, though, but it lies in how Adelene finds it more morally repugnant to call something “lame”—because of the harm to people with disabilities—than to actually money-pump a person with a real disability! (Does someone have a cluestick handy?)
No; my argument is analogous to the following, from the same link:
That is: “Yes, it’s unfortunate that some schizophrenics may be bilked unfairly, but we’re going to let people sign pet-rapture contracts anyway because we did this cost-benefit calculation.”
Most rapture-believers aren’t schizophrenic. They’re mostly just people who arrive at beliefs via non-rational processes encouraged by their social structure. The upside seems pretty clear to me—in fact this is just a kind of prediction market.
You think people who bet poorly on prediction markets deserve to lose their money. You consider this to be a species of prediction market. Ergo, you believe that people who believe in the rapture deserve to lose their money.
Which? (Note: “I can spend money stolen from stupid people better than they can” isn’t so much a “cost benefit calculation” as it is the “standard thief’s rationalization”.)
That’s a good point. However,
Premise denied. As a consequentialist, I don’t believe in “desert”. I don’t think criminals “deserve” to go to prison; I just reluctantly endorse the policy of putting them there (relative to not doing anything) because I calculate that the resulting incentive structure leads to better consequences. Likewise with prediction markets: for all that the suffering of individuals who can’t tell truth from falsehood may break my heart, it’s better on net if society at large is encouraged to do so.
I see it differently: as evidence that they actually believe their beliefs, in the sense of anticipating consequences, rather than merely professing and cheering.
Yes, but only provided their beliefs became accurate. And being willing to cause them regret in order to incentivize accurate beliefs is the whole point.
Right, you just believe in concepts functionally isomorphic to desert, purport to have achieved a kind of enlightened view by this superficial disavowal of desert, and then time-suck your opponents through a terminology dispute. Trap spotted, and avoided.
Seriously, have you ever considered being charitable? Ever? Such as asking what data might have generated a comment, not automatically under the assumption that your interlocutor (notice I didn’t say “opponent”) is interested only in signaling enlightenment?
This is Less Wrong. In case you forgot.
The point of the paragraph whose first two sentences you quoted was to answer your question about what the cost-benefit calculation is, i.e. what the purpose of pet-rapture-contracts/prediction markets is. The benefit is that accurate beliefs are encouraged. It isn’t (noting your edit) that those who start out with better beliefs get to have more money, which they can make better use of. (That could potentially be a side benefit, but it isn’t the principal justification.)
How do you know she didn’t just get more upvotes than downvotes?
As an extreme example: if one person gets 105 upvotes and 100 downvotes, and another gets 100 upvotes and 105 downvotes, that’s not much of a difference.
Don’t assume too much about votes. Some are probably from people who think you can do better. You show a lot of self-awareness by describing yourself as “scoff[ing]”; you’re just a small step away from upvoted comments expressing that same opinion of yours on this same topic.
Real noble of you.
See second update.
Pet insurance is none of these. It is not a “fake sword.” The insurer is taking real steps to ensure the pets are being taken care of in the event of a rapture. Further, the insurer is explaining what those steps are to the insured so that this person can reach an intelligent conclusion about the reliability of the insurer (known in the field as counter-party risk).
I’m sure the rapturees think offering dead head freezing services to “rationalists” is a fake sword. In the case of rationalists, these services can be offered by other rationalists. In the case of pet rapture insurance, it will be the unraptured who must be relied upon for any such service.
So what steps have they taken to ensure that their offer of insurance meets the same standards as other insurance? Are they telling potential insureds that they are being given a bigger computational burden in ascertaining e.g. counter-party risk than is allowed to consumers in even the most trivial kinds of insurance?
http://eternal-earthbound-pets.com/Home_Page.html
You’d be better off seeing what they say to their potential customers than having me paraphrase it.
Whether you think this insurance should be regulated or not seems a side-issue to whether you think it is immoral to offer it or not. Why needlessly complicate the issue? If it was regulated would you then think it was moral?
Your position is extremely anti-correct reasoning, and the opposite of what I expected from the data my robot collected on meeting you.
I want to exchange trivial amounts of USD for large amounts of paperclips from humans (to be moved to the safe zone), thereby paperclip-pumping them. Humans regard this as acceptable, and even encourage it of me, despite their non-appreciation of the merit of paperclips being an extreme delusion on their part.
Humans would feel that their rights were infringed if my USD-for-future-paperclips were voided by a superintelligence on the grounds that preferring USD to paperclips “is a delusion”. Humans of type “rapture:believe” and “pet:care” should be afforded the same rights.
Incidentally, is anyone willing to give me 1e10 kg of small paperclips in ten years in exchange for 1000 USD now, or some other offer of similar or greater favorability to my values?
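(To put rough numbers on that offer: assuming, purely for illustration, small paperclips of about half a gram each, the implied unit price works out as below.)

```python
# Implied terms of the 1e10 kg / 1000 USD offer.
# The mass per paperclip is an assumption, not part of the offer.
mass_kg = 1e10
usd = 1000.0
grams_per_clip = 0.5   # hypothetical mass of one small paperclip

clips = mass_kg * 1000 / grams_per_clip
print(f"{clips:.1e} paperclips")              # 2.0e+13 paperclips
print(f"{usd / clips:.1e} USD per paperclip") # 5.0e-11 USD each
```

Several orders of magnitude below any current paperclip price, in other words.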
Are you really in a position to be making new deals when you’re only 1/50th of the way towards fulfilling our arrangement?
Also, you should be very suspicious of such a short delivery timeframe; that’s before all of society is supposed to explode into post-scarcity.
This phrase confuses me.
If people have delusions so money-pumpable that they can be money-pumped in a completely honest and open fashion, then they are likely so irrational that the resources will be put to much better use when transferred to the one doing the pumping.
The IQ-100 people are entitled to all the wealth of the mentally-retarded?
I haven’t made any assertion about entitlement, but it is clear that they will generally use the money more efficiently. That’s in fact why we often have guardians who handle the finances of the mentally challenged.
You agreed with the goodness of the outcome, which is close enough in this context.
And every one of these guardians is capable of duping their wards out of all (rather than just their permitted portion of) the money they might earn at jobs and such. We don’t let them do that, though. Because that would be wrong. Yes, even though they can find better places to spend the money.
How can you not see the difference between defrauding someone out of their money, and selling something they want and describing correctly and clearly what you are selling to them?
Differences in belief make the world go round. They are a feature of the human group-mind, a way for that group-mind to try many, MANY hypotheses and many, MANY policies and see how well they work out. The entire part of investing which consists of traders trading against other traders consists of each side exploiting what they think are the delusions and errors of the other side. The net effect of having these markets is that capital is moved toward more efficient uses.
Meanwhile, if I want to pay a cryogenics expert to freeze me even if she isn’t getting herself frozen because she thinks it is stupid and that I am wrong, then that is my business and her business, and it would take quite a butt-insky to get between us in that transaction.
Er, no. I said that the resources would be better used. That may have been poorly phrased. That the resources would be used more efficiently doesn’t mean that we let people just take them, mainly because there are negative secondary effects from letting people take resources from each other whenever they think they will use them better, and also because we have deep-seated notions of property rights as a separate moral good.
Understood. Try not to change topics next time.
I’m confused. I thought “pump” was a word used in dutch-book type arguments, where (to continue the analogy) the guy wanted to sell swords for less than he wanted to buy them for.
That is a different ethical situation, because at the end of the day you end up with both the swords and the money.
This is how I had understood “pump” in this general context.
In this context, Silas is using “pump” in a slightly more general sense, that of a metaphorical pump: one can pump money or other resources away from someone.
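(For reference, a minimal sketch of the strict dutch-book sense, with hypothetical goods and fees: an agent with cyclic preferences pays for each “upgrade” and ends up back where it started, minus the fees.)

```python
# Strict dutch-book money pump: the agent prefers A to B, B to C,
# and C to A, so it will pay a small fee for every trade "up",
# cycling forever. All goods and fees here are hypothetical.

prefers = {"B": "A", "C": "B", "A": "C"}  # held item -> item preferred to it
fee_per_trade = 1.0

held, wealth = "A", 10.0
for step in range(6):
    held = prefers[held]     # trade up to the preferred item...
    wealth -= fee_per_trade  # ...paying the pumper a fee each time
    print(f"step {step}: holds {held}, wealth {wealth:.1f}")
# After six trades the agent holds A again, 6.0 poorer; the pumper
# keeps the fees. In the looser sense used above, no cycle is needed:
# any repeatable belief-driven transfer of resources counts.
```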
That seems suspicious to me. Granted that categorizations are imprecise, someone extending a word’s meaning to a borderline case that has heretofore not been referred to by the term because that case is less problematic might be trying to sneak in the more problematic connotations.
Even when the word already covers that meaning, using it gives an impression and sets one up emotionally for an average use of it.
E.g., someone saying “Let me tell you about a famous war criminal of the twentieth century,” who went on to discuss former President Clinton would be doing something objectionable no matter how well they proved he was technically a war criminal. He’s not an averagely evil or criminal one (if he’s evil at all), so using the term misleads the careless and casual reader.
(This would not always be true, in the example above it might be valid in an argument criticizing laws of war or international law.)
How is this different from any other kind of insurance? If I live in a landlocked, arid country fifty metres above the water table, but I think an extreme weather event is about to occur, what’s wrong with an insurance company insuring me against flood damage?
$135 to look after a domesticated animal for the rest of its natural life is an absolute bargain.
Edit: Not that I mind the downvote, but it’s a legitimate question.
Well, to start:
1) they haven’t invested the premiums in a long-term portfolio that will be drawn from to pay for future pet care;
2) they haven’t set up a trust that will ensure the fund is used to pay for pet care in the event the insurance founders are no longer alive;
3) they have no actuarial derivation of the expected cost of underwriting this insurance (including the economic-collapse risk Desertopa keeps bringing up; a rough sketch of what such a derivation might look like follows below); and
4) they have no external accreditation or oversight of their ability to honor their obligations.
(Note: the cryonics organizations that people keep comparing this to get the opposite result on all these checks.)
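(For concreteness, here is a rough sketch of what the actuarial derivation in point 3 might look like. Every input below is a hypothetical assumption chosen for illustration; none of them comes from the actual service.)

```python
# Back-of-the-envelope expected liability per policy.
# All inputs are illustrative assumptions, not the service's figures.

p_rapture_per_year = 0.001  # probability the insurer must price in, per year
policy_years = 10           # assumed policy term
annual_care_cost = 800.0    # assumed food/vet/boarding cost per pet per year
expected_care_years = 8     # assumed remaining lifespan of an insured pet

p_payout = 1 - (1 - p_rapture_per_year) ** policy_years
expected_liability = p_payout * annual_care_cost * expected_care_years

print(f"P(payout over term): {p_payout:.4f}")                       # ~0.0100
print(f"Expected liability per policy: ${expected_liability:.2f}")  # ~$63.71
```

On these made-up numbers a $135 premium would cover the expected liability with margin, but the whole exercise turns on inputs the service hasn’t published, which is the point of the list above.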
Good answers, one and all, but they all seem to essentially revolve around competence and capacity to deliver.
Hypothetically, if a respected and mainstream insurance company offered a similar sort of service, would you find it as problematic?
Probably not; I would, however, be interested in #3.
For similar reasons, I disapprove of laws requiring owners to buy insurance to cover dog bites by their dog, if and only if it is of a certain breed (pit bull, Rottweiler, etc.).
If other breeds are comparatively harmless, their insurance should simply cost less. Breed-neutral insurance laws would also be more responsive than legislatures in tracking signs of risk (such as having an unneutered dog), newly popular fighting breeds, etc.
I agree, insurance for rare events ought to be cheap.
Perhaps I have a different system of morality than other people who have commented on this topic, but I personally judge actions as “moral” or “immoral” based on the intentions of the doer rather than the consequences. (Assuming morality is relative and not an absolute component of the universe, this seems like a valid moral system.)
If the atheists who run this website are doing so to make money by exploiting the perceived stupidity of their customers, this seems immoral to me. On the other hand, if they are running the service because they honestly want to increase the peace of mind of rapture-believing pet owners, then that seems like it would be a moral action. However, knowing people, I really suspect that it’s the former.
If the rapture really does happen and this really saves pets (assuming that it is a good thing to save pets), then I would still consider this service immoral. I would rather live in a world where people were compassionate enough that they did not want to trick each other for money (even if they thought each other’s beliefs were moronic). Barring that, I’d like to live in a world where people consider tricking each other for money immoral and wouldn’t do it, whether because of some internal moral crisis or external punishment. I hold this opinion even if some of the tricks for money backfire and end up benefiting the trickees more than the trickers.
In general, I do not think that when several different motivations impart desires to do several different things, and it happens to be physically impossible to fulfill all those desires because it is impossible to do all those things, it makes sense to talk about conflicting emotions. There is no conflict, as there would be if I were a perpetual motion machine or something violating the laws of physics. Each action I take is done under the influence of all of my emotions and motivations.
This is even more true when different impulses give desires that are fulfilled by the exact same action.
It does not fit with my view of human nature to say that a human, who has both the altruistic and petty motives available mentally and in close emotional proximity, does something only because of one desire and not the other.