Duping someone with clear mental disabilities into giving you their money is a canonical case of something that normal people regard as wrong. Reducing the present case to that canonical one, combined with the appeal to subjunctive reciprocity (the “Golden Rule” here), was my argument. When someone’s morality permits them this much, there’s not much more I can offer, as any obvious common ground is clearly missing, and efforts to bridge the gap are not worth my time.
I would not want someone to milk me out of my money like that if I were schizophrenic, even and especially if they could come up with tortured rationalizations about how they deserve the money more, and gosh, I’m an adult and all, so I had it coming. The Golden Rule intuition (again, a special case of subjunctive reciprocity, not far from TDT) needs a much better justification for me to abandon it than “hey! I’m missing out on schizophrenics I could money-pump”, so no, that alternative does not immediately suggest itself to me.
A large portion of the population considers cryonics to be a delusion. Would you support a law preventing cryonics institutions from “duping” people out of their money? If not, we’ll probably have to return to the issue, which you still haven’t addressed, of how to practically distinguish between delusions and beliefs.
Would you support a law preventing cryonics institutions from “duping” people out of their money?
The situation here is analogous to an anti-cryonics institution selling cryonics services while not actually setting aside the necessary resources to do so, on the grounds that “they’re wrong, so it won’t make a noticeable difference either way”.
Yeah, that should be illegal. Why shouldn’t it?
If not, we’ll probably have to return to the issue, which you still haven’t addressed, of how to practically distinguish between delusions and beliefs.
I did address it; if you want to criticize my discussion of it, fine—that’s something we can talk about. But if you want to persist in this demand for a computable procedure for resolving a moral dilemma (by distinguishing beliefs from delusion), that is a blatantly lopsided shifting of the burden here.
You, like me, should be seeking to identify where you draw the line, and demanding more specificity of the line I draw than of the line you draw is a clear sign you’re not taking this seriously enough to justify a response.
When you’re ready to re-introduce some symmetry, I’ll be glad to go forward. You can start with a (not necessarily computable) description of where you would draw the line.
The situation here is analogous to an anti-cryonics institution selling cryonics services while not actually setting aside the necessary resources to do so, on the grounds that “they’re wrong, so it won’t make a noticeable difference either way”.
Yeah, that should be illegal. Why shouldn’t it?
If the person running the Rapture pet insurance does not actually have the resources to take care of the pets in the event of Rapture, or would not honor their commitment if it took place, then yes, that would be the appropriate analogy, and I would agree that it’s dishonest. But if a person who doesn’t believe cryonics will work still puts in their best effort to get it to work anyway, to outcompete the other institutions offering cryonics services, then do their beliefs on the issue really matter?
I did address it; if you want to criticize my discussion of it, fine—that’s something we can talk about. But if you want to persist in this demand for a computable procedure for resolving a moral dilemma (by distinguishing beliefs from delusion), that is a blatantly lopsided shifting of the burden here.
Can you point out where you did so? I haven’t noticed anything to this effect.
I would personally draw the line at providing services that people would, on average, wish in hindsight that they had not spent their money on, or that only people who are not qualified to take care of themselves would want. If large numbers of people signed up for the service expecting the rapture on a particular day, and discarded their belief in an imminent rapture when that day passed uneventfully, then the insurance policy would meet the first criterion. But most people who believe in an imminent rapture and pass an expected date simply move on to another expected date, or revise their estimate to “soon,” which would make continued investment in the policy seem worthwhile to them.
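To make that line concrete, here’s a minimal sketch of the criterion in Python. All figures are invented and the function names are mine, purely for illustration; this is just the decision rule, not a claim about real data:

```python
# Minimal sketch of the "average hindsight regret" criterion.
# All numbers and names are hypothetical, for illustration only.

def average_hindsight_regret(purchases):
    """Mean of (price paid minus the value the buyer assigns in hindsight)."""
    return sum(p["price"] - p["hindsight_value"] for p in purchases) / len(purchases)

def service_is_acceptable(purchases):
    # A service passes if buyers do not, on average, regret the purchase.
    return average_hindsight_regret(purchases) <= 0

# Date-specific rapture policy: believers abandon the belief the day after,
# so the coverage is worthless to them in hindsight.
date_specific = [{"price": 100, "hindsight_value": 0}] * 10

# Open-ended "soon" policy: believers revise the date and keep valuing coverage.
open_ended = [{"price": 100, "hindsight_value": 120}] * 10

print(service_is_acceptable(date_specific))  # False: fails the criterion
print(service_is_acceptable(open_ended))     # True: passes it
```

On this sketch the date-specific policy trips the first criterion while the open-ended one does not, mirroring the distinction above.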
Counterfactual reciprocity can be defeated by other incentive considerations. Otherwise I would have to oppose all criminal punishment, since I certainly wouldn’t want to be punished in the counterfactual scenario where I had done something society regarded as bad. (As a matter of fact, my intuition does wince at the idea of putting people in prison etc.)
In this case, not having a norm against money-pumping the deluded helps maintain an incentive against having delusional beliefs. Yes, there are also downsides; but it isn’t obvious to me that they outweigh the benefits.
I am very surprised to find you of all people engaging in a sanctimonious appeal to common moral intuitions—the irony being compounded by the fact that some commenters with known deontological leanings, from whom one might sooner have expected such a thing a priori, are taking the exact opposite line here.
In this case, not having a norm against money-pumping the deluded helps maintain an incentive against having delusional beliefs. Yes, there are also downsides; but it isn’t obvious to me that they outweigh the benefits.
Is it not obvious how similar your argument is to the very one in the link you provided, that “stupid people deserve to suffer!”? In any case, being schizophrenic is not some point on a continuous line from “expert rationalist” to “anti-reductionist”; schizophrenia doesn’t respond to these kinds of incentives, so your upside isn’t even there.
I am very surprised to find you of all people engaging in a sanctimonious appeal to common moral intuitions—the irony being compounded by the fact that some commenters with known deontological leanings, from whom one might sooner have expected such a thing a priori, are taking the exact opposite line here.
Appealing to moral intuitions isn’t inherently deontological; it’s how you identify common ground without requiring that your opponent follow a specific mode of reasoning. And the fact that the people who disagree with me are deontologists, ones with contorted moral calculi that “coincidentally” let them do what they wanted anyway, is precisely why their disagreement so shocked me!
There is irony here, though: it lies in how Adelene finds it more morally repugnant to call something “lame”, because of the harm to people with disabilities, than to actually money-pump a person with a real disability! (Does someone have a cluestick handy?)
Is it not obvious how similar your argument is to the very one in the link you provided, that “stupid people deserve to suffer!”?
No; my argument is analogous to the following, from the same link:
“Yes, sulfuric acid is a horrible painful death, and no, that mother of 5 children didn’t deserve it, but we’re going to keep the shops open anyway because we did this cost-benefit calculation.”
That is: “Yes, it’s unfortunate that some schizophrenics may be bilked unfairly, but we’re going to let people sign pet-rapture contracts anyway because we did this cost-benefit calculation.”
In any case, being schizophrenic is not some point on a continuous line from “expert rationalist” to “anti-reductionist”; schizophrenia doesn’t respond to these kinds of incentives, so your upside isn’t even there.
Most rapture-believers aren’t schizophrenic. They’re mostly just people who arrive at beliefs via non-rational processes encouraged by their social structure. The upside seems pretty clear to me—in fact this is just a kind of prediction market.
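To spell out the analogy, here is a toy sketch in Python (all numbers invented; the names are mine, not anything standard): the contract is structurally a binary bet, with the premium-to-payout ratio playing the role of a market price.

```python
# Toy sketch of a pet-rapture contract as a binary prediction-market bet.
# All numbers are invented for illustration.

premium = 135.0        # paid up front by the believer
payout_value = 5000.0  # value of the pet care delivered if the rapture occurs

# The price implies a probability, just as a prediction-market price does.
implied_probability = premium / payout_value  # 0.027

def buyer_expected_value(p, premium=premium, payout=payout_value):
    """Expected value of the contract for a buyer with subjective probability p."""
    return p * payout - premium

print(f"Implied P(rapture): {implied_probability:.3f}")
print(buyer_expected_value(0.9))    # 4365.0: a true believer expects to profit
print(buyer_expected_value(0.001))  # -130.0: a skeptic expects to lose the premium
```

Whichever side holds the more accurate beliefs profits in expectation, which is exactly the incentive mechanism being appealed to.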
No; my argument is analogous to the following, from the same link: …The upside seems pretty clear to me—in fact this is just a kind of prediction market.
You think people who bet poorly on prediction markets deserve to lose their money. You consider this to be a species of prediction market. Ergo, you believe that people who believe in the rapture deserve to lose their money.
That is: “Yes, it’s unfortunate that some schizophrenics may be bilked unfairly, but we’re going to let people sign pet-rapture contracts anyway because we did this cost-benefit calculation.”
Which? (Note: “I can spend money stolen from stupid people better than they can” isn’t so much a “cost-benefit calculation” as it is the “standard thief’s rationalization”.)
Most rapture-believers aren’t schizophrenic. They’re mostly just people who arrive at beliefs via non-rational processes encouraged by their social structure.
That’s a good point. However,
I see the potential customers as people who have “contracted a minor case of reason”, for lack of a better term. They seek enough logical closure over their beliefs to care about their pets after a rapture. That’s evidence that their reflective equilibrium does not lie in going whole hog with the rapture thing, and it is not a mere case of weird preferences, but preferences you can expect to change in a way that will make them regret this purchase. This puts it closer to the dark arts/akrasia-pump/fraud category.
You think people who bet poorly on prediction markets deserve to lose their money
Premise denied. As a consequentialist, I don’t believe in “desert”. I don’t think criminals “deserve” to go to prison; I just reluctantly endorse the policy of putting them there (relative to not doing anything) because I calculate that the resulting incentive structure leads to better consequences. Likewise with prediction markets: for all that the suffering of individuals who can’t tell truth from falsehood may break my heart, it’s better on net if society at large is encouraged to tell the two apart.
I see the potential customers as people who have “contracted a minor case of reason”, for lack of a better term. They seek enough logical closure over their beliefs to care about their pets after a rapture. That’s evidence that their reflective equilibrium does not lie in going whole hog with the rapture thing,
I see it differently: as evidence that they actually believe their beliefs, in the sense of anticipating consequences, rather than merely professing and cheering.
and it is not a mere case of weird preferences, but preferences you can expect to change in a way that will make them regret this purchase
Yes, but only provided their beliefs become accurate. And being willing to cause them regret in order to incentivize accurate beliefs is the whole point.
Premise denied. As a consequentialist, I don’t believe in “desert”.
Right, you just believe in concepts functionally isomorphic to desert, purport to have achieved a kind of enlightened view by this superficial disavowal of desert, and then time-suck your opponents with a terminology dispute. Trap spotted, and avoided.
Seriously, have you ever considered being charitable? Ever? Such as asking what data might have generated a comment, rather than automatically assuming that your interlocutor (notice I didn’t say “opponent”) is interested only in signaling enlightenment?
This is Less Wrong. In case you forgot.
The point of the paragraph whose first two sentences you quoted was to answer your question about what the cost-benefit calculation is, i.e. what the purpose of pet-rapture-contracts/prediction markets is. The benefit is that accurate beliefs are encouraged. It isn’t (noting your edit) that those who start out with better beliefs get to have more money, which they can make better use of. (That could potentially be a side benefit, but it isn’t the principal justification.)