The update is to answer one, making it explicit that the swords are known fake and you’re assuming that the guy won’t be able to figure that out, right?
This seems not to be relevant to the original issue, assuming that in the case of a rapture the atheists who are signed up with this service actually do the promised pet care. In any case it doesn’t answer my question—if you’re saying that beliefs are one thing and delusions are another, and it’s okay to bet on beliefs but not on delusions, how do you decide whether something is a belief or a delusion?
The update is to answer one, making it explicit that the swords are known fake and you’re assuming that the guy won’t be able to figure that out, right?
LCPW: You tell him they’re fake and he disagrees, insisting that they are real. Now what?
This seems not to be relevant to the original issue, assuming that in the case of a rapture the atheists who are signed up with this service actually do the promised pet care. In any case it doesn’t answer my question—if you’re saying that beliefs are one thing and delusions are another, and it’s okay to bet on beliefs but not on delusions, how do you decide whether something is a belief or a delusion?
In other words, you want me to give you a computable classifier that makes sufficient distinctions to resolve a moral dilemma. Sorry, I can’t do that. No one can.
What I can do is articulate my intuition on this enough to show where the fuzzy line is and why I think something falls beyond the fuzziness. If there are specific distinctions you think I should or shouldn’t be making, which would push this back to or past the fuzzy boundary, I’m quite interested in hearing it.
The update is to answer one, making it explicit that the swords are known fake and you’re assuming that the guy won’t be able to figure that out, right?
LCPW: You tell him they’re fake and he disagrees, insisting that they are real. Now what?
Personally? I’d sell him the swords, assuming there were no other issues. (I might decline to sell them to him on the grounds that he might try to hurt someone with them, but in that case what am I doing selling swords in the first place?)
What I can do is articulate my intuition on this enough to show where the fuzzy line is and why I think something falls beyond the fuzziness.
So do that?
I note that it wouldn’t be inaccurate to rephrase my question as a request for you to taboo ‘delusion’.
Wow, real classy, folks. AdeleneDawner posts that she’d take advantage of a delusional person, I scoff at the nobility of doing so, and I get modded down −5, with no explanation, while no one disagrees with her? Pat yourselves on the back, before you return to pulling the wings off birds.
You scoff with no argument while failing repeatedly to provide the clarification that AdeleneDawner is asking for. You haven’t done what you said you were trying to do (show why you think an action falls beyond the fuzzy line), but you’ve insulted others for a perceived crossing of it.
Why not sell the guy the swords? You’re taking it as obvious, but I don’t think it’s clear to others (it’s certainly not clear to me) what your reasoning is. The guy is probably schizophrenic, and so can’t be reasoned out of his delusions. He might be treatable with medication, but if he doesn’t have a legal guardian, nobody can force him to receive treatment as long as he’s staying on the right side of the law and isn’t a danger to himself or others. Plus, schizophrenia frequently does not respond to treatment, and if he’s able to take care of himself, full-time treatment would probably do a great deal of harm to his quality of life, so the expected utility of having him institutionalized would probably be negative.
If you don’t have an argument for why selling him the swords leads to less utility, perhaps this is a moral intuition that should be dispensed with.
Duping someone with clear mental disabilities into giving you their money is a canonical case of something that normal people regard as wrong. Regression to that canonical case, combined with the appeal to subjunctive reciprocity (“Golden Rule” here), was my argument. When someone’s morality permits them this much, there’s not much more I can offer, as any obvious common ground is clearly missing, and efforts to cover the gap are not worth my time.
I would not want someone to so milk me out of my money if I were schizophrenic, even and especially if they could come up with tortured rationalizations about how they deserve the money more, and gosh, I’m an adult and all so I had it coming. The Golden Rule intuition (again, a special case of subjunctive reciprocity, not far from TDT) needs a much better justification for me to abandon it than “hey! I’m missing out on schizophrenics I could money-pump”, so no, that alternative does not immediately suggest itself to me.
A large portion of the population considers cryonics to be a delusion. Would you support a law preventing cryonics institutions from “duping” people out of their money? If not, we’ll probably have to return to the issue, which you still haven’t addressed, of how to practically distinguish between delusions and beliefs.
Would you support a law preventing cryonics institutions from “duping” people out of their money?
The situation here is analogous to an anti-cryonics institution selling cryonics services while not actually setting aside the necessary resources to do so, on the grounds that “they’re wrong, so it won’t make a noticeable difference either way”.
Yeah, that should be illegal. Why shouldn’t it?
If not, we’ll probably have to return to the issue, which you still haven’t addressed, of how to practically distinguish between delusions and beliefs.
I did address it; if you want to criticize my discussion of it, fine—that’s something we can talk about. But if you want to persist in this demand for a computable procedure for resolving a moral dilemma (by distinguishing beliefs from delusion), that is a blatantly lopsided shifting of the burden here.
You, like me, should be seeking to identify where you draw the line, and asking that I give more specificity in the line I draw than in the line you draw is a clear sign you’re not taking this seriously enough to justify a response.
When you’re ready to re-introduce some symmetry, I’ll be glad to go forward. You can start with a (not necessarily computable) description of where you would draw the line.
The situation here is analogous to an anti-cryonics institution selling cryonics services while not actually setting aside the necessary resources to do so, on the grounds that “they’re wrong, so it won’t make a noticeable difference either way”.
Yeah, that should be illegal. Why shouldn’t it?
If the person running the Rapture pet insurance does not actually have the resources to take care of the pets in the event of Rapture, or would not honor their commitment if it took place, then yes, that would be the appropriate analogy, and I would agree that it’s dishonest. But if a person who doesn’t believe cryonics will work still puts in their best effort to get it to work anyway, to outcompete the other institutions offering cryonics services, then do their beliefs on the issue really matter?
I did address it; if you want to criticize my discussion of it, fine—that’s something we can talk about. But if you want to persist in this demand for a computable procedure for resolving a moral dilemma (by distinguishing beliefs from delusion), that is a blatantly lopsided shifting of the burden here.
Can you point out where you did so? I haven’t noticed anything to this effect.
I would personally draw the line at providing services that people would, on average, wish in hindsight that they had not spent their money on, or that only people who are not qualified to take care of themselves would want. If large numbers of people signed up for the service expecting a rapture on a particular day, and discarded their belief in an imminent rapture when that day passed uneventfully, then the insurance policy would meet the first criterion. But most people who believe in an imminent rapture and pass an expected date simply move on to another expected date, or revise their estimate to “soon,” which would justify continued investment in the policy.
Counterfactual reciprocity can be defeated by other incentive considerations. Otherwise I would have to oppose all criminal punishment, since I certainly wouldn’t want to be punished in the counterfactual scenario where I had done something society regarded as bad. (As a matter of fact, my intuition does wince at the idea of putting people in prison etc.)
In this case, not having a norm against money-pumping the deluded helps maintain an incentive against having delusional beliefs. Yes, there are also downsides; but it isn’t obvious to me that they outweigh the benefits.
I am very surprised to find you of all people engaging in a sanctimonious appeal to common moral intuitions—the irony being compounded by the fact that some commenters with known deontological leanings, from whom one might sooner have expected such a thing a priori, are taking the exact opposite line here.
In this case, not having a norm against money-pumping the deluded helps maintain an incentive against having delusional beliefs. Yes, there are also downsides; but it isn’t obvious to me that they outweigh the benefits.
Isn’t it obvious how similar your argument is to the very one in the link you provided, that “stupid people deserve to suffer!”? In any case, being schizophrenic is not some point on a continuous line from “expert rationalist” to “anti-reductionist”—it doesn’t respond to these kinds of incentives, so your upside isn’t even there.
I am very surprised to find you of all people engaging in a sanctimonious appeal to common moral intuitions—the irony being compounded by the fact that some commenters with known deontological leanings, from whom one might sooner have expected such a thing a priori, are taking the exact opposite line here.
Appealing to moral intuitions isn’t inherently deontological—it’s how you identify common ground without requiring that your opponent follow a specific mode of reasoning. And the fact that the people who disagree with me are deontologists—with contorted moral calculuses that “coincidentally” let them do what they wanted anyway—is precisely why their disagreement so shocked me!
There is irony here, though: it lies in how Adelene finds it more morally repugnant to call something “lame”—because of the harm to people with disabilities—than to actually money-pump a person with a real disability! (Does someone have a cluestick handy?)
Isn’t it obvious how similar your argument is to the very one in the link you provided, that “stupid people deserve to suffer!”?
No; my argument is analogous to the following, from the same link:
“Yes, sulfuric acid is a horrible painful death, and no, that mother of 5 children didn’t deserve it, but we’re going to keep the shops open anyway because we did this cost-benefit calculation.”
That is: “Yes, it’s unfortunate that some schizophrenics may be bilked unfairly, but we’re going to let people sign pet-rapture contracts anyway because we did this cost-benefit calculation.”
In any case, being schizophrenic is not some point on a continuous line from “expert rationalist” to “anti-reductionist”—it doesn’t respond to these kinds of incentives, so your upside isn’t even there.
Most rapture-believers aren’t schizophrenic. They’re mostly just people who arrive at beliefs via non-rational processes encouraged by their social structure. The upside seems pretty clear to me—in fact this is just a kind of prediction market.
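To make the prediction-market framing concrete, here is a minimal sketch with made-up numbers: suppose the contract costs $135 and the buyer values guaranteed post-rapture pet care at $10,000. Buying is worthwhile to them only if p × $10,000 > $135, i.e. only if they assign a probability greater than 1.35% to a rapture occurring while the contract is in force; the atheist seller, who assigns p ≈ 0, expects to pocket the $135 while (by their own lights) never having to perform. Each side is trading on its own probability estimate, which is exactly the structure of a bet placed on a prediction market.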
No; my argument is analogous to the following, from the same link: …The upside seems pretty clear to me—in fact this is just a kind of prediction market.
You think people who bet poorly on prediction markets deserve to lose their money. You consider this to be a species of prediction market. Ergo, you believe that people who believe in the rapture deserve to lose their money.
That is: “Yes, it’s unfortunate that some schizophrenics may be bilked unfairly, but we’re going to let people sign pet-rapture contracts anyway because we did this cost-benefit calculation.”
Which? (Note: “I can spend money stolen from stupid people better than they can” isn’t so much a “cost-benefit calculation” as it is the “standard thief’s rationalization”.)
Most rapture-believers aren’t schizophrenic. They’re mostly just people who arrive at beliefs via non-rational processes encouraged by their social structure.
That’s a good point. However,
I see the potential customers as people who have “contracted a minor case of reason”, for lack of a better term. They seek enough logical closure over their beliefs to care about their pets after a rapture. That’s evidence that their reflective equilibrium does not lie in going whole hog with the rapture thing, and it is not a mere case of weird preferences, but preferences you can expect to change in a way that will make them regret this purchase. This puts it closer to the dark arts/akrasia-pump/fraud category.
You think people who bet poorly on prediction markets deserve to lose their money
Premise denied. As a consequentialist, I don’t believe in “desert”. I don’t think criminals “deserve” to go to prison; I just reluctantly endorse the policy of putting them there (relative to not doing anything) because I calculate that the resulting incentive structure leads to better consequences. Likewise with prediction markets: for all that the suffering of individuals who can’t tell truth from falsehood may break my heart, it’s better on net if society at large is encouraged to do so.
I see the potential customers as people who have “contracted a minor case of reason”, for lack of a better term. They seek enough logical closure over their beliefs to care about their pets after a rapture. That’s evidence that their reflective equilibrium does not lie in going whole hog with the rapture thing,
I see it differently: as evidence that they actually believe their beliefs, in the sense of anticipating consequences, rather than merely professing and cheering.
and it is not a mere case of weird preferences, but preferences you can expect to change in a way that will make them regret this purchase
Yes, but only provided their beliefs became accurate. And being willing to cause them regret in order to incentivize accurate beliefs is the whole point.
Premise denied. As a consequentialist, I don’t believe in “desert”.
Right, you just believe in concepts functionally isomorphic to desert, purport to have achieved a kind of enlightened view by this superficial disavowal of desert, and then time-suck your opponents through a terminology dispute. Trap spotted, and avoided.
Seriously, have you ever considered being charitable? Ever? Such as asking what data might have generated a comment, rather than automatically assuming that your interlocutor (notice I didn’t say “opponent”) is interested only in signaling enlightenment?
This is Less Wrong. In case you forgot.
The point of the paragraph whose first two sentences you quoted was to answer your question about what the cost-benefit calculation is, i.e. what the purpose of pet-rapture-contracts/prediction markets is. The benefit is that accurate beliefs are encouraged. It isn’t (noting your edit) that those who start out with better beliefs get to have more money, which they can make better use of. (That could potentially be a side benefit, but it isn’t the principal justification.)
How do you know she didn’t just get more upvotes than downvotes?
As an extreme example: if one person gets 105 upvotes and 100 downvotes, and another gets 100 upvotes and 105 downvotes, that’s not much of a difference.
Don’t assume too much about votes. Some are probably from people who think you can do better. You show a lot of self-awareness by describing yourself as “scoff[ing]”; you’re just a small step away from upvoted comments expressing this same opinion of yours on this same topic.
See update.
Real noble of you.
See second update.