The whole idea conflates refusing to accept the bet for reasons that apply to bets in general with refusing to accept it because you’re not really confident that UFOs are mundane.
If there are reasons to refuse bets in general, that apply to the LessWrong community in aggregate, something has gone horribly horribly wrong.
No one is requiring you personally to participate, and I doubt anyone here is going to judge you for reluctance to engage in bets with people from the Internet who you don’t know. Certainly I wouldn’t. But if no one took up this bet, it would have a meaningful impact on my view of the community as a whole.
It is my opinion that for the LessWrong community in aggregate, something has gone horribly horribly wrong.
At a minimum, LWers should have 1) observed that normies don’t bet like this and 2) applied Chesterton’s Fence.
It’s often hard to give an exhaustive, bulletproof explanation of why normies act in some way that does, in fact, make sense as a way to act. Rationalists have a habit of saying “well, I don’t see a rational reason for X, so I can just discard X”. That is exactly the habit Chesterton’s Fence is meant to prevent.
It’s easy to explain why people who hold beliefs for signaling purposes don’t want to bet on those beliefs. It interferes with getting status points by exposing bullshit.
As someone who’s gambled professionally, I believe the (Chesterton’s) fence around betting for normies exists because most bets are essentially scams, which is why I’m entirely okay knocking it down for LWers. Let me elaborate.
Probability is complicated and abstract. Not only that, human intuition is really bad at it. Nearly all “bets” throughout modern history have not been the kind of skin-in-the-game prediction competition we’re praising on LessWrong—they’ve been predatory: one person who understands probability using emotional and logical manipulation to take money from someone who doesn’t.
Society protects people with taboos. “Betting is icky” is a meme that can easily spread, and will quickly reproduce, because it’s adaptive in this betting environment. [Dissertation about Bayesian reasoning, calibration, and the Kelly Criterion] is NOT a meme that can easily spread, because it’s far too complex and long, and thus it will not reproduce (even though it is also adaptive).
Or at least, it can’t spread in the normie population, but it CAN on LessWrong, which is why, on LessWrong, most bets are not scams. They are, in fact, what the scammers falsely proclaimed their own bets to be—friendly competitions wherein two people who disagree about the future both put skin in the game.
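To make the complexity gap concrete: even the smallest piece of that dissertation, the Kelly criterion, takes a formula to state. A minimal sketch in Python (the function name and the even-money example are just illustrative):

```python
# Minimal sketch of the Kelly criterion mentioned above.
# For a bet paying net odds b (win b units per unit staked) that you
# judge to win with probability p, the bankroll fraction maximizing
# long-run log-growth is f* = (b*p - (1 - p)) / b.

def kelly_fraction(p: float, b: float) -> float:
    """Optimal bankroll fraction for win probability p and net odds b."""
    q = 1.0 - p
    return max((b * p - q) / b, 0.0)  # never stake on a negative-edge bet

# Example: a 60% shot at even money (b = 1) suggests staking 20% of bankroll.
print(kelly_fraction(0.6, 1.0))  # 0.2
```

Compare how much setup that took with how little “betting is icky” needs.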
The sportsbooks and casinos we have today are predators. From their celebrity endorsements, to the way they craft their commercials, to their messaging around winning (and especially parlays), they effectively lie about what they’re selling while trying to create addicts. I’ve engaged with many people across the betting experience spectrum (from other winners, to big losers, to smart people who were small losers and realized they needed to quit), and it’s pretty clear to me that “betting = icky” is a reasonable idea, even today. The fence around it is not Chesterton’s, though. It’s there to help regular people avoid a certain species of predator gunning for their capital.

We can safely knock it down here.
I don’t doubt that a lot is wrong with the LW community, both in aggregate and among many individuals. I’m not sure WHAT wrongness you’re pointing out, though.
There are good reasons for exploring normie behavior and being careful of things you don’t understand (Chesterton’s fence). They apply most strongly to activities at scale, especially those that include normies in the actor or patient list.
Wagering as a way to signal belief, to elicit evidence of different beliefs, and to move resources to the individuals who are less wrong than the market (or counterparty in a 2-party wager) is pretty well-studied, and the puzzle of why most humans don’t do it more is usually attributed to those illegible reasons, which include signaling, status, and other outside-of-wager considerations.
IMO, that’s enough understanding to tear down the fence, at least when people who choose not to participate aren’t penalized for that choice.
That seems so clear to me that I’m surprised there can be any objection. Can you restate why you think this indicates “horribly wrong”, either as a community, or as the individuals choosing to offer wagers?
I can’t give you an exhaustive list of the problems I have with betting, but some reasons:
Properly phrasing a bet is difficult, like writing a computer program that runs perfectly the first time, or phrasing a wish to a genie. I’m no good at avoiding loopholes, and there’s no shortage of rationalists who’d exploit them as long as they can get a win. And just saying “I won’t prey on any technicalities” isn’t enough without being able to read your mind and know what you consider a technicality.
Betting has social overhead. This is the “explain to your parents/wife/children why you bet this money” scenario.
Some people value money differently than I do. Some people just have glitchy HumanOS 1.0 which leads them to spend money irrationally. Some people are just overconfident. If I bet against such a person I may win money and be an overall winner after X years, but until the X years are up, I’ll have essentially lost the argument, because my opponent was willing to spend money—there must be some substance behind his argument or he wouldn’t do that, right?
As others have pointed out, it’s a bad idea to trust random people on the Internet to pay me money in X years. “I have a reputation” is not enough when real money is involved. And I don’t have access to the sophisticated information used by financial services in the real world to determine how likely someone is to be able to pay money in the future based on past performance. And it’s not unknown for a trusted person to run away with money. (That wasn’t even the incident I was thinking of, but I couldn’t find that one.) (Edit: does not apply, since you’d be the one paying the money)
To get over the Chesterton’s Fence bar, you’re going to need more than just “well, it’s been studied and people do it for irrational reasons”. Social customs evolve as memes, and something that people don’t do for reason X may nevertheless have persisted because it is, for reason X, beneficial.
At any rate, I haven’t seen your studies and I’m not going to trust that you’ve described them properly without some links.
Even if I did get links and read the studies, we get into epistemic learned helplessness. I wouldn’t change my mind about betting just because the studies seem convincing and I can’t find any flaw in them using solely my own knowledge. I’d like to at least hear from opponents of those studies and see how convincing they are, and see how controversial the studies are. Then I’d have to check whether they might be subject to the replication crisis. And at this point, the overhead of researching betting will itself make most bets unprofitable.
Rationalists have a habit of stringing together poorly founded estimates to get more poorly founded estimates and acting based on them. I don’t agree with this practice, but concluding that I should risk money here would imply paying attention to poorly founded estimates.
Thanks for the detail—it makes me realize I responded unclearly. I don’t understand your claim (presumably based on this offer of a wager) that “the LessWrong community in aggregate, something has gone horribly horribly wrong.”
I don’t disagree with most of your points—betting is a bit unusual (in some groups; in some it’s trivially common), there are high transaction costs, and practical considerations outweigh the information value in most cases.
I don’t intend to say (and I don’t THINK anyone is saying) you should undertake bets that make you uncomfortable. I do believe (but tend not to proselytize) that aspiring rationalists benefit a lot by using a betting mindset in considering their beliefs: putting a number to it and using the intuition pump of how you imagine feeling winning or losing a bet is quite instructive. In cases where it’s practical, actually betting reifies this intuition, and you get to experience actually changing your probability estimate and acknowledging it with an extremely-hard-to-fool-yourself-or-others signal.
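For concreteness, here is a minimal sketch of that “put a number on it” loop; the Brier score below is one standard choice of scoring rule, not the only one. Lower is better, and flat 50/50 guessing earns 0.25:

```python
# Minimal sketch of scoring stated probabilities against outcomes.
# The Brier score is one standard calibration measure: the mean squared
# error between your stated probability and what actually happened.

def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """Mean squared error of (stated probability, resolved outcome) pairs."""
    return sum((p - float(won)) ** 2 for p, won in forecasts) / len(forecasts)

# Example: two confident calls that resolved true, one 70% call that missed.
print(brier_score([(0.9, True), (0.8, True), (0.7, False)]))  # 0.18
```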
I don’t actually follow the Chesterton’s fence argument. What is the taboo you’re worried that you don’t understand well enough to break (in some circumstances)? “normies don’t do this” is a rotten and decrepit enough fence that I don’t think it’s sufficient on its own for almost anything that’s voluntarily chosen by participants and has plausibly low (not provably, of course, but it’s not much of a fence to start with) externalities.
I don’t understand your claim (presumably based on this offer of a wager) that “the LessWrong community in aggregate, something has gone horribly horribly wrong.”
If you’re asking how I would distinguish “horribly, horribly wrong” from “just somewhat horribly wrong” or plain “wrong”, my answer would be that there’s no real distinction and I just used that particular turn of phrase because that’s the phrase that evand used.
I don’t intend to say (and I don’t THINK anyone is saying) you should undertake bets that make you uncomfortable.
Sure, but “bets that make me uncomfortable” is “all rationalist bets”.
“normies don’t do this” is a rotten and decrepit enough fence that I don’t think it’s sufficient on its own

I disagree.
I should be clearer yet. I’m wondering how you distinguish “the community in aggregate has gone (just somewhat) horribly wrong” from “I don’t think this particular mechanism works for everyone, certainly not me”.
If making actual wagers makes you uncomfortable, don’t do it. If analyzing many of your beliefs in a bet-like framing (probability distribution of future experiences, with enough concreteness to be resolvable at some future point) is uncomfortable, I’d recommend giving that part of it another go, as it’s pretty generally useful as a way to avoid fuzzy thinking (and fuzzy communication, which I consider a different thing).
In any case, thanks for the discussion—I always appreciate hearing from those with different beliefs and models of how to improve our individual and shared beliefs about the world.
I would also take issue with the “mundane” part. What does that even mean? Any explanation that is good enough to cover all UFO cases, with their myriad of physics-defying feats, is in itself proof of supertechnology, which should also be under the bet.
For example, an explanation that the supposed UFOs are really experimental military aircraft would simply mean that the military possesses technology that is effectively “magic” compared to civilian aircraft technology. If you witness a flying object that can push Mach 10 effortlessly and take instant turns without any inertia, does it matter whether it’s an alien craft or a human military craft? It should still belong on the list.