I have little idea what this is trying to convey, since
“Alice: Proposes some clever thing to bet on with crystal clear resolution criteria that perfectly captures everything”
hides the actual meaning of everything that follows. Like, is Bob betting on his actual beliefs exactly matching some profile, or in some direction, or a given range, or on object-level consequences of his beliefs, or what? Me trying to guess at what is meant from the subsequent text wasn’t much use, since it appears to be talking about both beliefs and consequences and seems to be inconsistent with every particular hypothesis I formed in the few minutes I was thinking about it.
On top of that, the escalation totally pinged my scam-detector and so now I’m wondering whether that’s an intentional part of the story or not.
My thoughts on the last line were along the lines of “Alice related that she was behaving like a risk-seeking gullible idiot, and that’s why she got the job? Confusing! It’s more evidence that Alice is actually trying to scam Bob, lying about that as part of the scam to connect the same risky action she wants Bob to take with his daughter’s career goals.”
However that doesn’t seem to be anything that matches up with the post’s title, so doesn’t seem likely to be anything that the author intended to convey.
Here is what I was going for with this post (spoilers). I’m not sure how much of it was clear.
Oftentimes, if you ask someone for their belief, they'll say A. But if you push them on it, e.g. by asking them to bet, they'll backpedal from belief A to belief B (B is often more moderate than A). And if you push them further, or ask them to bet a larger amount of money, they'll backpedal again, from belief B to belief C.
What is going on here? If the person starts off with belief A but, after being pushed, ends up with belief F (going from A to B, to C, to D, to E, to F), maybe we can call belief F their "true" belief?
Nah. I don't think "true" is a good label. It's too generic, and it begs the question of "what do you mean by true?". I think "best effort belief" is a pretty good label, though. After all, it's the belief one would arrive at after making their "best effort": the type of effort they'd make if, say, $100,000 were on the line.
I also like “best effort” because it can be easily extended to phrases like “3/4 effort belief”, “reactionary belief”, and “extraordinary effort belief”.
This seems like a useful frame to look at things through, largely because it makes salient that most of our beliefs are not very high-effort. It'd be a cool experiment to try to quantify this: e.g., do what Alice did to Bob and see how much the subjects' confidence changes in response to the bet size.
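To make the experiment concrete, here's a minimal sketch of what the analysis might look like. Everything in it is invented for illustration: the bet sizes, the confidence numbers, and the choice of summary statistic are all my assumptions, not anything from the post.

```python
import math

# Hypothetical data for one subject: stated confidence in some claim
# as the offered bet size escalates (all numbers are made up).
bet_sizes = [10, 100, 1_000, 10_000, 100_000]  # dollars
confidences = [0.95, 0.90, 0.80, 0.70, 0.65]   # stated P(claim is true)

# One crude summary statistic: average confidence drop per 10x
# increase in stakes.
drops = [
    (confidences[i] - confidences[i + 1])
    / math.log10(bet_sizes[i + 1] / bet_sizes[i])
    for i in range(len(bet_sizes) - 1)
]
avg_drop_per_decade = sum(drops) / len(drops)
print(f"Average confidence drop per 10x of stakes: {avg_drop_per_decade:.3f}")
```

A large drop per decade would suggest the initial answer was a low-effort belief; a flat curve would suggest it was already close to the subject's best-effort belief.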
> hides the actual meaning of everything that follows
In some sense, I agree that it does. But not in an important sense.
Imagine that the framing of the conversation was even more abstract: instead of being about "immigration policy", suppose it were about "X". Maybe Alice asks Bob how confident he is in X; he starts off by saying 95%, backpedals to 90% when offered a bet, then to 80%, and so on.
Such a framing would hide something, but the important information (see spoiler above) would be retained.
> On top of that, the escalation totally pinged my scam-detector and so now I'm wondering whether that's an intentional part of the story or not.
No, it was not. I tried to portray things as:
Taking place in a somewhat dath-ilanian world (President Hanson; Bayeslight; oddsmaker being a prestigious job).
Alice and Bob being longtime friends.
So then, I don’t think there’s anything sketchy about wanting to bet on your beliefs. I think it’s rational and virtuous.
In our universe, though, I agree that it would usually be sketchy, but I could imagine scenarios where it wouldn't be. For example, if I had been part of a rationalist community for many years and known someone as well as Bob knows Alice, and they wanted to bet with me like this, my scam-detector alarms wouldn't necessarily go off. As another example, I get the sense that this sort of thing could happen in certain circles of high-level gamblers with good epistemics.
> My thoughts on the last line were along the lines of "Alice related that she was behaving like a risk-seeking gullible idiot, and that's why she got the job? Confusing! It's more evidence that Alice is actually trying to scam Bob, lying about that as part of the scam to connect the same risky action she wants Bob to take with his daughter's career goals."
Yeah, that’s not what I was going for. Here’s what I was going for:
Betting on beliefs is a virtue that is useful on its own, causally connected to other good things, and correlated with a bunch of other good things. The person who hired Alice did so for these reasons.
Even in dath ilan, having someone push to bet orders of magnitude more is sketchy. It's doubly sketchy in Alice's story, where it's "some lady I met at this bar" rather than someone Bob knows, and "… that's how I got my high-status job" makes it triply sketchy.
Perhaps. Although it feels to me like it’s a reasonable thing to suspend disbelief on.