I did all the exercises above. Here’s what I wrote down during the timed sections. (It’s a stream-of-consciousness account, so it may not be very clear.)
How would you generalize the common problem in the above arguments? You have 2 minutes.
The structure of the reasoning does not necessarily correlate with one outcome more than others: you say A because of X, but I can equally argue B because of X.
But I’m confused, because I can do this for any argument that’s not maximally well-specified. Like, there’s always a gotcha. If I argue for the structure of genetics from the pattern of children born with certain features, I could also use that same evidence, combined with an anti-inductive prior, to argue the opposite. I’m not quite sure why some things feel like they prove too much and some don’t. I suppose it’s just “in the context of my actual understanding of the situation, do I feel like this argument pins down a world-state positively correlated with the belief or not?”, and if it doesn’t, then I can neatly express this by showing it can prove anything, because it’s not actually real evidence.
Oh huh, maybe that’s wrong. It’s not that it isn’t evidence for anything, it’s that if it were evidence for this it would be evidence for many inconsistent things. (Though I think those two are the same.)
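Here’s one attempt to make that precise (my own formalization, not anything from the post): if reason X would be just as available under A as under ¬A, then in likelihood terms P(X|A) = P(X|¬A), and Bayes gives

$$\frac{P(A \mid X)}{P(\lnot A \mid X)} \;=\; \frac{P(X \mid A)}{P(X \mid \lnot A)} \cdot \frac{P(A)}{P(\lnot A)} \;=\; \frac{P(A)}{P(\lnot A)},$$

i.e. the posterior odds equal the prior odds and X moves nothing. On this reading, “X supports many inconsistent conclusions” and “X isn’t evidence at all” really are the same condition, just phrased differently.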
What algorithm were you running when you solved the above problems? Is there a more ideal/general algorithm? You have 3 minutes.
Hmm, I did like the thing that happened, actually. Normally in such a disagreement with a person, I would explain the structure of my beliefs around the thing they called a ‘reason’. I’d do lots of interpretive work like that: “Let me explain the process by which smart people get their beliefs and when those processes are/aren’t truth-tracking”, or “Let me explain what heuristics help predict whether a startup is successful”, or “Let me explain what p-hacking is”. But in all of these, the new mental motion was much cleaner/cheaper: producing a small impossibility proof.
I think I normally avoid such proofs because they’re non-constructive: they don’t tell you where the mistake was or how that part of the world works, and I’m often worried this will feel demotivating, or like a conversation-killer, for the person I’m talking with. But I think it’s worth thinking this way for myself more. I do want to practice it, certainly. I should be able to use all the tools of proof and disproof, not just those that make conversations go smoothly.
Some general thoughts
I found doing the exercises very enjoyable.
I think the answers here could’ve been written more to a format. These aren’t very open-ended questions, and if I’d practiced matching a format, that would’ve drilled a more specific tool better. But it’s not clear that’s appropriate.
I didn’t like how all the examples were of the “don’t believe a dumb low-status thing” variety. I think people often build epistemologies around making sure to never be religious, endorse a failed startup idea, or believe in homeopathy, but I think you should mostly build them around making sure you will produce successful insights in physics, or build a successful startup, which is a different frame. I would’ve liked much more difficult examples in areas where the right choice isn’t clear purely from pattern-matching to low-status beliefs.
The post tells people to sit by a clock. I think at the start I would’ve told people to find a timer by googling ‘timer’ (when you do that, one just appears in Google); otherwise I expect most folks would have bounced off and not done those exercises.
I really liked the ‘reflect on the general technique’ sections, they were excellent and well-placed.
Wow, this is exactly the type of feedback I wanted, thank you!
I’ve changed my view on this, and my current model is the frame “I can prove anything in the set A because of reason X”.
Like, I can prove a certain set of facts about the natural numbers using induction, but to claim that induction proves all things about the real numbers or morality or… is proving too much.
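To sketch what I mean (my gloss, not a claim from the original post): the induction principle is explicitly scoped to the naturals,

$$\big(P(0) \;\land\; \forall n \in \mathbb{N},\ P(n) \Rightarrow P(n+1)\big) \;\Longrightarrow\; \forall n \in \mathbb{N},\ P(n),$$

and it works there because every natural number is reachable from 0 by finitely many successor steps. The reals have no such generating structure (and morality certainly doesn’t), so reason X here simply doesn’t license claims outside the set it was built for.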
I would rewrite the post to focus on questions around that, such as:
What set of claims do you think reason X proves?
How do you know that reason X proves those types of claims?
(And of course I’d figure out how to phrase these things more tactfully.)
I’ve also enjoyed Thinking Physics and TurnTrout’s AU-sequence-type questions more than my “pattern match to low-status belief” ones (though I do like my generalization and algorithm questions), so I think I understand your point there.