Thanks for the quick reply! I’ll check those posts out.
From what I’ve seen (which, again, isn’t much), gears-level reasoning just involves comprehensively investigating a model. I’m sure I’m misunderstanding it, especially if it’s now the basis for most of the posts around here. Could you enlighten me?
Gears-level reasoning means that you have a model of how the parts of a system interact, and you reason on the basis of that model.
Given that reality is usually very complicated, that means you are operating on a model that's a simplification of reality. In Tetlock's distinction, good Bayesian reasoning is foxy: it's about not committing to any single model but holding multiple models and weighting between them.
If, on the other hand, you are doing gears-style reasoning, you are usually acting as a hedgehog who treats one model as the source of reliable information.
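To make the contrast concrete, here's a minimal sketch (my own illustration, not anything from Tetlock; the model names, weights, and numbers are all made up): the fox averages several models' forecasts, weighted by how much credence it gives each, while the hedgehog takes a single model's output at face value.

```python
# Minimal sketch of the two styles (illustrative only; all numbers made up).
# A "fox" keeps several models and weights their forecasts by the credence
# it assigns each model; a "hedgehog" commits to one detailed model.

def fox_forecast(models, weights):
    """Mixture forecast: credence-weighted average over all models."""
    total = sum(weights)
    return sum(w * m() for m, w in zip(models, weights)) / total

def hedgehog_forecast(model):
    """Committed forecast: take the single model's output at face value."""
    return model()

# Two toy models, each returning a probability for the same event.
base_rate_model = lambda: 0.30  # outside view: historical base rate
causal_model = lambda: 0.70     # inside view: detailed causal story

print(fox_forecast([base_rate_model, causal_model], [0.6, 0.4]))  # 0.46
print(hedgehog_forecast(causal_model))                            # 0.7
```

The fox's answer shifts as the weights shift between models; the hedgehog's answer only moves if its one model changes.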
Hmm… I’m probably being thick, but it sounds like gears-based reasoning is just commitment to a detailed model. That wouldn’t help you design the model, among other things.
I may need to investigate this on my own; I don’t want to tangle you in a comment thread explaining something over and over again.
From Tetlock's Superforecasting work we know that committing to one detailed model makes you worse at the kind of Bayesian reasoning that superforecasting is about.
I think one great talk about the difference is Peter Thiel: You Are Not a Lottery Ticket | Interactive 2013 | SXSW. In the Bayesian frame, everything is lottery tickets.
It’s also not like we completely got rid of Bayesian epistemology. We still do a lot of things, like betting, that come from that frame, but generally LessWrong is open to reasoning in a lot of different ways.
There’s the textbook definition of rational thinking from Baron’s Thinking and Deciding:
The best kind of thinking, which we shall call rational thinking, is whatever kind of thinking best helps people achieve their goals. If it should turn out that following the rules of formal logic leads to eternal happiness, then it is “rational thinking” to follow the laws of logic (assuming that we all want eternal happiness). If it should turn out, on the other hand, that carefully violating the laws of logic at every turn leads to eternal happiness, then it is these violations that we shall call “rational.”
When I argue that certain kinds of thinking are “most rational,” I mean that these help people achieve their goals. Such arguments could be wrong. If so, some other sort of thinking is most rational.
I do think that’s the current spirit of LessWrong, and there’s a diversity of ways of thinking that get used here.