From Tetlock's Superforecasting work we know that commitment to one detailed model makes you worse at the kind of Bayesian reasoning that Superforecasting is about.
It’s also not like we completely got rid of Bayesian epistemology. We still do a lot of things, like betting, that come from that frame, but generally LessWrong is open to reasoning in a lot of different ways.
There’s the textbook definition of rational thinking from Baron’s Thinking and Deciding:
The best kind of thinking, which we shall call rational thinking, is whatever kind of thinking best helps people achieve their goals. If it should turn out that following the rules of formal logic leads to eternal happiness, then it is “rational thinking” to follow the laws of logic (assuming that we all want eternal happiness). If it should turn out, on the other hand, that carefully violating the laws of logic at every turn leads to eternal happiness, then it is these violations that we shall call “rational.”
When I argue that certain kinds of thinking are “most rational,” I mean that these help people achieve their goals. Such arguments could be wrong. If so, some other sort of thinking is most rational.
I do think that’s the current spirit of LessWrong, and there’s a diversity of ways of thinking that get used within it.
I think one great talk about this difference is Peter Thiel: You Are Not a Lottery Ticket | Interactive 2013 | SXSW. In the Bayesian frame, everything is a lottery ticket.