Epistemic status: Some babble, help me prune.
My thoughts on the basic divide between rationalists and post-rationalists, lawful thinkers and toolbox thinkers.
Rat thinks: “I’m on board with The Great Reductionist Project, and everything can in theory be formalized.”
Post-Rat hears: “I personally am going to reduce love/justice/mercy and the reduction is going to be perfect and work great.”
Post-Rat thinks: “You aren’t going to succeed in time / in a manner that will be useful for doing anything that matters in your life.”
Rat hears: “It’s fundamentally impossible to reduce love/justice/mercy and no formalism of anything will do any good.”
Newcomb’s Problem
Another way I see the difference is that the post-rats look at Newcomb’s problem and say, “Those causal rationalist losers! Just one-box! I don’t care what your decision theory says, tell yourself whatever story you need in order to just one-box!” The post-rats rail against people who do things like two-boxing because “it’s optimal”.
The most indignant rationalists are the ones who put in the effort to create whole new formal decision theories that can one-box, and don’t like that the post-rats think they’d be foolish enough to two-box just because a decision theory recommends it. While I think this gets the basic idea across, this example is also cheating. Rats can point to formalisms that do one-box, and in LW circles there even seem to be people who have worked the rationality of one-boxing deep into their minds.
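To make the stakes concrete, here is a minimal sketch of the payoff arithmetic; the standard $1,000,000/$1,000 amounts and a 99%-accurate predictor are my assumptions for illustration, not something specified above:

```python
# Minimal sketch of Newcomb's payoff structure (illustrative assumptions:
# $1,000,000 in the opaque box, $1,000 in the transparent one, and a
# predictor that is right with probability p).

def expected_payoff(one_box: bool, predictor_accuracy: float = 0.99) -> float:
    """Expected winnings given your choice and the predictor's accuracy."""
    p = predictor_accuracy
    if one_box:
        # The opaque box holds $1M only if the predictor foresaw one-boxing,
        # which happens with probability p when you actually one-box.
        return p * 1_000_000
    else:
        # Two-boxers always pocket the $1k; the $1M is there only when the
        # predictor (wrongly) foresaw one-boxing, probability 1 - p.
        return (1 - p) * 1_000_000 + 1_000

for choice in (True, False):
    label = "one-box" if choice else "two-box"
    print(f"{label}: ${expected_payoff(choice):,.0f}")
# one-box: $990,000
# two-box: $11,000
```

On these numbers the one-boxer comes out roughly ninety times ahead, which is the intuition the post-rats want you to keep regardless of what any particular formalism says.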
Hypothesis: All the best rationalists are post-rationalists; they also happen to care enough about AI Safety that they continue to work diligently on formalism.
Alternative hypothesis: Post-rationality was started by David Chapman being angry at historical rationalism. Rationality was started by Eliezer being angry at what he calls “old-school rationality”. Both talk a lot about how people misuse frames, pretend that rigorous definitions of concepts are a thing, and broadly don’t have good models of actual cognition and the mind. The two are not fully the same thing, but most of the time when I talked to someone identifying as “postrationalist”, they had picked up the term from David Chapman and were contrasting themselves with historical rationalism (and sometimes confusing it with current rationalism), not with rationality as practiced on LW.
I’d buy that.
Any idea of a good recent thing/person/blog that embodies that historical rationalist mindset? The only context I have for historical rationalists is Descartes, and I have not personally seen anyone who felt super Descartes-esque.
The default book I see mentioned in conversation as explaining historical rationalism is “Seeing Like a State”, though I have not read the whole book myself.
Cool. My back-of-the-mind plan is “Actually read the book, find big names in the top-down planning regimes, see if they’ve written stuff” for whenever I want to replace my Descartes stereotype with substance.