A forming thought on post-rationality. I’ve been reading more samzdat lately and thinking about legibility and illegibility. Me paraphrasing one point from this post:
State-driven rational planning (episteme) destroys local knowledge (metis), often resulting in metrics getting better yet life getting worse, and it’s impossible to complain about this in a language the state understands.
The quip that most readily comes to mind is “well if rationality is about winning, it sounds like the state isn’t being very rational, and this isn’t a fair attack on rationality itself” (this comment quotes a similar argument).
Similarly, I was having a conversation with two friends once. Friend A expressed that they were worried that if they started hanging around more EAs and rationalists, they might end up having a super boring optimized life and never do fun things like cook meals with friends (because soylent) or go dancing. Friend B expressed, “I dunno, that sounds pretty optimal to me.”
I don’t think Friend A was genuinely worried about the general concept of optimization. I do think they were worried about what their implementation (or their friends’ implementation) of “optimality” would look like in their own lives.
The most charitable claim I currently have about the post-rationalist mindset: the best and most technical specifications that we have for what things like optimal/truth/rational might look like contain very little information about what to actually do. In your pursuit of “truth”/”rationality”/”the optimal” as it pertains to your life, you will be making up most of your art along the way, not deriving it from first principles. Furthermore, thinking in terms of truth/rationality/optimality will [somehow] lead you to make important errors you wouldn’t have made otherwise.
A more blasé version of what I think the post-rationalist mindset is: you can’t handle the (concept of the) truth.
Epistemic status: Some babble, help me prune.
My thoughts on the basic divide between rationalists and post-rationalists, lawful thinkers and toolbox thinkers.
Rat thinks: “I’m on board with The Great Reductionist Project, and everything can in theory be formalized.”
Post-Rat hears: “I personally am going to reduce love/justice/mercy and the reduction is going to be perfect and work great.”
Post-Rat thinks: “You aren’t going to succeed in time / in a manner that will be useful for doing anything that matters in your life.”
Rat hears: “It’s fundamentally impossible to reduce love/justice/mercy and no formalism of anything will do any good.”
Newcomb’s Problem
Another way I see the difference is that the post-rats look at Newcomb’s problem and say, “Those causal rationalist losers! Just one-box! I don’t care what your decision theory says, tell yourself whatever story you need in order to just one-box!” The post-rats rail against people who do things like two-boxing because “it’s optimal”.
The most indignant rationalists are the ones who put in the effort to create whole new formal decision theories that can one-box, and don’t like that the post-rats think they’d be foolish enough to two-box just because a decision theory recommends it. While I think this gets the basic idea across, this example is also cheating. Rats can point to formalisms that do one-box, and in LW circles there even seem to be people who have worked the rationality of one-boxing deep into their minds.
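To make the one-boxing intuition concrete, here’s a minimal sketch (my own illustration, not something from any of the linked posts) of the evidential expected-value comparison, assuming the standard $1,000 / $1,000,000 payoffs and an imperfect predictor:

```python
# Expected winnings in Newcomb's problem, treating the predictor's accuracy
# as a probability. Standard payoffs: the opaque box holds $1,000,000 iff
# the predictor foresaw one-boxing; the transparent box always holds $1,000.

def expected_value(one_box: bool, accuracy: float) -> float:
    """Evidential expected winnings given your choice and predictor accuracy."""
    big, small = 1_000_000, 1_000
    if one_box:
        # You get the big box only when the predictor correctly foresaw this.
        return accuracy * big
    # Two-boxing always nets the small box, plus the big box in the cases
    # where the predictor was wrong (i.e., it predicted one-boxing).
    return small + (1 - accuracy) * big

for acc in (0.5, 0.9, 0.99):
    print(f"accuracy={acc}: one-box={expected_value(True, acc):,.0f}, "
          f"two-box={expected_value(False, acc):,.0f}")
# One-boxing pulls ahead once accuracy exceeds ~50.05%.
```

(A committed two-boxer would object that this conditions on the prediction, which is exactly the point of contention between the decision theories.)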
Hypothesis: All the best rationalists are post-rationalists; they also happen to care enough about AI Safety that they continue to work diligently on formalism.
Alternative hypothesis: Post-rationality was started by David Chapman being angry at historical rationalism. Rationality was started by Eliezer being angry at what he calls “old-school rationality”. Both talk a lot about how people misuse frames, pretend that rigorous definitions of concepts are a thing, and broadly don’t have good models of actual cognition and the mind. They are not fully the same thing, but most of the time when I’ve talked to someone identifying as “postrationalist”, they picked up the term from David Chapman and were contrasting themselves with historical rationalism (sometimes confusing historical rationalists with current rationalists), not rationality as practiced on LW.
I’d buy that.
Any idea of a good recent thing/person/blog that exemplifies the historical rationalist mindset? The only context I have for the historical rationalists is Descartes, and I have not personally seen anyone who felt super Descartes-esque.
The book I most often see mentioned in conversation as explaining historical rationalism is “Seeing Like a State”, though I have not read the whole book myself.
Cool. My back-of-the-mind plan is “actually read the book, find big names in the top-down planning regimes, see if they’ve written stuff” for whenever I want to replace my Descartes stereotype with substance.