Thanks for the careful engagement and thoughtful critique of rationality! Strong upvote.
While I disagree with many points I hope it gets a good rebuttal from someone (sadly I haven’t the time).
I only gave it a brief skim, but my sense about a number of the points you raise is that they are arguments that humanity’s understanding of rationality is incomplete, not that further work on it is going down the wrong track. I’m not an expert in physics, but I believe one could criticize many theories of physics for sometimes giving inaccurate predictions of phenomena in the world (e.g. dark matter, quantum gravity), but they’re still (a) able to help us understand a lot of phenomena, (b) better than prior theories, and (c) further physics research seems like the right thing to do. I have a similar sense when you discuss ways in which humans don’t appear to have static utility functions and probability distributions over world-states – treating myself and others as though this is approximately true has (I believe) helped me make a lot of decisions better (“Shut up and multiply!”), and I expect that the underlying reality will end up agreeing with this model in many places while substantially improving our understanding in many others.
Hey, thanks for responding! Re the physics analogy, I agree that improvements in our heuristics are a good thing:
However, perhaps you have already begun to anticipate what I will say: the benefit of heuristics is that they acknowledge (and are indeed dependent on) the presence of context. Unlike a “hard” theory, which must apply to all cases equally and fails the moment a single counter-example is found, a “soft” heuristic is triggered only when the conditions are right: we do not use our “judge popular songs” heuristic when staring at a dinner menu.
It is precisely this contextual awareness that allows heuristics to evade the problems of naive probabilistic world-modelling, which leads to such inductive conclusions as the Turkey Illusion. This means that we avoid the pitfalls of treating spaghetti like a Taylor Swift song, and it also means (slightly more seriously) that we do not treat discussions with our parents like bargaining games to extract maximum expected value. Engineers and physicists employ Newton’s laws of motion not because they are universal laws, but because they are useful heuristics about how things move in our daily lives (i.e. when they are not moving at near light speed). Heuristics are what Chris Haufe called “techniques” in the last section: what we worry about is not their truthfulness, but their usefulness.
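To make the “useful heuristic” point concrete, here is a toy sanity check of my own (the fastball numbers are mine, not anything from the post): the relativistic correction to a baseball’s kinetic energy is utterly negligible at everyday speeds, which is exactly why dropping relativity is the right move almost all of the time.

```python
# Newtonian vs relativistic kinetic energy (illustrative sketch only).
c = 2.998e8  # speed of light, m/s

def newtonian_ke(m, v):
    return 0.5 * m * v**2

def relativistic_ke(m, v):
    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
    return (gamma - 1.0) * m * c**2

mass = 0.145  # a baseball, kg
for label, v in [("40 m/s fastball", 40.0), ("0.9c baseball", 0.9 * c)]:
    n, r = newtonian_ke(mass, v), relativistic_ke(mass, v)
    print(f"{label}: Newtonian {n:.3e} J vs relativistic {r:.3e} J (ratio {r/n:.3f})")
```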
However, I disagree in that I don’t think we’re really moving towards some endpoint where “the underlying reality will end up agreeing with this model in many places while substantially improving our understanding in many others”. This is both because of the chaotic nature of the universe (which I strongly believe puts an upper bound on how well we can model systems without just doing atom-by-atom simulation to arbitrary precision) and because that’s not how physics works in practice today. We have a pretty strong model of how macroscale physics works (General Relativity), but we willingly “drop it” for less accurate heuristics like Newtonian mechanics when it’s more convenient/useful. Similarly, even if we understand the fundamentals of neuroscience completely, we may “drop it” for more heuristics-driven approaches that are less absolutely accurate.
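To illustrate what I mean about chaos bounding our models, here is a toy sketch of my own (using the standard logistic map, not anything from this discussion): two states that start one part in ten billion apart become completely decorrelated within a few dozen iterations, so finite measurement precision caps how far ahead any model can predict, no matter how correct its laws are.

```python
# Sensitive dependence on initial conditions in the logistic map (illustration only).
r = 4.0                      # parameter value in the fully chaotic regime
x, y = 0.2, 0.2 + 1e-10      # two almost-identical initial conditions

for step in range(51):
    if step % 10 == 0:
        print(f"step {step:2d}:  x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")
    x = r * x * (1 - x)
    y = r * y * (1 - y)
```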
Because of this, I maintain my questioning of a general epistemic (and the attached instrumental) project for “rational living” etc. It seems to me that a better model of how we deal with things is like collecting tools for a toolbox, swapping them out as better ones come along, rather than moving towards some ideal perfect system of thinking. Perhaps that too is a form of rationalism, but at that point it’s a pretty loose thing and most life philosophies can be called rationalisms of a sort...
(Note: On the other hand, it seems pretty clearly true that better heuristics are linked to better understandings of the world, however they arise, so I remain strongly in support of the scientific community and the scientific endeavour. Maybe this is a self-contradiction!)
even if we understand the fundamentals of neuroscience completely, we may “drop it” for more heuristics-driven approaches that are less absolutely accurate.
Because of this, I maintain my questioning of a general epistemic (and the attached instrumental) project for “rational living” etc. It seems to me that a better model of how we deal with things is like collecting tools for a toolbox, swapping them out as better ones come along, rather than moving towards some ideal perfect system of thinking.
Relatedly, in case you haven’t seen it, I’ll point you to an essay by Eliezer on this distinction: Toolbox-thinking and Law-thinking.
It’s definitely something I hadn’t read before, so thank you. I would say of that article (on a skim) that it has clarified my thinking somewhat. I now question the law/toolbox dichotomy, since it seems to me that usefulness and accuracy-to-perceived-reality are in fact two different axes. Thus you could imagine:
A useful-and-inaccurate belief (e.g. old wives’ tales such as “red sky in morning, sailors take warning”, or herbal remedies that have medical properties but not for the reasons the “theory” dictates)
A not-useful-but-accurate belief (when I pitch this baseball, its velocity depends on the space-time distortion created by Earth’s gravity well)
A not-useful-and-not-accurate belief (bloodletting as a medical “treatment”)
And finally a useful-and-accurate belief (when I set up GPS satellites I should take into account time dilation)
And, of course, all of these are context dependent (sometimes you may be thinking about baseballs going at near light speed)! I guess then my position is refined into: “category 4 is great if we can get it, but for most cases category 1 is probably easier/better”, which seems neither pure toolbox nor pure law. (A rough sketch of the GPS numbers behind category 4 is below.)
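Since the GPS case is the one concrete number here, a back-of-the-envelope sketch of my own (standard textbook constants, nothing from the thread): a GPS satellite clock runs slow because of its orbital speed and fast because it sits higher in Earth’s gravity well, and the net drift works out to roughly +38 microseconds per day, far too much to ignore for metre-level positioning.

```python
# Relativistic clock drift for a GPS satellite (first-order estimate, illustration only).
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24               # mass of Earth, kg
c = 2.998e8                # speed of light, m/s
R_earth = 6.371e6          # Earth's radius, m
r_sat = R_earth + 2.02e7   # GPS orbital radius (~20,200 km altitude), m
day = 86400.0              # seconds per day

v = (G * M / r_sat) ** 0.5                              # circular orbital speed, ~3.9 km/s
sr = -(v**2 / (2 * c**2)) * day                         # velocity time dilation: clock runs slow
gr = (G * M / c**2) * (1 / R_earth - 1 / r_sat) * day   # gravitational effect: clock runs fast

print(f"special relativity: {sr * 1e6:+.1f} microseconds/day")
print(f"general relativity: {gr * 1e6:+.1f} microseconds/day")
print(f"net drift:          {(sr + gr) * 1e6:+.1f} microseconds/day")
```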
I’m not sure I understand how “red sky in morning, sailors take warning” can be both inaccurate and useful. Surely a heuristic for when to prepare for bad weather is useful only insofar as it is accurate?
Because it comes bundled with an inaccurate causal story.
For example, if I tell you not to lie because god is watching you and will send you to hell, this is somewhat useful because you’ll become more trustworthy if you believe it, but the claim about god and hell is false.
Maybe I am missing some previous rationalist discourse about the red sky saying. I remember reading it in books as a child, and do not know (except that it is listed here as a useful heuristic) whether it is actually true, or what the bundled incorrect causal story is. I have always interpreted it as “a red sunrise is correlated with a higher chance of storms at sea.” That claim does not entail any particular causal mechanism, and it still seems to me that it must be either accurate and therefore useful, or inaccurate and therefore not useful, but it’s hard to imagine how it could be inaccurate and useful.