EDIT: upon reflection the first thing I should do is probably to ask you for a bunch of the best examples of the thing you’re talking about throughout history. I.e. insofar as the world is better than it could be (or worse than it could be) at what points did careful philosophical reasoning (or the lack of it) make the biggest difference?
Original comment:
The term “careful thinking” here seems to be doing a lot of work, and I’m worried that there’s a kind of motte and bailey going on. In your earlier comment you describe it as “analytical philosophy, or more broadly careful/skeptical philosophy”. But I think we agree that most academic analytic philosophy is bad, and often worse than laypeople’s intuitive priors (in part due to strong selection effects on who enters the field—most philosophers of religion believe in God, most philosophers of aesthetics believe in the objectivity of aesthetics, etc.).
So then we can fall back on LessWrong as an example of careful thinking. But as we discussed above, even the leading figure on LessWrong was insufficiently careful even about the main focus of his work for it to be robustly valuable.
So I basically get the sense that the role of careful thinking in your worldview is something like “the thing that I, Wei Dai, ascribe my success to”. And I do agree that you’ve been very successful in a bunch of intellectual endeavours. But I expect that your “secret sauce” is a confluence of a bunch of factors (including IQ, emotional temperament, background knowledge, etc) only one of which was “being in a community that prioritized careful thinking”. And then I also think you’re missing a bunch of other secret sauces that would make your impact on the world better (like more ability to export your ideas to other people).
In other words, the bailey seems to be “careful thinking is the thing we should prioritize in order to make the world better”, and the motte is “I, Wei Dai, seem to be doing something good, even if basically everyone else is falling into the valley of bad rationality”.
One reason I’m personally pushing back on this, btw, is that my own self-narrative for why I’m able to be intellectually productive in significant part relies on me being less intellectually careful than other people—so that I’m willing to throw out a bunch of ideas that are half-formed and non-rigorous, iterate, and eventually get to the better ones. Similarly, a lot of the value that the wider blogosphere has created comes from people being less careful than existing academic norms (including Eliezer and Scott Alexander, whose best works are often quite polemic).
In short: I totally think we want more people coming up with good ideas, and that this is a big bottleneck. But there are many different directions in which we should tug people in order to make them more intellectually productive. Many academics should be less careful. Many people on LessWrong should be more careful. Some scientists should be less empirical, others should be more empirical; some less mathematically rigorous, others more mathematically rigorous. Others should try to live in countries that are less repressive of new potentially-crazy ideas (hence politics being important). And then, of course, others should be figuring out how to actually get good ideas implemented.
Meanwhile, Eliezer and Sam and Elon should have had less of a burning desire to found an AGI lab. I agree that this can be described as “wanting to be the hero who saves the world”, but this seems to function as a curiosity stopper for you. When I talk about emotional health a lot of what I mean is finding ways to become less status-oriented (or, in your own words, “not being distracted/influenced by competing motivations”). I think of extremely strong motivations to change the world (as these outlier figures have) as typically driven by some kind of core emotional dysregulation. And specifically I think of fear-based motivation as the underlying phenomenon which implements status-seeking and many other behaviors which are harmful when taken too far. (This is not an attempt to replace evo-psych, btw—it’s an account of the implementation mechanisms that evolution used to get us to do the things it wanted, which are now sometimes maladaptive in our current environment.) I write about a bunch of these models in my Replacing Fear sequence.
When I talk about emotional health a lot of what I mean is finding ways to become less status-oriented (or, in your own words, “not being distracted/influenced by competing motivations”).
To clarify this as well, when I said (or implied) that Eliezer was “distracted/influenced by competing motivations” I didn’t mean that he was too status-oriented (I think I’m probably just as status-oriented as him), but rather that he wasn’t just playing the status game which rewards careful philosophical reasoning, but also a game that rewards being heroic and saving (or appearing/attempting to save) the world.
I’ve now read/skimmed your Replacing Fear sequence, but I’m pretty skeptical that becoming less status-oriented is both possible and a good idea. It seems like the only example you gave in the sequence is yourself, and you didn’t really talk about whether/how you became less status-oriented? (E.g., can this be observed externally?) And making a lot of people care less about status could have unintended negative consequences, since concern about status seems to be a major pillar of how human morality currently works and how our society is held together.
upon reflection the first thing I should do is probably to ask you for a bunch of the best examples of the thing you’re talking about throughout history. I.e. insofar as the world is better than it could be (or worse than it could be) at what points did careful philosophical reasoning (or the lack of it) make the biggest difference?
World worse than it could be:
Social Darwinism
various revolutions driven by flawed ideologies, e.g., Sun Yat-sen’s attempt to switch China from a monarchy to a democratic republic overnight with virtually no cultural/educational foundation or preparation, leading to governance failures and the later communist takeover (see below for a more detailed explanation of this)
AI labs trying to save the world by racing with each other
World better than it could be:
invention/propagation of the concept of the naturalistic fallacy, tempering a lot of bad moral philosophies
moral/normative uncertainty and complexity of value being fairly well known, including among AI researchers, such that we rarely see proposals to imbue AI with the one true morality nowadays
<details>
<summary>The Enlightenment’s Flawed Reasoning and its Negative Consequences (written by Gemini 2.5 Pro under my direction)</summary>
While often lauded, the Enlightenment shouldn’t automatically be classified as a triumph of “careful philosophical reasoning,” particularly concerning its foundational concept of “natural rights.” The core argument against its “carefulness” rests on several points:
Philosophically “Hand-Wavy” Concept of Natural Rights: The idea that rights are “natural,” “self-evident,” or inherent in a “state of nature” lacks rigorous philosophical grounding. Attempts to justify them relied on vague appeals to God, an ill-defined “Nature,” or intuition, rather than robust, universally compelling reasoning. This framing avoids the hard work of justifying why certain entitlements should exist and be protected, famously leading critics like Bentham to dismiss natural rights as “nonsense upon stilts.”
Superficial Understanding Leading to Flawed Implementation: This lack of careful philosophical grounding wasn’t just an academic issue. It fostered a potentially superficial understanding of what rights are and what is required to make them real. Instead of seeing rights as complex, practical social and political achievements that require deep institutional infrastructure (rule of law, independent courts, enforcement mechanisms) and specific cultural norms (tolerance, civic virtue, respect for process), the “natural rights” framing could suggest they merely need to be declared or recognized to exist.
Case Study: China’s Premature Turn to Democracy: The negative consequences of this superficial understanding can be illustrated by the attempt to rapidly transition China from monarchy to a democratic republic in the early 20th century.
Influenced by Enlightenment ideals, reformers and revolutionaries like Sun Yat-sen adopted the forms of Western republicanism and rights-based governance.
However, the prevailing ideology, arguably built on this less-than-careful philosophy, underestimated the immense practical difficulty and the necessary prerequisites for such a system to function, especially in China’s context.
If Chinese intellectuals and leaders had instead operated from a more careful, practical philosophical understanding – viewing rights not as “natural” but as outcomes needing to be carefully constructed and secured through institutions and cultural development – they might have pursued different strategies.
Specifically, they might have favored gradualism, supporting constitutional reforms under the weakening Qing dynasty or working with reform-minded officials and strongmen like Yuan Shikai to build the necessary political and cultural infrastructure over time. This could have involved strengthening proto-parliamentary bodies, legal systems, and civic education incrementally.
Instead, the revolutionary fervor, fueled in part by the appealing but ultimately less “careful” ideology of inherent rights and immediate republicanism, pushed for a radical break. This premature adoption of democratic forms without the functional substance contributed significantly to the collapse of central authority, the chaos of the Warlord Era, and ultimately created conditions ripe for the rise of the Communist Party, leading the country down a very different and tragic path.
In Conclusion: This perspective argues that the Enlightenment, despite its positive contributions, contained significant philosophical weaknesses, particularly in its conception of rights. This lack of “carefulness” wasn’t benign; it fostered an incomplete understanding that, when adopted by influential actors facing complex political realities like those in early 20th-century China, contributed to disastrous strategic choices and ultimately made the world worse than it might have been had a more pragmatically grounded philosophy prevailed. It underscores how the quality and depth of philosophical reasoning can have profound real-world consequences.
</details>
So I basically get the sense that the role of careful thinking in your worldview is something like “the thing that I, Wei Dai, ascribe my success to”. And I do agree that you’ve been very successful in a bunch of intellectual endeavours. But I expect that your “secret sauce” is a confluence of a bunch of factors (including IQ, emotional temperament, background knowledge, etc) only one of which was “being in a community that prioritized careful thinking”.
This seems fair, and I guess from this perspective my response is that I’m not sure how to intervene on the other factors (aside from enhancing human IQ, which I do support). It seems like your view is that emotional temperament is also a good place to intervene? If so, perhaps I should read your posts with this in mind. (I previously didn’t see how the Replacing Fear sequence was relevant to my concerns, and mostly skipped it.)
And then I also think you’re missing a bunch of other secret sauces that would make your impact on the world better (like more ability to export your ideas to other people).
I’m actually reluctant to export my ideas to more people, especially those who don’t care as much about careful reasoning (which unfortunately is almost everyone), as I don’t want to be responsible for people misusing my ideas, e.g., overconfidently putting them into practice or extending them in wrong directions.
However, I’m trying to practice some skills related to exporting ideas (such as talking to people in real time and participating on X) in case it does seem to be a good idea one day. I’d be interested to hear more about what other secret sauces related to this I might be missing. (I guess public speaking is another one, but the cost of practicing that one is too high for me.)
One reason I’m personally pushing back on this, btw, is that my own self-narrative for why I’m able to be intellectually productive in significant part relies on me being less intellectually careful than other people—so that I’m willing to throw out a bunch of ideas that are half-formed and non-rigorous, iterate, and eventually get to the better ones.
To be clear, I think this is totally fine, as long as you take care not to be (or appear) too confident about these half-formed ideas, and take precautions against other people taking your ideas more seriously than they should (such as by monitoring subsequent discussions and weighing in against other people’s over-enthusiasm). I think “careful thinking” can and should be a social activity, which would necessitate communicating half-formed ideas during the collaborative process. I’ve done this myself plenty of times, such as in my initial UDT post, which was very informal and failed to anticipate many subsequently discovered problems, so I’m rather surprised that you think I would be against this.