Similarly, it might be true that while there is a great mass of irrationality out there, cognitive labor, like any other labor, can be specialized, and so focusing your rationality training on people who specialize in thinking makes sense, just as focusing your movement training on people who specialize in movement makes sense. (Here I’m including speaking as movement, for reasons that are anatomically obvious.)
This would imply that CFAR should be pitching its workshops to academics and government policymakers. Not to be a dick, but the latest local-mobile-social app-kerjigger is not intensive cognitive labor with a high impact on the world. Actual scientific research and public policy-making are (or, at least, scientific research is fairly intensive cognitive labor… I wouldn’t necessarily say it has a high mean impact on any per-unit basis).
Why? It seems to me that training people to think well is better, because if they end up disagreeing, that gives you valuable information to update on.
I would hope so! But what information indicates CFAR does this?
But supposing your model is correct—that a broad rationality education would do the most good—I seem to recall hearing about an undergraduate-level rationality curriculum being developed by Keith Stanovich, a CFAR advisor, and I suspect Anna or others may know more details. Once we’ve got an undergraduate curriculum being taught, that should teach us enough to develop a high-school-level curriculum, and so on down to songs that can be sung in kindergarten.
That’s good, but I worry that it doesn’t go far enough. The issue is not that we’re failing to teach probability theory to kindergartners—they don’t need it and don’t want it. The issue is that our society allows people to walk around thinking that there isn’t actually an external world to which their actions will be held accountable at all, and that subjective feeling both governs reality and normatively dictates correct actions.
To make an offensive political quip: there is the assertion-based community and the reality-based community; too many people belong to the former and not nearly enough to the latter. The biggest impact we can have on “raising the sanity waterline” is to move people from the group who believe in a Fideist Theory of Truth (“Things are true by virtue of how I feel about them”) to the group who believe in the Correspondence Theory of Truth (“Things are true when they match the world outside my head!”), which in turn is what gets people to listen to educated domain experts at all.
To give a flagrantly stupid example, we really, really, really don’t want society’s way of dealing with the Friendly AI problem determined by people who believe that AIs have souls and would never harm anyone because they don’t have original sin. Giving Silicon Valley executives effectiveness workshops will not avert this problem, while teaching the broad public the very basic worldview that the universe is lawful, rather than consciously optimizing for recognizably humanoid goals, is likely to help with it.
This would imply that CFAR should be pitching its workshops to academics and government policymakers.
My understanding is that CFAR is attended by both present and likely future academics; I don’t know about government policymakers. (I’ve met people on national advisory boards from at least two countries at CFAR workshops, but I don’t pretend to know how much influence they have on those boards, or how much influence those boards have on actual policy.)
Not to be a dick, but the latest local-mobile-social app-kerjigger is not intensive cognitive labor with a high impact on the world.
At the time of writing this comment, there are 14 startups listed in the post. How many of them would you consider local-mobile-social apps? (This seems to be an example of “not to be X” signifying “I am aware that I am being an X, but would like to avoid paying the relevant penalty.”)
I would hope so! But what information indicates CFAR does this?
I have always gotten the impression from them that they want to be as cause-agnostic as is reasonable, but I can’t speak to their probability estimates over time, and thus how they’ve updated.
The biggest impact we can have on “raising the sanity waterline” is to move people from the group who believe in a Fideist Theory of Truth (“Things are true by virtue of how I feel about them”) to the group who believe in the Correspondence Theory of Truth (“Things are true when they match the world outside my head!”), which in turn is what gets people to listen to educated domain experts at all.
Are there people working on a reproducible system to help people make this move? It’s not at all obvious to me that this would be the comparative advantage of the people at CFAR. (Though it seems to me that much of the CFAR material is helping people finish making that transition, or, at least, get further along it.)