By writing a textbook. This is more important than anything.
Speaking as a former teacher: full-time teachers don’t have time for anything, because they are overwhelmed by paperwork and by students’ behavioral problems. (This is rather sad, because in theory they are supposed to keep their knowledge up to date; they just never get a real opportunity to do so.) So if you want a teacher to teach anything, even one who is willing, you have to reduce their time costs as much as possible. By writing a textbook you save them the time needed for research and planning. A perfect solution would be a “plug and play” textbook with lessons and exercises, divided into 45-minute blocks.
Once you have such a textbook, it can be used outside of schools too.
Sounds like a plan. How do we get started?
Possibly related to the Center for Modern Rationality’s ongoing attempt to formulate good rationality exercises. (Good bit of overlap there, I think.)
Yes, this. When the exercises are ready and tested, the next stage is to turn them into book form. Then we need to test the book: which parts of the text are most likely to be misunderstood, new exercises to smooth the learning curve, tests for self-evaluation, etc.
A scary thought: is it really such a good idea to make a rationality textbook? You know, knowing about biases can hurt people. Even if rationality were taught everywhere, that would not necessarily mean a global increase in rationality. People with motivated cognition would use the techniques merely to improve their debating skills.
Actually, if a rationality textbook became popular, I would expect many religious groups to come up with their own versions. Essentially, all you have to do is choose a different prior: one that assigns probability 1 to your sacred teachings; then you can go on and be a good Bayesian.
I don’t think a rationality textbook would make things worse; I just suspect that its positive effects could be easily neutralized. So even if such a textbook helps people who really want to be rational, if we expect it to change society on a larger scale, we may be disappointed.
There is no such thing as probability 1, and if the students are taught to update priors according to emerging evidence, I can’t see that prior lasting very long.
True, there is no way to start with a prior that does not contain probability 1 and then reach probability 1 by proper Bayesian updates. But I am speaking about something else: putting that probability into one’s priors directly, as an act of faith.
If you start with probability 1 and do proper Bayesian updating, you end with probability 1, unless you run into a direct contradiction and get a division-by-zero error. But that will never happen, because the contradiction will never be perfect: nothing can have probability 0 except what you put into your priors. If the prior probability of something is 1, and you get evidence that almost contradicts it, with only an epsilon chance of explaining it by B (whatever horrible thing B is), then proper Bayesian updating will simply get you to believe B.
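To make the arithmetic concrete, here is a toy sketch of that fixed point. The likelihood numbers are made up purely for illustration; the point is that a prior of exactly 1 survives any evidence, because the alternative hypothesis contributes zero to the posterior.

```python
# Toy Bayesian update over two hypotheses: A ("the doctrine is true") vs not-A.
# Illustrative numbers only; nothing here is calibrated to anything real.

def update(prior_a, lik_e_given_a, lik_e_given_not_a):
    """One Bayesian update on evidence E; returns the posterior P(A | E)."""
    joint_a = prior_a * lik_e_given_a
    joint_not_a = (1 - prior_a) * lik_e_given_not_a
    return joint_a / (joint_a + joint_not_a)

# Start certain: P(A) = 1. Each piece of evidence is almost impossible under A
# (epsilon likelihood, via some auxiliary explanation B) and likely otherwise.
p = 1.0
for _ in range(10):
    p = update(p, 1e-9, 0.9)
print(p)  # stays exactly 1.0: (1 * eps) / (1 * eps + 0 * 0.9) == 1
```

With any prior strictly below 1, the same loop would drive the probability toward 0 almost immediately; only the dogmatic prior is immune.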
As an illustration, imagine a Tegmark multiverse. We are supposed to give each universe a prior probability according to Solomonoff induction. But suppose we take only the subset of those universes in which some variant of the given faith is true. This subset is non-empty: there is a possible universe where a humanoid being called Yehovah is part of the laws of physics; it is just an incredibly complex universe, so it has an almost-zero Solomonoff prior. But if you take only the selected subset of universes as your starting point (this is an arbitrary choice, but it is the only one you ever have to make), updating on any evidence will keep you inside this subset, because any evidence can be explained in some very small part of it.
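The subset trick above can be sketched numerically. The universe names and weights below are invented stand-ins for Solomonoff-style priors; the point is that once everything outside the chosen subset is zeroed out and the rest renormalized, no amount of updating can put probability back outside the subset.

```python
# Sketch: restricting a toy prior to a chosen subset of hypotheses.
# All names and weights are made up for illustration.

universes = {
    "simple_physics": 0.7,       # low complexity, high prior weight
    "complex_physics": 0.2999999,
    "faith_universe": 1e-7,      # vanishingly small Solomonoff-style weight
}

def restrict(prior, subset):
    """Zero out every hypothesis outside `subset` and renormalize the rest."""
    total = sum(prior[u] for u in subset)
    return {u: (prior[u] / total if u in subset else 0.0) for u in prior}

restricted = restrict(universes, {"faith_universe"})
print(restricted["faith_universe"])  # 1.0 after renormalization
```

Since every excluded universe now has probability exactly 0, Bayesian updating (which only multiplies and renormalizes) can never resurrect it; the one arbitrary choice was made up front.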
To become rational, you need to be in a state of mind that allows you to develop towards rationality. By a suitable act of motivated cognition you can lock yourself out. Some people think that such an act (though they call it by a different name) is the right thing to do; fortunately, no one is able to do it perfectly.
sighs in relief