I think the most important thing about a rationality training service is operationalizing what is meant by rationality.
What exact services would it provide? Would students end up with beliefs that match reality better? Be less prone to cognitive biases? Tend to make decisions that promote greater utility (for themselves or others)? And how would you test any of this? Martial arts dojos tend to (putting it crudely) make their students better at hitting things than they were before; that's a lot easier to measure objectively than making students better at thinking than they were before.
I personally would not pay for a rationality training service unless it provided clear, non-anecdotal evidence that the average participant received some benefit. I'd be particularly concerned about whether the service actually taught people to think more clearly, or simply inculcated the views of the people running it.
Instrumental rationality is the focus we have in mind: doing the things that most enhance your personal utility. Avoiding cognitive biases and having beliefs that match reality better are means to better instrumental rationality, not the end. Things I think would fall under instrumental rationality include making better decisions (the ones important enough to merit some analysis), determining which habits would be good to acquire or discard, and overcoming akrasia. I think we would have to start highly focused on one of these areas and a specific target market, and branch out over time.
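As a toy illustration of what "analyzing an important decision" could look like (not anything we've built; the options, probabilities, and utilities below are entirely made up), here's the expected-utility version of the idea:

```python
# Toy decision analysis: compare invented options by expected utility.
# Every number here is a made-up assumption, purely for illustration.

options = {
    "take new job": [(0.6, 10), (0.4, -5)],  # (probability, utility) pairs
    "stay put": [(1.0, 2)],
}

def expected_utility(outcomes):
    """Sum of probability-weighted utilities for one option."""
    return sum(p * u for p, u in outcomes)

for name, outcomes in options.items():
    print(f"{name}: EU = {expected_utility(outcomes):+.1f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print("Pick:", best)
```

The hard part in real life isn't the arithmetic, of course; it's coming up with honest probabilities and utilities, which is where the debiasing side comes back in.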
As to how to test the benefit of the training… I've put that on my list of questions to consider. I don't know the answer right now. But anything that has an observable effect of some sort will be measurable in some fashion.
BTW, just discussing things on LW makes me a more careful thinker. I originally wrote, "As to how to demonstrate the benefit of the training...", and then I realized the bias in the word "demonstrate": it presupposes a particular conclusion in advance!
To a certain degree one could test instrumental rationality indirectly. Perhaps have trainees set a goal they haven't made much progress on (dieting? writing a novel? reducing existential risk?) and see whether instrumental rationality training leads to more progress on it. Or give people happiness tests before the training and a year after completing it (i.e., once the hedonic treadmill has had time to work). Admittedly, these indirect methods are very prone to confounding variables, but averaged over a large enough sample, a real effect should show up as a trend.
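To make the sample-size point concrete, here's a minimal simulation sketch (all the numbers are invented assumptions): a small true training effect buried in large person-to-person variation is invisible with twenty people per group, but shows up reliably with thousands:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented numbers: a small true benefit from training, swamped by
# individual variation (confounders, life events, measurement noise).
TRUE_EFFECT = 0.2  # average extra goal progress attributable to training
NOISE_SD = 1.0     # spread of everything else that affects progress

def run_study(n_per_group):
    """Simulate one randomized study: trained group vs. untrained control."""
    control = rng.normal(0.0, NOISE_SD, n_per_group)
    trained = rng.normal(TRUE_EFFECT, NOISE_SD, n_per_group)
    t_stat, p_value = stats.ttest_ind(trained, control)
    return trained.mean() - control.mean(), p_value

for n in (20, 200, 2000):
    diff, p = run_study(n)
    print(f"n per group = {n:5d}: observed effect = {diff:+.3f}, p = {p:.3f}")
```

The control group is doing the real work here: without one, anything that happened to trainees over the year (the economy, the seasons, regression to the mean) would get credited to the training.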
I presume that “I realized this goal was irrational and switched to a different goal that would better achieve my values” would also be a victory for instrumental rationality...
Something to think about if you have a goal of losing weight: how do you decide whether a goal makes sense in the first place?
Interesting article!