If a good rationalist could predict with reasonably high probability what methods good rationalists would use in 50-100 years, wouldn’t said rationalist immediately update to use those methods now, invalidating his own prediction?
Well, one could pick specific issues that one thinks will be better understood. For example, we might have a better understanding of certain cognitive biases, or better tactics for dealing with them. This is similar to how someone in 1955 could have made predictions about space travel even if they couldn’t design a fully functioning spacecraft.
Not if those methods are currently too computationally difficult to run. For instance, I’m currently considering the prediction “In the future, good rationalists will use today’s rational methods of thinking, but they will use them faster and with more automation and computer assistance.”
To give an example, imagine that a person currently posting on Less Wrong was much older and still posting about rationality, and that they had a little helper script that would interject into an argument they were about to make with “Is this part here an appeal to emotion?”
You could retranslate that into the advice “Thoroughly recheck all of your arguments to make sure you aren’t making basic mistakes” and suggest it right now. It’s good advice. I try to do it, but I don’t do it enough, and I still miss things. I think AnnaSalamon pointed out that one thing she noticed from working on that rationality curriculum is that she was doing this more often. So it’s certainly an improvable skill.
But right now (or even if that planned rationality curriculum works brilliantly), a rationalist would still have to reread posts or review thoughts and find those mistakes manually. It seems like this could be automated in the future, at least for some types of basic mistakes; I would not be at all surprised if some mistakes were harder to find than others. So in addition to spell check and grammar check, in the future we might have fallacy check and/or bias check, with the same types of caveats and flaws that those automated checkers have had during their development.
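To make the analogy concrete, here is a minimal sketch in Python of what the crudest possible “fallacy check” pass might look like. Everything in it is hypothetical: the phrase lists and the flag_fallacies helper are invented for illustration, and a real checker would need far more than keyword matching, just as early grammar checkers did.

```python
import re

# Hypothetical, very crude phrase lists -- a real "fallacy check" would need
# natural-language understanding far beyond keyword matching, just as early
# grammar checkers eventually outgrew simple pattern lists.
FALLACY_PATTERNS = {
    "appeal to emotion": [r"\bthink of the children\b", r"\bhow dare\b"],
    "ad hominem": [r"\byou people\b", r"\bonly an idiot\b"],
}

def flag_fallacies(text):
    """Return (fallacy_name, matched_phrase) pairs found in `text`."""
    hits = []
    for name, patterns in FALLACY_PATTERNS.items():
        for pattern in patterns:
            for match in re.finditer(pattern, text, re.IGNORECASE):
                hits.append((name, match.group(0)))
    return hits

draft = "How dare they propose this? Only an idiot would support it."
for name, phrase in flag_fallacies(draft):
    print(f"Possible {name}: {phrase!r}")
```

Even a toy like this makes the caveats obvious: it flags phrasing, not reasoning, so it will miss subtle fallacies and raise false alarms on innocent sentences, which is exactly the spell-check-style flaw profile suggested above.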
Now that I’ve actually laid out the prediction, I do find it compelling. But that might just be because I can’t see any obvious flaws in the passes I made to recheck it, and there is a limited amount of time I have to review it before the idea seems stale, or I want to move on, or I feel like I have checked it enough, haven’t seen anything, and am fairly confident its accuracy would be too difficult to improve.
Edit: Corrected spelling. (After mentioning spell checkers’ caveats and flaws in my post, I have just been reminded of one: they don’t fix usernames.)
Having a few very good rationalists applying “fallacy check” and “bias check” to all their own essays would be wonderful… but just imagine the implications of having many mediocre rationalists regularly applying “fallacy check” and “bias check” to their politicians’ essays and speeches.
I’d love to see what kind of feedback that provides to the politicians’ speechwriters. “Well, sir, we could say that, and it could give us a nice brief popularity boost, but would it be worth the blowback we get once everybody’s talking about how we sent their fallacy-meters off the charts?”
but just imagine the implications of having many mediocre rationalists regularly applying “fallacy check” and “bias check” to their politicians’ essays and speeches.
Their ability to do this without getting mind-killed is very much open to question.