One of the questions this article asks is “How can Rationality and the people who want to learn about it avoid epistemic viciousness?” I feel as though many people take up martial arts out of a desire to learn to defend themselves and feel prepared for violence, and dojos are all too happy to sell that feeling without imparting any real knowledge of self-defense. Even in this page’s comments, the idea that certain schools have more utility because they were tested in a “realistic” environment is bandied about, as if that will help you avoid getting mugged.
I feel that one of the most valuable uses of Rationality is self-optimization, but that isn’t what drew me to LW (a TV Tropes link about the AI-box experiment was). Rationalists can avoid epistemic viciousness by not being afraid to explore both how rationality can improve our lives and where rationality simply doesn’t have enough utility to justify the cost of learning it. We can do better than the martial arts by not selling rationality to people who want it for some low-utility purpose (like winning arguments against theists). Why would the layperson explore rationality? Or, if we want to concentrate on the LW demographic, what do LWers expect to get out of reading Eliezer Yudkowsky’s posts? Though self-optimization is one of the higher-utility benefits of rationality, I’ve stuck around because I’m fascinated by this “save the world” idea, not because I plan to dedicate myself to undergoing a Bayesian Enlightenment.