I’m not saying it isn’t risky. The question is more: what’s the difference between doing philosophy and other intellectual tasks?
Here’s one way to look at it that just occurred to me. In domains with feedback, like science or just dealing with the real world in general, we learn some heuristics. Then we try to apply these heuristics to the stuff of our mind, and sometimes it works, but more often it fails. On this picture, doing good philosophy means having a good set of heuristics from outside of philosophy, and good instincts about when to apply them and when not to. And some luck, in that some heuristics will happen to generalize to the stuff of our mind, while others won’t.
If this picture is right, then running far ahead with philosophy is just inherently risky. The further you step away from heuristics that have been tested in reality, and from their area of applicability, the bigger your errors will be.
Does this make sense?
Do you have any examples that could illustrate your theory?
It doesn’t seem to fit my own experience. I became interested in Bayesian probability, the universal prior, the Tegmark multiverse, and anthropic reasoning during college, and started thinking about decision theory and the ideas that ultimately led to UDT. But what heuristics could I have been applying, learned from what “domains with feedback”?
Maybe I used a heuristic like “computer science is cool, let’s try to apply it to philosophical problems”, but if the heuristics are that coarse-grained, it doesn’t seem like the idea can explain how detailed philosophical reasoning happens, or be used to ensure AI philosophical competence.
Maybe one example is the idea of a Dutch book. It comes originally from real-world situations (sports betting and so on), and then we apply it to rationality in the abstract.
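To make that concrete, here’s a toy sketch (the numbers and the dutch_book_loss function are purely illustrative, not from any particular source): if someone’s credences in A and in not-A sum to more than 1, and they’ll buy any bet at its expected value under those credences, a bookie can sell them tickets on both sides and collect a sure profit.

```python
# A toy sketch of a Dutch book, purely illustrative.
# Assumption: the agent buys a ticket paying $1 if an event occurs
# for a price equal to their credence in that event.

def dutch_book_loss(p_a: float, p_not_a: float) -> float:
    """Guaranteed loss when the agent buys a $1 ticket on A at price p_a
    and a $1 ticket on not-A at price p_not_a, with p_a + p_not_a > 1."""
    cost = p_a + p_not_a  # total paid for both tickets
    payout = 1.0          # exactly one of A / not-A occurs, so exactly one ticket pays $1
    return cost - payout  # positive means a sure loss in every outcome

# Incoherent credences: P(A) + P(not-A) = 0.6 + 0.6 = 1.2 > 1
print(dutch_book_loss(0.6, 0.6))  # ~0.2 lost no matter what happens
```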
Or another example, much older, is how Socrates used analogy. It was one of his favorite tools, I think. When talking about some confusing thing, he’d draw an analogy with something closer to experience. For example, “Is the nature of virtue different for men and for women?”—“Well, the nature of strength isn’t that much different between men and women, and likewise the nature of health, so maybe virtue works the same way.” Obviously this way of reasoning can easily go wrong, but I think it’s also pretty indicative of how people do philosophy.