The way I see this, among the problems once considered philosophical, there are some subsets that turned out to be much easier than others, and which are no longer considered part of philosophy. These are generally problems where a proposed solution can be straightforwardly verified, for example by checking a mathematical proof, or through experimental testing.
Given that the philosophical problems involved in designing FAI do not seem to fall into these subsets, it doesn’t obviously make sense to include “problems once considered philosophical” in the reference class for the purposes I described in the OP, but maybe I should give this some more thought. To be clear, are you actually making this suggestion?
It seems to me that we can’t — in the general case — tell in advance which problems will turn out to be easier and which harder. If it had turned out that the brain wasn’t the engine of reasoning, but merely a conduit for the soul, then cognitive science would be even harder than it actually is.