Yeah, it seems like a potential avenue of philosophical progress. Do you have ideas for what specific questions could be answered this way and what specific methods might be used?
Frequently, once we become able to do science on a subject that was formerly the province of philosophers, the hypotheses that Science ends up confirming are ones philosophers hadn't even suggested or considered, despite centuries of discussion. (Atomism is admittedly a counterexample, but such counterexamples have become rarer, and the Feynman functional integral over histories is the sort of "fundamental nature of reality" claim for which any philosopher who suggested it before the 20th century would have been asked what he or she had been smoking. "So you're suggesting that every possible and impossible thing happens, equally, but the vast majority of them almost cancel out; that of the things that don't cancel (roughly speaking, all the possible things) we only perceive one; and that reality is actually a tree that branches, but we just get to see a single branch, changing direction seemingly at random. Who ordered that?")
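(For concreteness, here's the standard textbook formula that parenthetical is parodying; this is ordinary quantum mechanics, not anything specific to my argument. The amplitude to go from $x_i$ at $t_i$ to $x_f$ at $t_f$ is a sum over every conceivable path:

$$\langle x_f, t_f \mid x_i, t_i \rangle = \int \mathcal{D}[x(t)]\, e^{\,i S[x(t)]/\hbar}, \qquad S[x] = \int_{t_i}^{t_f} L(x, \dot{x})\, dt$$

Every path contributes with equal magnitude and a phase set by its classical action $S$. Paths far from the classical trajectory have rapidly oscillating phases and nearly cancel against their neighbors, which is the "vast majority of them almost cancel out" part.)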
Which is a rather long-winded way of saying “I have no idea! History suggests it’s going to be something stranger than I can currently imagine.”
Which is why I’m a lot more optimistic about the plan “Have ASI extend Science to find the scientific answer to questions that were formerly philosophical” than about the plan “build ASI philosophers, and hope that (in defiance of the history of Philosophy) they converge to a single answer, rather than enumerating an ever-larger number of possible answers”. For a specific proposal along these lines, see Grounding Value Learning in Evolutionary Psychology: an Alternative Proposal to CEV — basically, I’m suggesting we use the biological rather than philosophical study of ethics for AI alignment purposes. An AGI that can’t do Biology at least as well as us clearly isn’t an AGI.