What do you think the chances are that there is some single procedure that can be used to solve all philosophical problems? That, for example, the procedure our brains are using to try to solve decision theory is essentially the same as the one we’ll use to solve consciousness? (I mean some sort of procedure that we can isolate, not just the human mind as a whole.)
If there isn’t such a single procedure, I just don’t see how we can possibly solve all of the necessary philosophical problems to build an FAI before someone builds an AGI. We are still at the stage where every step forward just lets us see how many more problems there are (see Open Problems Related to Solomonoff Induction, for example), we are making forward steps slowly, and, worse, there’s no good way of verifying that each step really is a step forward and not some erroneous digression.
What do you think the chances are that there is some single procedure that can be used to solve all philosophical problems?
Very low, of course. (Then again, relative to the perspective of nonscientists, there turned out to be a single procedure that could be used to solve all empirical problems.) But in general, problems always look much more complicated than solutions do; the presence of a host of confusions does not indicate that the set of deep truths underlying all the solutions is noncompact.
Do you think it’s reasonable to estimate the amount of philosophical confusion we will face at some given time in the future by taking the number of philosophical confusions we currently face and comparing it to the rate at which we are clearing them up, minus the rate at which new confusions are popping up? If so, how much of your relative optimism is accounted for by your work on meta-ethics? (Recall that we have a disagreement over how much progress that work represents.) Do you think my pessimism would be reasonable if we assume, for the sake of argument, that that work does not actually represent much progress?
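To make the proposed estimate concrete, here is a minimal sketch of the linear extrapolation in Python. The `confusions_remaining` function and all the numbers are hypothetical placeholders chosen for illustration, not measured quantities from the discussion:

```python
# Hypothetical back-of-the-envelope model of the estimate described above.
# All quantities are illustrative placeholders, not measurements.

def confusions_remaining(current: float, clear_rate: float,
                         new_rate: float, years: float) -> float:
    """Linearly extrapolate the number of open confusions:
    current - (clear_rate - new_rate) * years, floored at zero."""
    return max(0.0, current - (clear_rate - new_rate) * years)

# Example: 40 open confusions, cleared at 1.0/year while 0.5/year appear.
# Net progress is 0.5/year, so roughly 80 years to clear the backlog.
print(confusions_remaining(current=40, clear_rate=1.0, new_rate=0.5, years=20))  # 30.0
```

On this toy model, the pessimistic view corresponds to believing that `new_rate` is close to, or larger than, `clear_rate`, so the backlog never shrinks.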
some single procedure that can be used to solve all philosophical problems
This is why I keep mentioning transcendental phenomenology. It is for philosophy what string theory is for physics: a strong candidate for the final answer. It’s epistemologically deeper than natural science or mathematics, which it treats as specialized forms of rational subjective activity. But it’s a difficult subject, which is why I mention it more often than I explain it. To truly teach it, I’d first need to understand, reproduce, and verify all its claims and procedures for myself, which I have not done. But I’ve seen enough to be impressed. Regardless of whether it is the final answer philosophically, I guarantee that mastering its concepts and terminology would take a person philosophically deeper than anything else I could recommend.