I agree I’ve been using “metaphilosophical competence” to refer to some combination of both rationality and philosophical competence. I have an implicit intuition that rationality, philosophical competence, and metaphilosophical competence all sort of blur into each other, such that being sufficient in any one of them makes you sufficient in all of them. I agree this is not obvious and probably confusing.
To elaborate: sufficient metaphilosophical competence should imply broad philosophical competence, and since metaphilosophy is a kind of philosophy, sufficient philosophical competence should imply sufficient metaphilosophical competence. Sufficient philosophical competence would allow you to figure out what it means to act rationally, and cause you to act rationally.
That rationality implies philosophical competence seems the least obvious. I suppose I think of philosophical competence as some combination of not being confused by words, and normal scientific competence—that is, given a bunch of data, figuring out which data is noise and which hypotheses fit the non-noisy data. Philosophy is just a special case where the data is our intuitions about what concepts should mean, the hypotheses are criteria/definitions that capture these intuitions, and the datapoints happen to be extremely sparse and noisy. Some examples:
Section 1.1 of the logical induction paper lists a bunch of desiderata (“datapoints”) for what logical uncertainty is. The logical induction criterion is a single criterion (“hypothesis”) that fits a majority of those datapoints.
The Von Neumann–Morgenstern utility theorem starts with a bunch of desiderata (“datapoints”) for rational behavior, and expected utility maximization is a criterion (“hypothesis”) that fits these datapoints (a formal statement is sketched just after these examples).
I think both utilitarianism and deontology are moral theories (“hypotheses”) that fit a good chunk of our moral intuitions (“datapoints”). I also think both leave much to be desired.
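To make the second example concrete, here is the standard textbook form of the VNM result (my own paraphrase, not something from the discussion above): if a preference relation $\succeq$ over lotteries satisfies completeness, transitivity, continuity, and independence (the “datapoints”), then there is a utility function $u$ such that

$$L \succeq M \iff \sum_i p_i\, u(x_i) \ \ge\ \sum_i q_i\, u(x_i),$$

where $L$ and $M$ assign probabilities $p_i$ and $q_i$ to outcomes $x_i$, and $u$ is unique up to positive affine transformation. Expected utility maximization is the “hypothesis” that compresses those four desiderata.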
Philosophical progress seems objective and real like scientific progress—progress is made when a parsimonious new theory fits the data much better. One important way in which philosophical progress differs from scientific progress is that there’s much less consensus on what the data is or whether a theory fits it better, but I think this is mostly a function of most people being extremely philosophically confused, rather than e.g. philosophy being inherently subjective. (The “not being confused by words” component I identified mostly corresponds to the skill of identifying which datapoints we should consider in the first place, which of the datapoints are noise, and what it means for a theory to fit the data.)
Relatedly, I think it is not a coincidence that the Sequences, which are primarily about rationality, also managed to deftly resolve a number of common philosophical confusions (e.g. MWI vs Copenhagen, free will, p-zombies).
I also suspect that a sufficiently rational AGI would simply not get confused by philosophy the way humans do, and that it would feel to it from the inside like a variant of science. For example, it’s hard for me to imagine it tying itself up in knots trying to reason about theology. (I sometimes think about confusing philosophical arguments as adversarial examples for human reasoning...)
Anyway, I agree this was all unclear and non-obvious (and plausibly wrong), and I’m happy to hear any suggestions for better descriptors. I literally went with “rationality” before “metaphilosophical competence”, but people complained that was overloaded and confusing...
I have an implicit intuition that rationality, philosophical competence, and metaphilosophical competence all sort of blur into each other, such that being sufficient in any one of them makes you sufficient in all of them.
I think this is plausible but I’m not very convinced by your arguments. Maybe we can have a discussion about it at a later date. I haven’t been able to come up with a better term for a combination of all three that didn’t sound awkward, so unless someone else has a good suggestion, perhaps you could just add a short explanation at the top of your posts, or wherever you first use the term in a post, something like “by ‘metaphilosophical competence’ I mean to also include philosophical competence and rationality.”
To respond to the substance of your argument that being sufficient in any of rationality, philosophical competence, and metaphilosophical competence makes you sufficient in all of them:
sufficient metaphilosophical competence should imply broad philosophical competence
You could discover an algorithm for doing philosophy (implying great metaphilosophical competence) but not be able to execute it efficiently yourself.
since metaphilosophy is a kind of philosophy, sufficient philosophical competence should imply sufficient metaphilosophical competence
Philosophical competence could be a vector instead of a scalar, but I agree it’s more likely than not that sufficient philosophical competence implies sufficient metaphilosophical competence.
Sufficient philosophical competence would allow you to figure out what it means to act rationally, and cause you to act rationally.
I agree with the first part, but figuring out what rationality is does not imply being motivated to act rationally. (Imagine the Blue-Minimizing Robot, plus a philosophy module connected to a speaker but not to anything else.)
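A toy sketch of what I mean, in Python (purely illustrative; the class and module names are my own invention, not anything from the original post):

```python
class BlueMinimizingRobot:
    """Toy model: the action policy is hard-wired and never consults the
    philosophy module, even though that module can correctly work out
    what the rational action would be."""

    def __init__(self):
        self.speaker_log = []  # the philosophy module's only output channel

    def philosophy_module(self, observation):
        # Can, in principle, figure out what it means to act rationally...
        return f"A rational agent would ignore color and pursue its actual goals given: {observation}"

    def act(self, observation):
        # ...but the actuators are driven only by the hard-coded
        # blue-minimizing reflex; the insight goes straight to the speaker.
        self.speaker_log.append(self.philosophy_module(observation))
        if "blue" in observation:
            return "fire laser at blue object"
        return "do nothing"

robot = BlueMinimizingRobot()
print(robot.act("blue tarp ahead"))  # fires the laser regardless
print(robot.speaker_log[-1])         # the "philosophy" only ever reaches the speaker
```

The point of the sketch: knowing what rationality requires and being wired to act on that knowledge are separate properties of the architecture.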
Philosophy is just a special case where the data is our intuitions about what concepts should mean, the hypotheses are criteria/definitions that capture these intuitions, and the datapoints happen to be extremely sparse and noisy.
But where do those intuitions come from in the first place? Different people have different philosophically relevant intuitions, and having good intuitions seems to be an important part of philosophical competence, but is not implied (or at least not obviously implied) by rationality.
One important way in which philosophical progress differs from scientific progress is that there’s much less consensus on what the data is or whether a theory fits it better, but I think this is mostly a function of most people being extremely philosophically confused, rather than e.g. philosophy being inherently subjective
I would say that it is a function of philosophy being circular: there isn’t a set of foundations that everyone agrees on, so any theory can be challenged by challenging its assumptions. Philosophical questions tend to be precisely the kind of difficult foundational issues that get kicked into philosophy from other disciplines.