Thanks for this clear explanation of conceptual analysis. I’ve been wanting to ask some questions about this line of thought:
Where do semantic intuitions come from?
What should we do when different people have different such intuitions? For example, you must know that Newcomb’s problem is famously divisive, with roughly half of philosophers preferring one-boxing and half preferring two-boxing. Similarly for trolley thought experiments, intuitions about the nature of morality (metaethics), etc.
How do we make sure that AI has the right intuitions? Maybe in some cases we can just have it learn from humans, but what about:
Cases where humans disagree.
Cases where all/most humans are wrong. (In other words, can we build AIs that have better intuitions than humans?) Or is that not a thing in conceptual analysis, i.e., semantic intuitions can’t be wrong?
Completely novel philosophical questions or situations where AI can’t learn from humans (because humans don’t have intuitions about them either, or the AI has to make time-sensitive decisions and humans are too slow).
I think concepts are probably similar to what artificial feedforward networks implement when they recognize objects. So a NN that recognizes chairs would implement the concept associated with the term “chair”. Such networks just output a value (yes/no, or something in between) when given certain inputs, e.g. visual ones. Otherwise they are a black box; there is no easy way to get a definition of “chair” out of them, even if they correctly identify all and only chairs. And these “yes” or “no” values, produced when the network is presented with specific examples as input, seem to be just what we receive from semantic intuitions. I know a chair when I see it.
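To make that picture concrete, here is a minimal toy sketch (purely illustrative: the architecture, names, and untrained random weights are my assumptions, not a claim about how concepts are actually implemented). A “concept” is modelled as a small feedforward network that maps an input feature vector to a graded yes/no verdict, without ever exposing a definition:

```python
import numpy as np

rng = np.random.default_rng(0)

class ConceptNet:
    """Toy 'chair' concept: feature vector in, graded yes/no verdict out."""

    def __init__(self, n_features=64, n_hidden=32):
        # Untrained random weights; only the interface matters for the analogy.
        self.W1 = rng.normal(0.0, 0.1, (n_features, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, 1))

    def intuition(self, x):
        # Returns a value in (0, 1): near 1 reads as "yes, a chair",
        # near 0 as "no", values around 0.5 as an uncertain verdict.
        h = np.tanh(x @ self.W1)          # hidden layer
        z = (h @ self.W2)[0]              # single output score
        return 1.0 / (1.0 + np.exp(-z))   # squash to a graded verdict

chair = ConceptNet()
print(chair.intuition(rng.normal(size=64)))  # a verdict, not a definition
```

With random weights the verdicts are of course arbitrary; the point is only the interface: specific examples go in, a graded verdict comes out, and whatever plays the role of a “definition” stays buried in the weights.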
Now for the practice of philosophy: it is clear that we are able to apply concepts not only to real (e.g. sensory) data, but also to thought experiments, i.e. to hypothetical or counterfactual, in any case simulated, situations. It is not clear how this ability works in the brain, but we do have it.
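Under the same toy framing (and only as an illustration, since how the brain actually does this is, as said, unclear), applying a concept to a simulated situation would just mean feeding the same network an internally generated input instead of a sensory one. The `imagine` function below is a purely hypothetical stand-in:

```python
def imagine(seed):
    # Hypothetical stand-in for whatever process turns a thought experiment
    # into the same kind of feature vector that sensory input would produce.
    return np.random.default_rng(seed).normal(size=64)

# Same concept, same machinery, applied to an imagined rather than seen input.
print(chair.intuition(imagine(42)))
```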
When people have different intuitions in thought experiments, this could be due to several reasons:
One possibility is that the term in question is simply ambiguous. Does a tree falling in the forest make a sound when nobody is there? That presumably depends on the ambiguity of “sound”: The tree produces a sound wave, but no conscious sound experience. In such cases there is no real disagreement, just two concepts for one term.
Another possibility is that the term in question is vague. Do traffic lights have yellow or orange lights? Maybe “disagreements” here are just due to slightly different concept boundaries for different individuals, in which case there is no significant disagreement.
The last possibility is that the concepts in question really are approximately the same, and neither ambiguity nor vagueness is the issue. Those are typically the controversial cases; they are often called paradoxes. My guess is that they are caused by some hidden complexity or ambiguity in the thought experiment or problem statement (rather than an ambiguity in a central term) which pulls semantic intuitions in different directions. A paradox may be solved when the reasons for those contradicting intuitions are uncovered.
I actually think it is fairly rare, in a paradox, for some people to simply have completely different intuitions. Most people can see both intuitions and are puzzled, since the two (seem to) contradict each other.
In his original paper on Newcomb’s problem, Robert Nozick does, I think, a very good job of describing both intuitions such that each seems plausible. An example of what I imagine a solution could look like: the two-boxer answer is the right response to the question “What is the most useful decision in the given situation?”, while the one-boxer answer is the right response to the question “In the given situation, what is the decision recommended by the most useful general decision-making algorithm an agent could have?” That would mean the intuitions apply to slightly different questions, even though the terms in question are not themselves ambiguous. The disagreement was semantic only insofar as the problem is interpreted differently. (This is just an example of how one could, perhaps, explain the disagreement in this paradox consistently with the semantic theory, not a fleshed-out proposal.)
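As a toy illustration of the difference between those two questions (the 0.99 predictor accuracy and the standard $1,000,000 / $1,000 payoffs are assumptions for the sketch, not part of Nozick’s argument):

```python
ACCURACY = 0.99  # assumed reliability of the predictor

def payoff(action, predicted_one_boxing):
    # The opaque box contains $1,000,000 iff the predictor expected one-boxing.
    opaque = 1_000_000 if predicted_one_boxing else 0
    transparent = 1_000
    return opaque if action == "one-box" else opaque + transparent

# Question 1: the most useful decision in the given situation.
# The prediction is already fixed, and two-boxing dominates either way.
for predicted in (True, False):
    print(predicted, payoff("one-box", predicted), payoff("two-box", predicted))
# True  -> 1000000 vs 1001000
# False -> 0       vs 1000

# Question 2: the decision of the most useful general algorithm.
# Here the prediction tracks the algorithm, so compare expected payoffs per policy.
ev_one_box = ACCURACY * payoff("one-box", True) + (1 - ACCURACY) * payoff("one-box", False)
ev_two_box = ACCURACY * payoff("two-box", False) + (1 - ACCURACY) * payoff("two-box", True)
print(ev_one_box, ev_two_box)  # ≈ 990000 vs ≈ 11000
```

Read as a question about the act with the prediction held fixed, two-boxing dominates; read as a question about which general decision-making policy does best against the predictor, one-boxing comes out ahead.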
Ethics and so on seem similar. Generally, if a thought experiment produces very different verdicts for different people, the problem in the thought experiment may not be as clear as it seems. Maybe the problem needs clarification, or a different, less unclear thought experiment altogether.
I actually do think that semantic intuitions are infallible when they are certain. For example, if I imagine a prototypical (black) raven and mentally make it grey, I would still call it a raven. My semantic intuition here represents just a disposition to use the term associated with the concept. If someone then convinces me to call only black birds ravens, that wouldn’t be a counterexample to infallibility; it would just be me using a different concept than before for the same term. In paradoxical cases the intuitions are typically far less than certain, and that reflects their being provisional.
For AI to do philosophy, according to the conceptual analysis view, it needs some ability to do thought experiments, to do suppositional reasoning, and to apply its usual concepts to these virtual situations. It also needs some minimal amount of “creativity” to come up with provisional definitions or axiomatizations, and with specific thought experiments. Overall, I don’t think AI would need to learn to do philosophy from humans. Either it can do it by itself, possibly at a superhuman level, because it is general enough to have the necessary base abilities, or it can’t do it much at all.