But I just described the only two kinds of subject matter I know about: physical facts and mathematical facts.
Suppose I ask:
What is rationality?
Is UDT the right decision theory?
What is the right philosophy of mathematics?
Am I asking about physical facts or logical/mathematical facts? It seems like I’m asking about a third category of “philosophical facts”.
We could say that the answer to “what is rationality” is whatever my meta-rationality computes, and hence reduce it to a physical+logical fact, but that really doesn’t seem to help at all.
These all sound to me like logical questions where you don’t have conscious access to the premises you’re using, and can only try to figure out the premises by looking at what seem like good or bad conclusions. But with respect to the general question of whether we are talking about (a) the way events are or (b) which conclusions follow from which premises, it sounds like we’re asking about the latter. Other “philosophical” questions (like ‘What’s up with the Born probabilities?’ or ‘How should I compute anthropic probabilities?’) may actually be about (a).
Your answer seemed wrong to me, but it took me a long time to verbalize why. In the end, I think it’s a map/territory confusion.
For comparison, suppose I’m trying to find the shortest way from home to work by visualizing a map of the city. I’m doing a computation in my mind, which can also be viewed as deriving implications from a set of premises. But that computation is about something external; and the answer isn’t just a logical fact about what conclusions follow from certain premises.
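(As a concrete sketch of the kind of computation I mean — the graph, the distances, and the function below are all made up for illustration, nothing from this exchange — here is a shortest-path search. Every step of it is a premise-conclusion derivation, yet its answer is only as good as the premises’ correspondence to the actual city.)

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted graph given as
    {node: [(neighbor, distance), ...]}. The derivation itself is pure
    logic, but the premises (the graph) are meant to describe the city."""
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        dist, node, path = heapq.heappop(frontier)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (dist + edge, neighbor, path + [neighbor]))
    return None

# A toy "city": if these premises misdescribe the real streets, the
# conclusion is still valid logic, but it is wrong *about the city*.
city = {
    "home": [("park", 2), ("bridge", 5)],
    "park": [("bridge", 1), ("work", 7)],
    "bridge": [("work", 2)],
}
print(shortest_path(city, "home", "work"))  # (5, ['home', 'park', 'bridge', 'work'])
```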
When I ask myself “what is rationality?” I think the computation I’m doing in my head is also about something external to me, and it’s not just a logical question where I don’t have conscious access to the premises that I’m using, even though that’s also the case.
So my definition of moral realism would be that when I do the meta-moral computation of asking “what moral premises should I accept?”, that computation is about something that is not just inside my head. I think this is closer to what most people mean by the phrase.
Given the above, I think your meta-ethics is basically a denial of moral realism, but in such a way that it causes more confusion than clarity. Your position, if translated into the “shortest way to work” example, would be as if someone told you that there is no fact of the matter about the shortest way to work because the whole city is just a figment of your imagination, and you replied that there is a fact of the matter about the computation in your mind, and that this is good enough for you to call yourself a realist.
When I ask myself “what is rationality?” I think the computation I’m doing in my head is also about something external to me
Well, if you’re asking about human rationality, then the prudent-way-to-think involves lots of empirical info about the actual flaws in human cognition, and so on. If you’re asking about rationality in the sense of probability theory, then the only reference to the actual that I can discern is about anthropics and possibly prudent priors—things like the Dutch Book Argument are math, which we find compelling because of our values.
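(For reference, a minimal version of the standard Dutch Book argument — textbook material with made-up numbers, not anything quoted from this thread. The point is that the guaranteed loss is a theorem; caring about guaranteed losses is where the values come in.)

```latex
% Standard Dutch Book illustration. Suppose you price bets so that
% P(A) = 0.6 and P(not-A) = 0.6, i.e. you will pay $0.60 for a ticket
% worth $1 if A, and likewise for not-A. A bookie sells you both:
\[
\text{cost} = 0.60 + 0.60 = 1.20,
\qquad
\text{payoff} = 1.00 \quad \text{(exactly one of } A, \neg A \text{ occurs)}
\]
% Guaranteed loss of $0.20, whatever happens. In general:
\[
P(A) + P(\neg A) \neq 1 \;\Longrightarrow\; \text{a sure-loss book exists.}
\]
% That the loss is guaranteed is pure premise-conclusion math; whether
% you mind sure losses is supplied by your values.
```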
If you think that we’re referring to something else—what is it, where is it stored? Is there a stone tablet somewhere on which these things are written, on which I can scrawl graffiti to alter the very fabric of rationality? Probably not—so where are the facts that the discourse is about, in your view?
I think “what is rationality” (and by that I mean ideal rationality) is like “does P=NP”. There is some fact of the matter about it that is independent of what premises we choose to, or happen to, accept. I wish I knew where these facts live, or exactly how it is that we have any ability to determine them, but I don’t. Fortunately, I don’t think that really weakens my argument much.
This is exactly what I refer to as a “logical fact” or “which conclusions follow from which premises”. Wasn’t that clear?
Actually, I guess it could be a bit less clear if you’re not already used to thinking of all math as being about theorems derived from axioms which are premise-conclusion links, i.e., if the axioms are true of a model then the theorem is true of that model. Which is, I think, conventional in mathematics, but I suppose it could be less obvious.
In the case of P!=NP, you’ll still need some axioms to prove it, and the axioms will identify the subject matter—they will let you talk about computations and running time, just as the Peano axioms identify the subject matter of the integers. It’s not that you can make 2 + 2 = 5 by believing differently about the same subject matter, but that different axioms would cause you to be talking about a different subject matter than what we name the “integers”.

Is this starting to sound a little familiar?
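(A minimal worked instance of that premise-conclusion picture, using the standard recursion axioms for addition rather than anything specific to this exchange:)

```latex
% Deriving 2 + 2 = 4 from the usual recursion axioms for addition,
%   x + 0 = x   and   x + S(y) = S(x + y),
% with numerals 2 = SS0 and 4 = SSSS0:
\[
SS0 + SS0 \;=\; S(SS0 + S0) \;=\; SS(SS0 + 0) \;=\; SS(SS0) \;=\; SSSS0.
\]
% Any structure in which the addition axioms hold must also satisfy this
% conclusion. Adopt different axioms and you are talking about a different
% subject matter, not making 2 + 2 = 5 true of the integers.
```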
Actually, I guess it could be a bit less clear if you’re not already used to thinking of all math as being about theorems derived from axioms which are premise-conclusion links
But that’s not all that math is. Suppose we eventually prove that P!=NP. How did we pick the axioms that we used to prove it? (And suppose we pick the wrong axioms. Would that change the fact that P!=NP?) Why are we pretty sure today that P!=NP without having a chain of premise-conclusion links? These are all parts of math; they’re just parts of math that we don’t understand.
ETA: To put it another way, if you ask someone who is working on the P!=NP question what he’s doing, he is not going to answer that he is trying to determine whether a specific set of axioms proves or disproves P!=NP. He’s going to answer that he’s trying to determine whether P!=NP. If those axioms don’t work out, he’ll just pick another set. There is a sense that the problem is about something that is not identified by any specific set of axioms that he happens to hold in his brain, that any set of axioms he does pick is just a map to a territory that’s “out there”. But according to your meta-ethics, there is no “out there” for morality. So why does it deserve to be called realism?
Perhaps more to the point, do you agree that there is a coherent meta-ethical position that does deserve to be called moral realism, which asserts that moral and meta-moral computations are about something outside of individual humans or humanity as a whole (even if we’re not sure how that works)?
I don’t see anything here that is not a mixture of physical facts and logical facts (that is, truths about causal events and truths about premise-conclusion links). Physical computers within our universe may be neatly described by compact axioms. Logic (in my not-uncommon view) deals with semantic implication: what is true in a model given that the axioms are true of it. If you prove P!=NP using axioms that happen to apply to the computers of this universe, then P!=NP holds for them as well, and the axioms will have been picked out to be applicable to real physics—a mixture of physical fact and logical fact. I don’t know where logical facts are stored or what they are, just as I don’t yet know what makes the universe real, although I repose some confidence that the previous two questions are wrong—but so far I’m standing by my view that truths are about causal events, logical implications, or some mix of the two.
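(The standard model-theoretic statement of that semantic-implication view, for reference — textbook material, not quoted from the thread:)

```latex
% Semantic implication: a set of axioms Gamma entails phi just in case
% every structure satisfying Gamma also satisfies phi.
\[
\Gamma \models \varphi
\quad\Longleftrightarrow\quad
\text{for every structure } \mathcal{M}:\;
\mathcal{M} \models \Gamma \;\Rightarrow\; \mathcal{M} \models \varphi.
\]
% So if the axioms Gamma happen to be true of the physical computers of
% this universe, every consequence phi (e.g. P != NP, once proved) holds
% of them too -- the "mixture" of physical and logical fact above.
```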
Axioms are that which mathematicians use to talk about integers instead of something else. You could also take the perspective of trying to talk about groups of two pebbles as they exist in the real world, and wanting your axioms to correspond to their behavior. But when you stop looking at the real world and close your eyes and try to do math, then in order to do math about something, like about the integers, about these abstract objects of thought that you abstracted away from the groups of pebbles, you need axioms that identify the integers in mathspace. And having thus gained a subject of discourse, you can use the axioms to prove theorems that are about integers because the theorems hold wherever the axioms hold. And if those axioms are true of physical reality from the appropriate standpoint, your conclusions will also hold of groups of pebbles.
Perhaps more to the point, do you agree that there is a coherent meta-ethical position that does deserve to be called moral realism, which asserts that moral and meta-moral computations are about something outside of individual humans or humanity as a whole (even if we’re not sure how that works)?
That depends; is morality a subject matter that we need premises to identify in subjectspace, in order to talk about morality rather than something else, stored in that same mysterious place as 2 + 2 = 4 being true of the integers but needing axioms to talk about the integers in the first place? Or are we talking about transcendent ineffable compelling stuff? The first view is, I think, coherent; I should think so, it’s my own. The second view is not.
I don’t see anything here that is not a mixture of physical facts and logical facts (that is, truths about causal events and truths about premise-conclusion links).
Eliezer, a couple of comments ago I switched my focus from whether there are more than just physical and logical facts to whether “morality” refers to something independent of humanity, like (as I claimed) “rationality”, “integer”, and “P!=NP” do. Sorry if I didn’t make that clear, and I hope I’m not being logically rude here, but the topic is confusing to me and I’m trying different lines of thought. (BTW, what kind of fact is it that there are only two kinds of facts?)
Quoting some background from Wikipedia:
When the Peano axioms were first proposed, Bertrand Russell and others agreed that these axioms implicitly defined what we mean by a “natural number”. Henri Poincaré was more cautious, saying they only defined natural numbers if they were consistent; if there is a proof that starts from just these axioms and derives a contradiction such as 0 = 1, then the axioms are inconsistent, and don’t define anything.
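(For concreteness, the axioms in question, in their usual first-order presentation — standard textbook material, added here for reference:)

```latex
% First-order Peano axioms (induction given as a schema):
\begin{align*}
& \forall x.\; S(x) \neq 0 \\
& \forall x\, y.\; S(x) = S(y) \rightarrow x = y \\
& \forall x.\; x + 0 = x
  \qquad \forall x\, y.\; x + S(y) = S(x + y) \\
& \forall x.\; x \cdot 0 = 0
  \qquad \forall x\, y.\; x \cdot S(y) = (x \cdot y) + x \\
& \bigl[\varphi(0) \wedge \forall x.(\varphi(x) \rightarrow \varphi(S(x)))\bigr]
  \rightarrow \forall x.\,\varphi(x)
  \quad \text{for each formula } \varphi
\end{align*}
% Poincare's worry: if some derivation from these yields 0 = 1, the
% axioms are inconsistent and pick out no subject matter at all.
```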
My question is, how can these questions even arise in our minds unless we already have a notion of “natural number” that is independent of the Peano axioms? There is something about the integers that compels us to think about them, and the compelling force is not a set of axioms stored in our minds or spread virally from one mathematician to another.
Maybe the compelling force is that in the world that we live in, there are objects (like pebbles) whose behaviors can be approximated by the behavior of integers. I (in apparent disagreement with you) think this isn’t the only compelling force (i.e., aliens who live in a world with no discrete objects would still invent integers), but it’s enough to establish that when we talk about integers we’re talking about something at least partly outside of ourselves.
To restate my position, I think it’s unlikely that “morality” refers to anything outside of us, but many people do believe that, and I can’t rule it out conclusively myself (especially given Toby Ord’s recent comments).
These are all parts of math; they’re just parts of math that we don’t understand.

Properly, no: they are not part of math; they are part of Computer Science, i.e., a description of how computations actually happen in the real world. That is the missing piece that determines which axioms to use.