Well-written, if wrong :P Thanks!
Reasoning like “well, we believe in an external reality because it seems plausible, and objective moral facts seem plausible, so we should believe in those too” is the sort of thing that sounds better the less you know about the epistemology of external reality. It really is a shame that more philosophers don’t know how Solomonoff induction works. No, it doesn’t get you off Neurath’s boat, but it sure as heck doesn’t look like intuitionism.
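For concreteness, one standard way to write the Solomonoff prior is as a mixture over all programs $p$ for a fixed universal prefix machine $U$, weighted by program length $\ell(p)$:

\[
  M(x) \;=\; \sum_{p \,:\, U(p)\ \text{begins with}\ x} 2^{-\ell(p)}.
\]

The relevant point for this thread is that the only hypotheses that ever receive prior weight are programs, i.e. things expressible in code; nothing in the mixture gets credit for merely “seeming plausible.”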
You really didn’t pass the ideological Turing test in “Classifying anti-realists.” Here, have some reading recommendations: 2-place and 1-place words. Probability is subjectively objective. Math is subjunctively objective. Morality as fixed computation.
Can you cash a belief in moral realism out into disagreements with us about predictions? For example, if we meet aliens, does moral realism make different predictions about their morality than reductionist evolutionary biology? If we build an AI that starts with no moral intuitions, do you expect it to stumble upon the correct moral facts and then accept them, such that if we ran a thousand slightly different copies of the same program, they would all converge?
If realism or quasi-realism work better, for instance in preventing violent disputes about resource allocation, then societies are likely to converge on them. It’s easy to show that realism is desirable, harder to show it is achievable.
It really is a shame that more philosophers don’t know how Solomonoff induction works.
Does it work? SIs can only reject non-natural hypotheses if they can test them, and only test them if they can express them in code. Can they? Note that the programmes in an SI can’t even represent continua/uncountability.
In order to carry out Solomonoff induction, we presumably need mathematics. And it’s very tricky to develop a mathematical realism which doesn’t use an epistemology also permitting moral realism. (What counts as intuitionism is very fraught, but on some understandings, mathematical realism is most plausibly reliant on an “intuitionism”.) See Justin Clarke-Doane’s excellent book Morality and Mathematics for a discussion of this.
The typical mathematical realism I’ve encountered involves brazenly misunderstanding model theory. E.g. “Either PA is consistent or it’s not, but math can’t prove it because of Gödel’s theorem, so there are facts of the matter independent of proof, which must therefore be about real stuff.”
We can do math just fine with the much tamer model-theoretic sort of truth (one that says you can have a model where PA is consistent, and a model where it’s inconsistent, and they’re both okay). Being a realist about that sort of truth is relatively unobjectionable, but it probably doesn’t do anything fancy like supporting moral realism.
One can construct a Turing Machine which iterates over all possible PA proofs, halting if it ever finds an inconsistency. Given this, if you’re going to hold that there’s no objective fact of the matter about whether PA is consistent, you’ll also have to hold there’s no objective fact of the matter about whether this Turing Machine halts.
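In outline, such a machine just dovetails through every candidate proof-encoding and runs a proof checker on each, halting if a checked proof ends in a contradiction. A minimal Python sketch of that shape (the PA proof checker itself is left as a stub here, since spelling out the axiom schemas and inference rules would take far more space):

```python
from itertools import count, product

# Hypothetical symbol alphabet for encoding candidate proofs; a real encoding
# of PA derivations would fix a precise syntax for formulas, axiom schemas,
# and inference rules.
SYMBOLS = "0S+*=()~&>x, "

def candidate_proofs():
    """Yield every finite string over SYMBOLS, shortest first, so every
    possible proof-encoding is eventually reached."""
    for length in count(1):
        for chars in product(SYMBOLS, repeat=length):
            yield "".join(chars)

def proves_contradiction(candidate: str) -> bool:
    """Stub for a PA proof checker: should return True iff `candidate` encodes
    a correct PA derivation of a contradiction such as 0=1. Proof *checking*
    (unlike proof search) is a decidable, purely mechanical test, so this is
    implementable in principle; it is stubbed out to keep the sketch short."""
    return False  # stub: with this placeholder the loop below never halts

def search_for_inconsistency():
    """Halts, returning the offending proof, iff some candidate passes the
    checker -- i.e. iff PA is inconsistent."""
    for proof in candidate_proofs():
        if proves_contradiction(proof):
            return proof
```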
Which proofs are possible depends on your model of PA! In non-standard models, you can have proofs coded for by non-standard numbers.
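To spell out the standard reasoning behind this claim (just Gödel’s second incompleteness theorem plus the completeness theorem, on the assumption that PA is in fact consistent):

\[
\mathrm{Con}(\mathrm{PA}) \;\Rightarrow\; \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}) \;\Rightarrow\; \mathrm{PA} + \lnot\mathrm{Con}(\mathrm{PA}) \text{ has a model } \mathcal{M} \;\Rightarrow\; \mathcal{M} \models \exists p\,\mathrm{Prf}_{\mathrm{PA}}\bigl(p, \ulcorner 0=1 \urcorner\bigr).
\]

Any witness for that existential in $\mathcal{M}$ must be a non-standard element, since (again assuming consistency) no standard number codes a PA-proof of $0=1$.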
More LW posts: https://www.lesswrong.com/s/SqFbMbtxGybdS2gRs
When we build a Turing machine that iterates over “all possible” proofs, we have to make choices about physical implementation that are more specific than PA.
When a mathematical theory says that multiple things are consistent, and you try it in the real world and only one thing happens, you should infer that trying it in the real world more precisely corresponds to some other, more specific mathematical structure where only one thing is consistent, not that only one thing was real in the original mathematics, and trying in the real world uncovered which one it was.
So you don’t think there’s a Turing Machine which enumerates all and only valid PA proofs?
For what proof encoded only by a non-standard number would you endorse the claim “this proof doesn’t objectively lack proof-hood”?
I’m saying by asking about the behavior of such a machine implemented in the real world, you are being more specific than PA. For which you should think about the properties of physics and what kinds of mathematics they can implement, not whether proofs in PA have “objective proof-hood.”
Gives me a good idea for a sci-fi story, though:
Suppose rather than building a Turing machine ourselves to check proofs, we explore our infinite universe and find such a Turing machine that appears to have already been running for infinitely many timesteps. We can tell for sure that it’s checking proofs of PA, but to our shock, it’s actually somewhere in the middle of checking proofs coded by some nonstandard sequence of numbers. We decide to build a space station to keep watch on it, to see if it halts.
So is it impossible for me to abstractly describe a Turing Machine, and then wonder whether it would halt, with that necessarily having a fact of the matter, all without resorting to physical instantiations?
The idea I’m trying to express is that “a proof using PA axioms is a valid proof using PA axioms if and only if it is enumerated by the following TM: [standard PA TM enumerator description]”.
My question is what’s an example of a PA proof you think is arguably valid but wouldn’t be enumerated?
A purely logical TM would be understood to enumerate different proofs depending on the model of the axioms you used to specify it. This is how there can be one model of PA where such a TM halts, and another model of PA where such a TM doesn’t halt. So your plan doesn’t do the work you seem to think it does.
Don’t think of this as “there is one actual thing, but it mysteriously has multiple behaviors.” Even though it’s really convenient to talk that way (I did it above just now), maybe try to think of it like this: when you pick some axioms, they don’t actually pick out a single thing (if they’re complicated enough); instead they’re like a name shared by multiple “different things” (models), which can behave differently.
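A small concrete illustration of axioms acting as a shared name rather than a unique pointer (using the group axioms, which admittedly were never meant to be categorical): $(\mathbb{Z},+)$ and the permutation group $S_3$ both satisfy them, yet they disagree about a further sentence,

\[
(\mathbb{Z},+) \models \forall x\,\forall y\;(x \cdot y = y \cdot x), \qquad S_3 \not\models \forall x\,\forall y\;(x \cdot y = y \cdot x).
\]

The same pattern, with far more subtlety, is what non-standard models give you for PA.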
Why? Can you endorse mathematical realism, but reject all forms of normative realism, including epistemic and moral realism?
Yes! The claim is that if you use intuitions to justify one but reject intuitions to justify the other, that will be inconsistent.
What’s the inconsistency? You could have an intuition that mathematical realism is true, and that moral realism isn’t.
Then you wouldn’t be rejecting intuitions to justify the other, as in omnizoid’s comment (you’d be using intuitions to reject the other). Also the prior comment uses the phrase “permitting moral realism”—I wouldn’t have taken this to imply REQUIRING moral realism, independent of intuitions.
If the claim is that it would be inconsistent to consider intuitions as a means of justification, but then reject them as a means of justification specifically with respect to moral realism, that would be inconsistent. But someone can endorse mathematical realism and not moral realism simply by finding the former intuitive and not finding the latter intuitive. They could still acknowledge that intuitions could serve as a justification for moral realism if they had the intuition, but just lack the intuition.
Second, note that omnizoid originally said:
And it’s very tricky to develop a mathematical realism which doesn’t use an epistemology also permitting moral realism.
I don’t see anything tricky about this. One can be a normative antirealist and reject both epistemic and moral realism, because both are forms of normative realism, but not reject mathematical realism, because it isn’t a form of normative realism. In other words, one can consistently reject all forms of normative realism but not reject all forms of descriptive realism without any inconsistency.
Agreed that a difference in intuitions provides a perfectly consistent way to deny one and not the other; I don’t think omnizoid would deny this.
On the second point—presumably there would need to be an account of why normativity should get different treatment epistemologically when compared with mathematics? Otherwise it would seem to be an unmotivated distinction to just hold “the epistemological standards for normativity are simply different from the mathematical standards, just because”. I don’t doubt you have an account of an important distinction, but I just think that account would be doing the work. The initial “tricky” claim would hold up to the extent that identifying a relevant distinction is or isn’t “tricky”.
Right, I think we’re on the same page. I would just add that I happen to not think there’s anything especially tricky about rejecting normative realism in particular. Though I suppose it would depend on what’s meant by “tricky.” There are construals on which I suppose I would think that. I’d be interested in omnizoid elaborating on that.
Thanks for the reply and kind words!
This was obviously not the extent of my argument for phenomenal conservatism.
What was wrong with the classification of anti-realists? If one is a realist they think that there are mind-independent moral facts. Thus, to deny this, one needs to think either moral claims aren’t truth-apt, they’re all false, or they depend on attitudes. I’ve read Eliezer’s stuff about morality, FWIW. If you want my ideological Turing test of at least one version of anti-realism, here it is: https://benthams.substack.com/p/sounding-like-an-anti-realist
Yes—though the predictions won’t settle it. Some things we’d predict of aliens are that they’d appreciate pleasure if they can experience it, that some of them would be utilitarians, and that there would be greater moral convergence over time. In particular, we’d expect a coherent formula to be able to characterize the moral views that are supported by the most reasons. I think if none of those things ended up being true, my credence in realism would decrease to around 60%.
Thus, to deny this, one needs to think either moral claims aren’t truth-apt, they’re all false, or they depend on attitudes.
No they don’t. The standard claim that all antirealist positions are either relativism, error theory, or noncognitivism is false: it requires antirealist positions to include a semantic claim about the meaning of moral claims.
But an antirealist can both deny that there are stance-independent moral facts, and deny the philosophical presuppositions implicit in the claim that there is some kind of correct analysis of moral claims, such that moral claims are either truth-apt, all false, or dependent on attitudes. Also, an antirealist can endorse indeterminacy about the meaning of moral claims, and maintain that they aren’t determinately truth-apt, false, or dependent on attitudes. For an example, see:
Gill, M. B. (2009). Indeterminacy and variability in meta-ethics. Philosophical Studies, 145(2), 215–234.
I agree with this—one can think some claims aren’t truth-apt, others false, others dependent on attitudes. The claim is that collectively these have to cover all moral claims.
I’m explicitly denying that that covers all the possibilities. You can also endorse incoherentism or indeterminacy.
Also, when you say that the claims aren’t truth-apt, are you supposing that the claims themselves have a meaning, or that the person who made the claim means to communicate something with a given moral utterance?
What was wrong with the classification of anti-realists?
Where would “Morality as fixed computation” fit in your typology? Or metaethical constructivism? Like, it’s fine to dunk on error theorists or relativists all you want, but it’s not really relevant to LW. Individual subjectivism is sort of closer, but I would have liked to see a typology that included things LWers might actually endorse.
As another example of something not fitting in your typology, consider the rules of baseball. We all agree baseball is socially constructed—it’s not trying to conform to some Platonic ideal, the rules could easily have been different, they arose through some social process, etc. And yet facts about baseball are also pretty solid—it’s not a matter of opinion whether it takes three strikes or four to get a batter out.
You might say that baseball is in fact culturally relativist. After all, society came up with the rules of baseball in the first place, and has agreed to change the rules of baseball before.
But suppose the Nazis had won the war, and in this alternate history they forced everyone to play baseball with a large ball filled with air, and there were no pitchers or batters; instead you gained points by getting the ball through a goal guarded by the opposing team, and you weren’t allowed to touch the ball with your arms. It should seem obvious that what is going on is not that the Nazis made it true that “in baseball you kick the ball with your feet.” All they did was outlaw baseball entirely, and force everyone to play soccer football. When the alternate-reality Nazis say “Baseball is played with eleven players on a side,” they’re simply not talking about baseball. So is baseball non-cognitivist, because the Nazis’ statements aren’t actually about the thing they syntactically seem to be about? But again, when you talk about baseball, you’re capable of making perfectly good true or false statements.
Probably some type of relativism.
Failing to address most of the issues. There’s nothing about whether everyone has the same computation, and there’s nothing about how to resolve conflicts if they don’t. There’s also nothing about obligation or punishment...
https://www.lesswrong.com/posts/FnJPa8E9ZG5xiLLp5/morality-as-fixed-computation