In order to carry out Solomonoff induction, we presumably need mathematics. And it’s very tricky to develop a mathematical realism which doesn’t use an epistemology also permitting moral realism. (What counts as intuitionism is very fraught, but on some understandings, mathematical realism is most plausibly reliant on an “intuitionism”.) See Justin Clarke-Doane’s excellent book Morality and Mathematics for a discussion of this.
The typical mathematical realism I’ve encountered involves brazenly misunderstanding model theory. E.g. “Either PA is consistent or it’s not, but math can’t prove it because of Gödel’s theorem, so there are facts of the matter independent of proof, which must therefore be about real stuff.”
We can do math just fine with the much tamer model-theoretic sort of truth (one that says you can have a model of your metatheory where PA is consistent, and a model where it’s inconsistent, and they’re both okay). Being a realist about that sort of truth is relatively unobjectionable, but it probably doesn’t do anything fancy like supporting moral realism.
One can construct a Turing Machine which iterates over all possible PA proofs, halting if it ever finds an inconsistency. Given this, if you’re going to hold that there’s no objective fact of the matter about whether PA is consistent, you’ll also have to hold there’s no objective fact of the matter about whether this Turing Machine halts.
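A hedged miniature analogue of this machine, with a toy propositional system standing in for PA (the system, the function names, and the step budget are all my own illustration, not part of the claim above):

```python
def neg(s: str) -> str:
    """Syntactic negation: strip or prepend a leading '~'."""
    return s[1:] if s.startswith("~") else "~" + s

def find_inconsistency(axioms, max_steps=1000):
    """Search for a contradiction by forward-chaining modus ponens.

    Halts with a (sentence, negation) pair if a contradiction is
    derived; returns None once the deductive closure is reached, or
    when the step budget runs out (the machine described in the text
    has no budget and would simply run forever on a consistent theory).
    """
    derived = set(axioms)
    for _ in range(max_steps):
        new = set()
        for s in derived:
            if "->" in s:  # treat "A->B" where the antecedent A is atomic
                a, b = s.split("->", 1)
                if a in derived and b not in derived:
                    new.add(b)
        derived |= new
        for s in derived:
            if neg(s) in derived:
                return (s, neg(s))  # halt: inconsistency found
        if not new:
            return None  # closure reached with no contradiction
    return None
```

With the inconsistent toy axioms {p, p->q, q->~p} the search halts with the pair {p, ~p}; with the consistent {p, p->q} it reaches closure and reports nothing, which is the analogue of the PA machine never halting.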
Which proofs are possible depends on your model of PA! In non-standard models, you can have proofs coded for by non-standard numbers.
When we build a Turing machine that iterates over “all possible” proofs, we have to make choices about physical implementation that are more specific than PA.
When a mathematical theory says that multiple things are consistent, and you try it in the real world and only one thing happens, you should infer that the real-world trial more precisely corresponds to some other, more specific mathematical structure in which only one thing is consistent. You should not infer that only one thing was real in the original mathematics and that the real-world trial uncovered which one it was.
So you don’t think there’s a Turing Machine which enumerates all and only valid PA proofs?
For what proof encoded by only a non-standard number would you endorse the claim “this proof doesn’t objectively lack proof-hood”?
I’m saying that by asking about the behavior of such a machine implemented in the real world, you are being more specific than PA. To answer that question, you should think about the properties of physics and what kinds of mathematics they can implement, not about whether proofs in PA have “objective proof-hood.”
Gives me a good idea for a sci-fi story, though:
Suppose rather than building a Turing machine ourselves to check proofs, we explore our infinite universe and find such a Turing machine that appears to have been running for infinite timesteps already. We can tell for sure that it’s checking proofs of PA, but to our shock, it’s actually somewhere in the middle of checking proofs coded by some nonstandard sequence of numbers. We decide to build a space station to keep watch on it, to see if it halts.
So is it impossible for me to abstractly describe a Turing Machine, and then wonder whether it would halt, with that necessarily having a fact of the matter, all without resorting to physical instantiations?
The idea I’m trying to express is that “a proof using PA axioms is a valid proof using PA axioms if and only if it is enumerated by the following TM: [standard PA TM enumerator description]”.
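The shape of that enumerator can be sketched as follows. A real PA proof checker is far too long for a comment, so a trivial syntactic check (balanced parentheses) stands in for it; the helper names are my own:

```python
from itertools import count, islice, product

def all_strings(alphabet):
    """Yield every finite string over `alphabet`, shortest first."""
    for n in count(0):
        for letters in product(alphabet, repeat=n):
            yield "".join(letters)

def enumerate_proofs(is_valid_proof, alphabet):
    """Yield exactly the strings the checker accepts, in shortlex order."""
    return (s for s in all_strings(alphabet) if is_valid_proof(s))

def balanced(s):
    """Stand-in for the PA proof checker: balanced parentheses."""
    depth = 0
    for c in s:
        depth += 1 if c == "(" else -1
        if depth < 0:
            return False
    return depth == 0
```

For instance, `list(islice(enumerate_proofs(balanced, "()"), 4))` gives `["", "()", "(())", "()()"]`. The claim under discussion is that swapping `balanced` for a genuine PA proof checker yields the standard enumerator.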
My question is what’s an example of a PA proof you think is arguably valid but wouldn’t be enumerated?
A purely logical TM would be understood to enumerate different proofs depending on the model of the axioms you used to specify it. This is how there can be one model of PA where such a TM halts, and another model of PA where such a TM doesn’t halt. So your plan doesn’t do the work you seem to think it does.
Don’t think of this as “there is one actual thing, but it mysteriously has multiple behaviors.” Even though it’s really convenient to talk that way (I did it above just now), try instead to think of it like this: when you pick some axioms, they don’t actually pick out a single thing (if they’re complicated enough). Instead, they’re like a name shared by multiple “different things” (models), which can behave differently.
Why? Can you endorse mathematical realism, but reject all forms of normative realism, including epistemic and moral realism?
Yes! The claim is that if you use intuitions to justify one but reject intuitions to justify the other, that will be inconsistent.
What’s the inconsistency? You could have an intuition that mathematical realism is true, and that moral realism isn’t.
Then you wouldn’t be rejecting intuitions to justify the other, as in omnizoid’s comment (you’d be using intuitions to reject the other). Also the prior comment uses the phrase “permitting moral realism”—I wouldn’t have taken this to imply REQUIRING moral realism, independent of intuitions.
If the claim is that it would be inconsistent to consider intuitions as a means of justification, but then reject them as a means of justification specifically with respect to moral realism, that would be inconsistent. But someone can endorse mathematical realism and not moral realism simply by finding the former intuitive and not finding the latter intuitive. They could still acknowledge that intuitions could serve as a justification for moral realism if they had the intuition, but just lack the intuition.
Second, note that omnizoid originally said:
And it’s very tricky to develop a mathematical realism which doesn’t use an epistemology also permitting moral realism.
I don’t see anything tricky about this. One can be a normative antirealist and reject both epistemic and moral realism, because both are forms of normative realism, but not reject mathematical realism, because it isn’t a form of normative realism. In other words, one can consistently reject all forms of normative realism without rejecting all forms of descriptive realism.
Agreed that a difference in intuitions provides a perfectly consistent way to deny one and not the other; I don’t think omnizoid would deny this.
On the second point—presumably there would need to be an account of why normativity should get different treatment epistemologically when compared with mathematics? Otherwise it would seem to be an unmotivated distinction to just hold “the epistemological standards for normativity are simply different from the mathematical standards, just because”. I don’t doubt you have an account of an important distinction, but I just think that account would be doing the work. The initial “tricky” claim would hold up to the extent that identifying a relevant distinction is or isn’t “tricky”.
Right, I think we’re on the same page. I would just add that I happen to not think there’s anything especially tricky about rejecting normative realism in particular. Though I suppose it would depend on what’s meant by “tricky.” There are construals on which I suppose I would think that. I’d be interested in omnizoid elaborating on that.