First, I think the “sufficiently-reflective” part dramatically weakens the general claim that “is requires ought”; reflectivity is a very strong requirement which even humans often don’t satisfy (e.g., how often do most humans reflect on their beliefs?).
Second, while I basically agree with the Fristonian set-point argument, I think there are a lot of unjustified conclusions trying to sneak in by calling that an “ought”. For instance, if we rewrite:
Indeed, it is hard for claims such as “Fermat’s last theorem is true” to even be meaningful without oughts.
as
Indeed, it is hard for claims such as “Fermat’s last theorem is true” to even be meaningful without Fristonian set-points.
… then that sounds like a very interesting and quite possibly true claim, but I don’t think the post comes anywhere near justifying such a claim. I could imagine a theorem proving such a claim, and that would be a really cool result.
First, I think the “sufficiently-reflective” part dramatically weakens the general claim
Incoherent agents can have all manner of beliefs such as “1+1=3” and “fish are necessarily green” and “eels are not eels”. It’s hard to make any kind of general claim about them.
The reflectivity constraint is essentially “for each ‘is’ claim you believe, you must believe that the claim was produced by something that systematically produces true claims”, i.e. you must have some justification for its truth according to some internal representation.
… then that sounds like a very interesting and quite possibly true claim, but I don’t think the post comes anywhere near justifying such a claim. I could imagine a theorem proving such a claim, and that would be a really cool result.
Interpreting mathematical notation requires set-points. There’s a correct interpretation of +, and if you don’t adhere to it, you’ll interpret the text of the theorem wrong.
In interpreting the notation into a mental representation of the theorem, you need set-points like “represent the theorem as a grammatical structure following these rules” and “interpret for-all claims as applying to each individual”.
Even after you’ve already interpreted the theorem, keeping the denotation around in your mind requires a set-point of “preserve memories”, and set-points for faithfully accessing past memories.
Incoherent agents can have all manner of beliefs such as “1+1=3” and “fish are necessarily green” and “eels are not eels”.
I am not talking about incoherent agents; I am talking about agents which are coherent but not reflective. To the extent that we expect coherence to be instrumentally useful and reflection to be difficult, that’s exactly the sort of agent we should expect evolution to produce most often.
Most humans seem to have mostly-accurate beliefs, without thinking at all about whether those beliefs were systematically produced by something which produces accurate beliefs.
In interpreting the notation into a mental representation of the theorem, you need set-points like “represent the theorem as a grammatical structure following these rules” and “interpret for-all claims as applying to each individual”.
It’s not at all obvious that representations and interpretations need to be implemented as set-points, or are equivalent to set-points, or anything like that. That’s the claim which would be interesting to prove.
But believing one’s own beliefs to come from a source that systematically produces correct beliefs is a coherence condition. If you believe your beliefs come from source X that does not systematically produce correct beliefs, then your beliefs don’t cohere.
This can be seen in terms of Bayesianism. Let R[X] stand for “My system reports X is true”. There is no joint distribution P over X and R[X] such that P(X | R[X]) = 1, P(X) = 0.5, P(R[X] | X) = 1, and P(R[X] | not X) = 1.
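Spelling out the inconsistency: the last two conditions give P(R[X]) = P(R[X] | X) P(X) + P(R[X] | not X) P(not X) = 1, so conditioning on the report is uninformative and P(X | R[X]) = P(X) = 0.5, contradicting P(X | R[X]) = 1. A source that reports X regardless of whether X holds cannot coherently be treated as settling X.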
That’s the claim which would be interesting to prove.
Here’s my attempt at a proof:
Let A stand for some reflective reasonable agent.
Axiom 1: A believes X, and A believes that A believes X.
Axiom 2: A believes that if A believes X, then there exists some epistemic system Y such that: Y contains A as an essential component, Y causes A to believe X, and Y functions well. [argument: A has internal justifications for beliefs being systematically correct. A is essential to the system because A’s beliefs are a result of the system; if not for A’s work, such beliefs would not be systematically correct]
Axiom 3: A believes that, for all epistemic systems Y that contain A as an essential component and function well, A functions well as part of Y. [argument: A is essential to Y’s functioning]
Axiom 4: For all epistemic systems Y, if A believes that Y is an epistemic system that contains A as an essential component, and also that A functions well as part of Y, then A believes that A is trying to function well as part of Y. [argument: good functioning doesn’t happen accidentally; it’s a narrow target to hit. Anyway, accidental functioning wouldn’t justify the belief; the argument has to be that the belief is systematically, not accidentally, correct.]
Axiom 5: A believes that, for all epistemic systems Y, if A is trying to function well as part of Y, then A has a set-point of functioning well as part of Y. [argument: set-point is the same as trying]
Axiom 6: For all epistemic systems Y, if A believes A has a set-point of functioning well as part of Y, then A has a set-point of functioning well as part of Y. [argument: otherwise A is incoherent; it believes itself to have a set-point it doesn’t have]
Theorem 1: A believes that there exists some epistemic system Y such that: Y contains A as an essential component, Y causes A to believe X, and Y functions well. (Follows from Axiom 1, Axiom 2)
Theorem 2: A believes that A functions well as part of Y. (Follows from Axiom 3, Theorem 1)
Theorem 3: A believes that A is trying to function well as part of Y. (Follows from Axiom 4, Theorem 2)
Theorem 4: A believes A has a set-point of functioning well as part of Y. (Follows from Axiom 5, Theorem 3)
Theorem 5: A has a set-point of functioning well as part of Y. (Follows from Axiom 6, Theorem 4)
Theorem 6: A has some set-point. (Follows from Theorem 5)
(Note: consider X = “Fermat’s last theorem universally quantifies over all triples of natural numbers”; “Fermat’s last theorem” is not meaningful to A if A lacks knowledge of X.)
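Since the argument from Axiom 1 to Theorem 6 is just repeated modus ponens, its shape can be checked mechanically. Here is a minimal Lean 4 skeleton (not part of the original argument): each English statement becomes an opaque proposition and each axiom is abstracted to a single implication, so the nested “believes” operators and the quantification over epistemic systems Y are deliberately collapsed. It verifies only that the theorems follow from the axioms as stated, not that the axioms are true.

```lean
-- Propositional skeleton of the proof: each statement is an opaque Prop and
-- each axiom is abstracted to one implication, so this checks only that
-- Theorems 1-6 follow from Axioms 1-6 by repeated modus ponens.
theorem a_has_some_set_point
    (P1 P2 P3 P4 P5 P6 : Prop)
    (ax1   : P1)        -- Axiom 1: A believes X, and believes that it believes X
    (step2 : P1 → P2)   -- Axiom 2 yields Theorem 1: a well-functioning epistemic system Y (containing A) causes the belief
    (step3 : P2 → P3)   -- Axiom 3 yields Theorem 2: A believes it functions well as part of Y
    (step4 : P3 → P4)   -- Axiom 4 yields Theorem 3: A believes it is trying to function well as part of Y
    (step5 : P4 → P5)   -- Axiom 5 yields Theorem 4: A believes it has a set-point of functioning well as part of Y
    (step6 : P5 → P6)   -- Axiom 6 yields Theorem 5: A actually has that set-point
    : P6 :=             -- hence Theorem 6: A has some set-point
  step6 (step5 (step4 (step3 (step2 ax1))))
```

The substantive work would be formalizing the axioms themselves, with explicit belief operators and quantification over Y; the skeleton just confirms that, granting Axioms 1-6 as stated, the conclusion follows.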
But believing one’s own beliefs to come from a source that systematically produces correct beliefs is a coherence condition.
That is only a coherence condition if you also impose some kind of completeness or logical-omniscience condition requiring us to have beliefs about reflective statements at all. It’s entirely possible to have beliefs only over a limited class of statements: most animals don’t even have a concept of reflection, yet they have beliefs which match reality. One need not have any beliefs at all about the sources of one’s beliefs.
As for the proof, it seems like the interesting part would be providing deeper foundations for Axioms 4 and 5. Those are the parts which seem like they could fail.