Their brains seem to produce useful maps of the world without ever worrying about what they “ought” to do.
How do you know they don’t have beliefs about what they ought to do (in the sense of following norms, principles, etc.)? Of course their ‘ought’s won’t be the same as humans’, but neither are their ‘is’es.
(Anyway, they probably aren’t reflective philosophical agents, so the arguments given probably don’t apply to them, although they do apply to philosophical humans reasoning about the knowledge of cats)
Again, we could say that they’re implicitly assuming their eyes are there for presenting accurate information, but that interpretation doesn’t seem to pay any rent, and could just as easily apply to a rock.
We can apply mentalistic interpretations to cats or not. According to the best mentalistic interpretation I know of, they would not act on the basis of their vision (e.g. in navigating around obstacles) if they didn’t believe their vision to be providing them with information about the world. If we don’t apply a mentalistic interpretation, there is nothing to say about their ‘is’es or ‘ought’s, or indeed their world-models.
Applying mentalistic interpretations to rocks is not illuminating.
Again, this sounds like a very contrived “ought” interpretation—so contrived that it could just as easily apply to a rock.
Yes, if I’m treating the rock as a tool; that’s the point.
Couldn’t we just completely ignore the entire subject of this post and generally expect to see the same things in the world?
“We should only discuss those things that constrain expectations” is an ought claim.
Anyway, “you can’t justifiably believe you have checked a math proof without following oughts” constrains expectations.
Ok, I think I’m starting to see the point of confusion here. You’re treating a “mentalistic interpretation” as a package deal which includes both is’s and ought’s. But it’s completely possible for a map to correspond to a territory separate from any objectives, goals or oughts. It’s even possible for a system to reliably produce a map which matches a territory without any oughts—see e.g. embedded naive Bayes for a very rough example.
It’s even possible for a system to reliably produce a map which matches a territory without any oughts—see e.g. embedded naive Bayes for a very rough example.
See what I wrote about PA theorem provers, it’s the same idea.
I don’t think that’s the same idea. Assigning “beliefs” to PA requires assigning an interpretation to them; the embedded naive Bayes post argues that certain systems cannot be assigned certain interpretations.
That’s another way of saying that some claims of “X implements Y” are definitely false, no?
“This computer implements PA” is false if it outputs something that is not a theorem of PA, e.g. because of a hardware or software bug.
No, it’s saying that there is no possible interpretation of the system’s behavior in which it behaves like PA—not just that a particular interpretation fails to match.
Doesn’t a correct PA theorem prover behave like a bounded approximation of PA?
I’m not saying that there don’t exist things which behave like PA.
I’m saying that there exist things which cannot be interpreted as behaving like PA, under any interpretation (where “interpretation” = homomorphism). On the other hand, there are also things which do behave like PA. So, there is a rigorous sense in which some systems do embed PA, and others do not.
The same concept yields a general notion of “is”, entirely independent of any notion of “ought”: we have some system which takes in a “territory”, and produces a (supposed) “map” of the territory. For some such systems, there is not any interpretation whatsoever under which the “map” produced will actually match the territory. For other systems, there is an interpretation under which the map matches the territory. So, there is a rigorous sense in which some systems produce accurate maps of territory, and others do not, entirely independent of any “ought” claims.
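As a toy sketch of what I mean by “interpretation = homomorphism” (an illustrative example of mine, not the construction from the embedded naive Bayes post): say a candidate system implements a target system iff some map h from its states to the target’s states commutes with the two step functions, h(step_A(s)) = step_B(h(s)). Under that definition, a mod-4 counter does embed a parity tracker, while a “rock” whose state never changes does not, under any map whatsoever:

```python
# Toy check of "A implements B": does any map h from A's states to B's states
# satisfy h(step_a(s)) == step_b(h(s)) for every state s?  (Brute force over
# all candidate maps; only meant to illustrate the definition.)
from itertools import product

def find_interpretation(states_a, step_a, states_b, step_b):
    """Return a commuting map h (as a dict) if one exists, else None."""
    for image in product(states_b, repeat=len(states_a)):
        h = dict(zip(states_a, image))
        if all(h[step_a(s)] == step_b(h[s]) for s in states_a):
            return h
    return None

# Target system B: a parity tracker whose state toggles each step.
states_b = [0, 1]
def step_b(x): return 1 - x

# Candidate A1: a mod-4 counter.  The parity map s -> s % 2 interprets it as B.
states_a1 = [0, 1, 2, 3]
def step_a1(s): return (s + 1) % 4

# Candidate A2: a "rock" whose state never changes.  No interpretation works,
# since a commuting h would need h(rock) to be a fixed point of the toggle.
states_a2 = ["rock"]
def step_a2(s): return s

print(find_interpretation(states_a1, step_a1, states_b, step_b))  # {0: 0, 1: 1, 2: 0, 3: 1}
print(find_interpretation(states_a2, step_a2, states_b, step_b))  # None
```

Whether such an h exists is a property of the two systems themselves; no “ought” appears anywhere in the definition.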
I agree that once you have a fixed abstract algorithm A and abstract algorithm B, it may or may not be the case that there exists a homomorphism from A to B justifying the claim that A implements B. Sorry for misunderstanding.
But the main point in my PA comment still stands: to have justified belief that some theorem prover implements PA, a philosophical mathematician must follow oughts.
(When you’re talking about naive Bayes or a theorem prover as if it has “a map” you’re applying a teleological interpretation (that that object is supposed to correspond with some territory / be coherent / etc), which is not simply a function of the algorithm itself)
To summarize my argument:
Sufficiently-reflective reasonable agents that make internally-justified “is” claims also accept at least some Fristonian set-points (what Friston calls “predictions”), such as “my beliefs must be logically coherent”. (I don’t accept the whole of Friston’s theory; I’m trying to gesture at the idea of “acting in order to control some value into satisfying some property”)
If a reasonable agent has a Fristonian set point for some X the agent has control over, then that agent believes “X ought to happen”.
I don’t know if you disagree with either of these points.
First, I think the “sufficiently-reflective” part dramatically weakens the general claim that “is requires ought”; reflectivity is a very strong requirement which even humans often don’t satisfy (i.e. how often do most humans reflect on their beliefs?)
Second, while I basically agree with the Fristonian set-point argument, I think there are a lot of unjustified conclusions trying to sneak in by calling that an “ought”. For instance, if we rewrite:
Indeed, it is hard for claims such as “Fermat’s last theorem is true” to even be meaningful without oughts.
as
Indeed, it is hard for claims such as “Fermat’s last theorem is true” to even be meaningful without Fristonian set-points.
… then that sounds like a very interesting and quite possibly true claim, but I don’t think the post comes anywhere near justifying such a claim. I could imagine a theorem proving such a claim, and that would be a really cool result.
First, I think the “sufficiently-reflective” part dramatically weakens the general claim
Incoherent agents can have all manner of beliefs such as “1+1=3” and “fish are necessarily green” and “eels are not eels”. It’s hard to make any kind of general claim about them.
The reflectivity constraint is essentially “for each ‘is’ claim you believe, you must believe that the claim was produced by something that systematically produces true claims”, i.e. you must have some justification for its truth according to some internal representation.
… then that sounds like a very interesting and quite possibly true claim, but I don’t think the post comes anywhere near justifying such a claim. I could imagine a theorem proving such a claim, and that would be a really cool result.
Interpreting mathematical notation requires set-points. There’s a correct interpretation of +, and if you don’t adhere to it, you’ll interpret the text of the theorem wrong.
In interpreting the notation into a mental representation of the theorem, you need set points like “represent the theorem as a grammatical structure following these rules” and “interpret for-all claims as applying to each individual”.
Even after you’ve already interpreted the theorem, keeping the denotation around in your mind requires a set point of “preserve memories”, and set points for faithfully accessing past memories.
Incoherent agents can have all manner of beliefs such as “1+1=3” and “fish are necessarily green” and “eels are not eels”.
I am not talking about incoherent agents, I am talking about agents which are coherent but not reflective. To the extent that we expect coherence to be instrumentally useful and reflection to be difficult, that’s exactly the sort of agent we should expect evolution to produce most often.
Most humans seem to have mostly-accurate beliefs, without thinking at all about whether those beliefs were systematically produced by something which produces accurate beliefs.
In interpreting the notation into a mental representation of the theorem, you need set points like “represent the theorem as a grammatical structure following these rules” and “interpret for-all claims as applying to each individual”.
It’s not at all obvious that representations and interpretations need to be implemented as set-points, or are equivalent to set points, or anything like that. That’s the claim which would be interesting to prove.
But believing one’s own beliefs to come from a source that systematically produces correct beliefs is a coherence condition. If you believe your beliefs come from source X that does not systematically produce correct beliefs, then your beliefs don’t cohere.
This can be seen in terms of Bayesianism. Let R[X] stand for “My system reports X is true”. There is no joint distribution P over X and R[X] such that P(X | R[X]) = 1, P(X) = 0.5, P(R[X] | X) = 1, and P(R[X] | not X) = 1.
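Spelling out the incompatibility with Bayes’ rule on those numbers:

$$P(X \mid R[X]) = \frac{P(R[X] \mid X)\,P(X)}{P(R[X] \mid X)\,P(X) + P(R[X] \mid \neg X)\,P(\neg X)} = \frac{1 \cdot 0.5}{1 \cdot 0.5 + 1 \cdot 0.5} = 0.5 \neq 1.$$

So a report that would appear whether or not X is true cannot coherently be given full credence.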
That’s the claim which would be interesting to prove.
Here’s my attempt at a proof:
Let A stand for some reflective reasonable agent.
Axiom 1: A believes X, and A believes that A believes X.
Axiom 2: A believes that if A believes X, then there exists some epistemic system Y such that: Y contains A as an essential component, Y causes A to believe X, and Y functions well. [argument: A has internal justifications for beliefs being systematically correct. A is essential to the system because A’s beliefs are a result of the system; if not for A’s work, such beliefs would not be systematically correct]
Axiom 3: A believes that, for all epistemic systems Y that contain A as an essential component and function well, A functions well as part of Y. [argument: A is essential to Y’s functioning]
Axiom 4: For all epistemic systems Y, if A believes that Y is an epistemic system that contains A as an essential component, and also that A functions well as part of Y, then A believes that A is trying to function well as part of Y. [argument: good functioning doesn’t happen accidentally, it’s a narrow target to hit. Anyway, accidental functioning wouldn’t justify the belief; the argument has to be that the belief is systematically, not accidentally, correct.]
Axiom 5: A believes that, for all epistemic systems Y, if A is trying to function well as part of Y, then A has a set-point of functioning well as part of Y. [argument: set-point is the same as trying]
Axiom 6: For all epistemic systems Y, if A believes A has a set-point of functioning well as part of Y, then A has a set-point of functioning well as part of Y. [argument: otherwise A is incoherent; it believes itself to have a set-point it doesn’t have]
Theorem 1: A believes that there exists some epistemic system Y such that: Y contains A as an essential component, Y causes A to believe X, and Y functions well. (Follows from Axiom 1, Axiom 2)
Theorem 2: A believes that A functions well as part of Y. (Follows from Axiom 3, Theorem 1)
Theorem 3: A believes that A is trying to function well as part of Y. (Follows from Axiom 4, Theorem 2)
Theorem 4: A believes A has a set-point of functioning well as part of Y. (Follows from Axiom 5, Theorem 3)
Theorem 5: A has a set-point of functioning well as part of Y. (Follows from Axiom 6, Theorem 4)
Theorem 6: A has some set-point. (Follows from Theorem 5)
(Note, consider X = “Fermat’s last theorem universally quantifies over all triples of natural numbers”; “Fermat’s last theorem” is not meaningful to A if A lacks knowledge of X)
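For concreteness, here is a minimal machine-checkable sketch of the chain (not a full formalization: each English statement is collapsed to an opaque proposition, the existential over Y is left implicit, and belief-closure under modus ponens, which the steps above use implicitly, is folded into how the axioms are read):

```lean
-- A minimal sketch of the Theorem 1–6 chain. Each English statement is an
-- opaque proposition, and each axiom is read as a material implication
-- (folding in belief-closure under modus ponens where the prose uses it).
section
variable
  (believesX        : Prop) -- Axiom 1: A believes X, and believes that it does
  (believesSystemY  : Prop) -- "A believes some epistemic system Y containing A causes the belief and functions well"
  (believesFnWell   : Prop) -- "A believes A functions well as part of Y"
  (believesTrying   : Prop) -- "A believes A is trying to function well as part of Y"
  (believesSetPoint : Prop) -- "A believes A has a set-point of functioning well as part of Y"
  (hasSetPoint      : Prop) -- "A has a set-point of functioning well as part of Y"

-- Theorem 6: given Axioms 1–6 as hypotheses, A has some set-point.
theorem has_some_set_point
    (ax1 : believesX)
    (ax2 : believesX → believesSystemY)         -- Axiom 2
    (ax3 : believesSystemY → believesFnWell)    -- Axiom 3
    (ax4 : believesFnWell → believesTrying)     -- Axiom 4
    (ax5 : believesTrying → believesSetPoint)   -- Axiom 5
    (ax6 : believesSetPoint → hasSetPoint)      -- Axiom 6
    : hasSetPoint :=
  ax6 (ax5 (ax4 (ax3 (ax2 ax1))))
end
```

This only checks that the chaining is valid given the axioms; the substantive content is in the axioms themselves.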
But believing one’s own beliefs to come from a source that systematically produces correct beliefs is a coherence condition.
This is only if you have some kind of completeness or logical-omniscience condition, requiring us to have beliefs about reflective statements at all. It’s entirely possible to only have beliefs over a limited class of statements—most animals don’t even have a concept of reflection, yet they have beliefs which match reality. One need not have any beliefs at all about the sources of one’s beliefs.
As for the proof, it seems like the interesting part would be providing deeper foundations for axioms 4 and 5. Those are the parts which seem like they could fail.