>They leave those questions “to the philosophers”

Those rascals. Never leave a question to philosophers unless you’re trying to drive up the next century’s employment statistics.
>But why would there exist something outside a brain that has the same form as an idea? And even if such facts existed, how would ideas in the mind correspond to them? What is the mechanism of correspondence?
The missing abstraction here is isomorphism. Isomorphisms describe things that can be true in multiple systems simultaneously. How would the behavior of a light switch correspond to the behavior of another light switch, and to the behavior of one’s mental model of a light switch? An isomorphism common to each of the systems.
>In consequentialism, knowledge is not viewed as a correspondence to reality, but as a means to effective action. Truth (the correctness or goodness of knowledge) is defined in terms of utility. If an idea is useful, then it is “true” for practical purposes.
This isn’t how it should be defined to get at the potential value of the pointed-to abstraction. Knowledge is based in correspondence to reality (isomorphisms between map and territory), valued for that correspondence. One is justified in believing things on the basis of utility, which doesn’t strictly depend on correctness. Truth goes on meaning the same thing it always did under consequentialism: accuracy to reality.
Consequentialism doesn’t decide what is considered to be true; it only indicates that reasoning is justified on the basis of the EV of the consequences of that reasoning, which usually takes the truth of the premises as a dependency, but not strictly so (one can be justified in believing something false to protect themselves). Consequentialism doesn’t necessarily try to identify what knowledge is. Inputs and outputs of Bayes’ theorem are knowledge insofar as they correspond to reality; a belief state need not be knowledge to be useful, and so consequentialism prescribes more general belief states than the set constituting knowledge.
>How could the brain know whether an idea would have beneficial effects?
How could evolution know an eye has beneficial effects? The details about how this is achieved are both practical and mindblowing, and ~~thus not in the domain of philosophy~~ are explained by other disciplines of inquiry, which all bottom out in the same abstractions.
One concludes that something has beneficial effects by running EV calculations using an approximation of Bayes, which rodents do. One need not reason about what system is being used and whether or not there were alternatives to get one’s foot in the door. Rodents can have knowledge without having to jump up a meta level to observe correctness. We can still state what perfect knowledge would entail, namely achieving a fully updated reflective equilibrium via Bayes operating on the full context, generating the full mapping between contexts and outcomes, which includes all mathematical statements.
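To make the loop concrete, here is a minimal sketch of what I mean by “an approximation of Bayes plus EV.” All numbers (the hypothesis, likelihoods, and payoffs) are invented for illustration, not anyone’s actual model:

```python
# Toy sketch: a Bayesian update followed by an expected-value decision.

def bayes_update(prior: float, p_obs_given_h: float, p_obs_given_not_h: float) -> float:
    """Posterior P(H | obs) via Bayes' theorem."""
    p_obs = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / p_obs

# Hypothesis H: "there is food behind the lever". Observation: a food smell.
posterior = bayes_update(prior=0.3, p_obs_given_h=0.8, p_obs_given_not_h=0.2)

# EV of each action under the updated belief; act on whichever is higher.
ev_press = posterior * 10 + (1 - posterior) * (-1)  # payoff if food, small cost if not
ev_ignore = 0.0
action = "press" if ev_press > ev_ignore else "ignore"
print(round(posterior, 3), round(ev_press, 2), action)  # 0.632 5.95 press
```

Nothing in that loop requires the system running it to represent the loop itself, which is the sense in which a rodent gets its foot in the door.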
One might ask where the chain of justification bottoms out. Justification bottoms out in proofs of particular abstractions being non-fungible in reasoning wrt preferences as desiderata about that reasoning. Justification is initially demanded by preferences in the form of desiderata which form a membership function for justified reasoning, with those (idealized) desiderata provably uniquely specifying particular abstractions such as Bayes and EV, as well as how to use them. Starting from some complex knowledge states, one can notice they did not have other justified options for how to reason.
>The relation between ideas and reality is representation, not correspondence.
Some representations are knowledge, but minds are general enough to entertain thoughts that don’t represent knowledge (such as endorsed reasoning that points to false statements). The representations which reflect correspondences to reality are those that constitute knowledge.
Humans use efficient representations of real objects, where one can conclude which representation they would use on the basis of EV in context, but this loses a lot in terms of clarity about what knowledge is if one isn’t considering how that representation is an approximation of ideal reasoning about real correspondences (isomorphisms between map and territory).
He was talking about academic philosophers.

Are you saying that the mechanism of correspondence is an “isomorphism”? Can you please describe what the isomorphism is?
>Knowledge is based in correspondence to reality
As the essay explains, knowledge doesn’t correspond to reality. Knowledge represents reality.
For example, it was considered “true” hundreds of years ago that the Sun revolved around the Earth. Everyone was as strongly convinced that that was a fact as we are now of the opposite belief. Would a Geocentrist be accurate if he confidently claimed that Geocentrism corresponds to reality? Of course not (from our perspective). Instead, what can be accurately said from both perspectives is that the belief that the Sun revolves around the Earth represents reality from the perspective of the people who believe it.
If a person believes in the barycentric coordinates theory, then they would judge the Geocentrism theory to be false. They are using a different model of reality to make that judgment. I believe in the barycentric coordinates theory, and that theory represents reality to me, from my perspective.
I could claim that the barycentric coordinates theory corresponds to reality, but what does that mean? In practice, people who say “X corresponds to reality” are essentially saying “I think X is true”. This is problematic because saying “X is true” is also defined as “X corresponds to reality”. The so-called “correspondence” is not well-defined. People can’t even agree on what the “correct correspondences” to reality are.
A proper theory of knowledge has to explain why a Geocentrist judges the barycentric coordinates theory to be false. The answer is that a Geocentrist and a proponent of barycentric coordinates have different models of reality. Each person believes that their model of reality is true knowledge. To each person, their knowledge represents reality, from their perspective. That’s possible because knowledge is subjective. For more information, see: What is Subjectivity?
The problem with the Correspondence Theory of Truth/Knowledge is that it does not use the subject | object dichotomy to describe what knowledge is. Contrary to popular belief, Truth is not objective.
>One is justified in believing things on the basis of utility
That is circular. Utility is a value judgment. Value judgments depend on truth judgments. Consequentialism doesn’t explain what the basis for truth judgments is, without using circular reasoning.
>Truth goes on meaning the same thing it always did under consequentialism: accuracy to reality.
How do you judge “accuracy to reality”? People can disagree on what they think is accurate. You need a model of reality in order to judge “accuracy to reality”. And Representationalism is the best theory for describing how mental models of reality work. As I said before, the subject | object dichotomy is necessary for describing what knowledge is and how it works.
>Consequentialism doesn’t necessarily try to identify what knowledge is.
Then Consequentialism is not a theory of knowledge.
>a belief state need not be knowledge to be useful
Beliefs are knowledge.
>How could evolution know an eye has beneficial effects?
That’s a metaphorical question. Evolution is not a subject. Evolution doesn’t know anything. Evolution is a process.
>Some representations are knowledge
As the essay explained, knowledge is subjective.
>The representations which reflect correspondences to reality are those that constitute knowledge.
Your brain is using a model of reality to make a truth judgment and statement. My brain is using a different model of reality that judges your statement to be wrong. I believe that you’re implicitly using the representationalism theory of knowledge to make this statement.
>ideal reasoning
In order to define “ideal reasoning”, you need to define what’s “ideal”. What a person considers to be “ideal” is a value judgment. Value judgments are based on value knowledge. Value knowledge is a type of knowledge. Knowledge is subjective. Thus, ideal reasoning is subjective. It’s not possible to give an objective definition of “ideal reasoning”. And since you haven’t specified how you’re defining it, it’s not clear what you’re talking about.
If you believe that knowledge can be based on value judgments, then such knowledge would be subjective since value judgments are subjective.
This was a joke referencing how academic philosophers are rarely motivated to pick satisfying answers in a timely manner.
>Are you saying that the mechanism of correspondence is an “isomorphism”? Can you please describe what the isomorphism is?
An isomorphism between two systems indicates those two systems implement a common mathematical structure—a light switch and one’s mental model of the light switch are both constrained by having implemented this mathematical structure such that their central behavior ends up the same. Even if your mental model of a light switch has two separate on and off buttons, and the real light switch you’re comparing against has a single connected toggle button, they’re implementing the same underlying mathematical structure and will behave the same. This allows us to talk about large particle configurations as though they were simple, because correct conclusions about how the system behaves can follow from only using the simplification.
One could communicate to another reasoner what results to expect from a physical system by telling them only which isomorphisms they ought to implement in their mental representation of the system.
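Here is a minimal sketch of that commuting-structure idea with the light switch. Both implementations and the mapping between them are toy stand-ins; the point is only that the structure-preserving map makes “same behavior” checkable:

```python
# Two different implementations of "a light switch": a single toggle, and a
# mental model with separate on/off buttons tracked by which was pressed last.
# They share one mathematical structure: a two-state system with a toggle.

def toggle_switch(state: str) -> str:   # physical switch: "up" / "down"
    return "down" if state == "up" else "up"

def toggle_model(state: str) -> str:    # mental model: "on" / "off" pressed last
    return "off" if state == "on" else "on"

def iso(switch_state: str) -> str:
    """Structure-preserving map from switch states to model states."""
    return "on" if switch_state == "up" else "off"

# The isomorphism condition: mapping then toggling equals toggling then mapping.
for s in ("up", "down"):
    assert iso(toggle_switch(s)) == toggle_model(iso(s))
```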
>Knowledge represents reality.
Yes, but more specifically knowledge is a representation with significant information value due to correspondence with reality, a subset of possible representations. Being a representation at all shouldn’t be sufficient to call something knowledge, if the word knowledge is to mean something different than what was already accounted for by having the words belief and representation.
>>One is justified in believing things on the basis of utility

>That is circular. Utility is a value judgment. Value judgments depend on truth judgments. Consequentialism doesn’t explain what the basis for truth judgments is, without using circular reasoning.
As I said, consequentialism does not touch the assessment of what is true, it is only about the value judgment placed on beliefs. One can just snip out the part where there was a circular argument that consequentialism was seemingly responsible for invoking, and say consequentialism is just about justification (distinct from truth).
>How do you judge “accuracy to reality”?
Against what a reasoner with all the context would see as reality; one can make that judgment imperfectly with less context, with the imperfection measured as distance from reality.
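One toy way to cash out that distance (the distributions below are made up, and KL divergence is just one candidate metric among several):

```python
import math

def kl_divergence(p: list[float], q: list[float]) -> float:
    """D_KL(P || Q): how far belief Q sits from reference belief P, in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

full_context_belief = [0.9, 0.1]     # what a reasoner with all the context assigns
limited_context_belief = [0.6, 0.4]  # what one assigns with less context

print(kl_divergence(full_context_belief, limited_context_belief))  # ~0.23 nats
```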
>People can disagree on what they think is accurate.
This doesn’t call into question the ability to make such judgments from the perspective of a reasoner with all the context.
>And Representationalism is the best theory for describing how mental models of reality work.
I’m still a bit foggy on how to distinguish it as a theory. If there is some other content to it besides rejecting direct realism, I could see how this might still be true, but I’m unaware of that content if it exists. I started out rejecting direct realism (thus adopting what is purported to be representationalism), and don’t see how it could go any further in usefully constraining my beliefs.
>As I said before, the subject | object dichotomy is necessary for describing what knowledge is and how it works.
I’m not aware of how this is the case; I’d guess this probably follows from using a different definition of knowledge.
>Then Consequentialism is not a theory of knowledge.
Correct, unless you’re using knowledge to mean justified true belief, which some people do. I think all the abstractions attain maximal usefulness when knowledge just means true belief, such that justification is in the domain of consequentialism, a theory of the value of beliefs which sometimes handles contexts involving knowledge.
>Beliefs are knowledge.
Potential abstraction value is lost by assigning ‘knowledge’ to all beliefs. Either only some beliefs are knowledge, or you’re looking for a theory of beliefs, which is better explained by examining the mechanics of Bayes and other abstractions.
>>How could evolution know an eye has beneficial effects?

>That’s a metaphorical question. Evolution is not a subject. Evolution doesn’t know anything. Evolution is a process.
The answer was that a mind does not need to know why something works in order to implement something that works, and can end up with an implementation providing knowledge. It doesn’t depend on a meta-level assessment that the possessed knowledge is actually knowledge.
>As the essay explained, knowledge is subjective.
One can have subjective representations of objective reality, but the knowledge is only knowledge insofar as it is about objective reality, which includes information related to the subject. I see no reason to let the abstraction point to anything subjective except insofar as the subjective is also objective.
>Your brain is using a model of reality to make a truth judgment and statement. My brain is using a different model of reality that judges your statement to be wrong. I believe that you’re implicitly using the representationalism theory of knowledge to make this statement.
This wasn’t a truth claim, just me pointing at how I’d use ‘knowledge’ to mean the most useful thing it can mean. I’m defining knowledge to be only those beliefs which correspond to reality, such that another reasoner with all the context could determine that they actually had knowledge rather than false beliefs not constituting knowledge. We already have the word belief to mean the more general thing. Is knowledge distinct from beliefs in your ontology? What are the constraints that select down from beliefs to knowledge, if some beliefs are not knowledge?
>In order to define “ideal reasoning”, you need to define what’s “ideal”. What a person considers to be “ideal” is a value judgment. Value judgments are based on value knowledge. Value knowledge is a type of knowledge. Knowledge is subjective. Thus, ideal reasoning is subjective. It’s not possible to give an objective definition of “ideal reasoning”. And since you haven’t specified how you’re defining it, it’s not clear what you’re talking about.
Ideal is exclusively with respect to preferences, yep. Knowledge is not subjective—there is a correct answer, and fully incorrect answers are not knowledge. Ideal reasoning, once you have pinned down which ideal reasoning you mean constrained by preference, is objective. There is a correct answer about how one would want to reason given their preferences; any subjectivity is an illusion.
I’m using ideal reasoning to mean that which is available to a reasoner (A), and is the reasoning that would be pointed to by a reasoner (B), if B possessed the whole mapping from contexts to outcomes with respect to A. B knows how to interpret the preferences of A as ways to distinguish outcome quality on the basis of any reasoning they’re doing, such that this is a sufficient constraint for B to select the ideal way for A to reason. It would then objectively be the ideal reasoning for A to adopt, but A doesn’t have to know anything about that.
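As a toy sketch of that selection procedure (the reasoning options, the outcome mapping, and the scores are all invented stand-ins for B’s full context-to-outcome mapping and A’s preferences):

```python
from typing import Callable

def select_ideal_reasoning(
    options: list[str],
    outcome_of: Callable[[str], str],          # B's full mapping: reasoning -> outcome
    preference_score: Callable[[str], float],  # A's preferences over outcomes
) -> str:
    """B picks the reasoning whose outcome A's preferences rank highest."""
    return max(options, key=lambda r: preference_score(outcome_of(r)))

# A need not know any of this is happening for the selection to be well-defined.
outcomes = {"wishful": "bad", "heuristic": "ok", "bayes_ev": "good"}
scores = {"bad": 0.0, "ok": 0.5, "good": 1.0}
print(select_ideal_reasoning(list(outcomes), lambda r: outcomes[r], lambda o: scores[o]))
# -> "bayes_ev"
```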
>If you believe that knowledge can be based on value judgments, then such knowledge would be subjective since value judgments are subjective.
Value judgments are a part of objective reality, and knowledge can be about that objective reality. Whether or not something is knowledge only conditions on truth value about the subjective content, not anything actually subjective that could entail false beliefs. Once you get to condition on preferences as objective qualities of the environment, one can have knowledge ‘based on’ value judgments that are still only about objective reality.
I’m not going to respond to most of what you wrote here because I think this will be an unproductive discussion.
What I will say is that I think it would help to reevaluate how you’re defining all the terms that you’re using. Many of your disagreements with the OP essay are semantic in nature. I believe that you will arrive at a richer and more nuanced understanding of epistemology if you learn the definitions used in the OP essay and the author’s blog and use those terms to understand epistemology instead. Many of the things that you wrote in your comment seem confused.
As for how you’re using subjective and objective, I recognize that there are various dictionary definitions for those two terms, but I believe that the most coherent ones that are the most useful for explaining epistemology are the ones that specifically relate to the subject | object dichotomy. You’re disagreeing with the statement “knowledge is subjective” because you’re not defining “subjective” according to the subject | object dichotomy.
I’ve also written a webpage that might help clarify some of these concepts. You mentioned JTB in your response, and I’ve written a section explaining why JTB is not an adequate way to define knowledge at all.

>reevaluate how you’re defining all the terms that you’re using
Always a good idea. As for why I’m pointing to EV: epistemic justification and expected value both entail scoring rules for ways to adopt beliefs. Combining both into the same model makes it easier to discuss epistemic justification in situations with reasoners with arbitrary utility functions and states of awareness.
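A minimal sketch of what I mean by both entailing scoring rules. The log score is the standard proper scoring rule; the stakes weighting is a toy illustration of folding utility into the same machinery, not a worked-out decision theory:

```python
import math

def log_score(prob_assigned_to_truth: float) -> float:
    """Proper scoring rule: log of the probability placed on what actually happened."""
    return math.log(prob_assigned_to_truth)

def stakes_weighted_score(prob_assigned_to_truth: float, stakes: float) -> float:
    """Toy EV-flavored variant: the same epistemic score, scaled by what's at stake."""
    return stakes * log_score(prob_assigned_to_truth)

print(log_score(0.9))                    # ~-0.105: small penalty for 90% on the truth
print(stakes_weighted_score(0.9, 100.0)) # same belief, much larger penalty at high stakes
```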
Knowledge as mutual information between two models induced by some unspecified causal pathway allows me to talk about knowledge in situations where beliefs could follow from arbitrary causal pathways. I would exclude from my definition of knowledge false beliefs instilled by an agent which still produce the correct predictions, and I’d ensure my definition includes mutual information induced by a genuine divine revelation. (Which is to say, I reject epistemic justification as a dependency.)
Removing the criterion of being a belief seems to redraw the boundary around a lot of simple systems, but I don’t necessarily see a problem with that. ‘True’ follows from mutual information.
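A toy computation of that mutual information, with a made-up joint distribution over territory states and map states:

```python
import math

# P(territory, map) over toy states; marginals and I(T; M) computed from it.
joint = {
    ("rain", "predicts_rain"): 0.4,
    ("rain", "predicts_sun"): 0.1,
    ("sun", "predicts_rain"): 0.1,
    ("sun", "predicts_sun"): 0.4,
}

p_t: dict[str, float] = {}  # marginal over territory states
p_m: dict[str, float] = {}  # marginal over map states
for (t, m), p in joint.items():
    p_t[t] = p_t.get(t, 0.0) + p
    p_m[m] = p_m.get(m, 0.0) + p

mutual_info = sum(p * math.log(p / (p_t[t] * p_m[m])) for (t, m), p in joint.items())
print(mutual_info)  # ~0.19 nats: how much the map carries about the territory
```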
>you’re not defining “subjective” according to the subject | object dichotomy
Seems so. I’m happy to instead avoid making claims about knowledge related to the subject-object dichotomy, as none of the reasoning I’d endorse here conditions on consciousness.