By “process,” I don’t mean an internal process of thought involving inference from perceptions to beliefs about the world; I mean the actual perceptual and cognitive algorithm as a physical structure in the world. Because of the way the brain actually works in a deterministic universe, it ends up correlated with the external world. Perhaps this is unknowable to us “from the inside,” but the OP’s argument is not about external-world skepticism given direct access only to what we perceive, but rather that, given normal hypotheses about how the brain works, we should not trust the beliefs it generates. I am simply pointing out that this is false, because these normal hypotheses imply the kind of correlation that we want.
By “process,” I don’t mean an internal process of thought involving inference from perceptions to beliefs about the world; I mean the actual perceptual and cognitive algorithm as a physical structure in the world.
How do you know what that is? You don’t have the ability to stand outside the mind-world relationship and perceive it, any more than anything else. You have beliefs about the mind-world relationship, but they are all generated by inference in your mind. If there were some hard core of non-inferential knowledge about the ontological nature of reality, you might be able to leverage it to gain more knowledge, but there isn’t, because the same objections apply.
Because of the way the brain actually works in a deterministic universe,
We don’t know that the universe is deterministic. You are confusing assumptions with knowledge.
it ends up correlated with the external world.
The point is about correspondence. Neither correlations nor predictive accuracy amounts to correspondence to a definite ontology.
I am simply pointing out that this is false, because these normal hypotheses imply the kind of correlation that we want.
We don’t want correlation; we want correspondence. Correlation isn’t causation, and it isn’t correspondence.
Assuming the scientific model doesn’t help, because the scientific model says that the way perceptions relate to the world is indirect, going through many intermediate causal stages. Since multiple things could possibly give rise to the same perceptions, a unique cause (i.e., a definite ontology) can’t be inferred from perception alone.
How do you know what that is? You don’t have the ability to stand outside the mind-world relationship and perceive it, any more than anything else. You have beliefs about the mind-world relationship, but they are all generated by inference in your mind. If there were some hard core of non-inferential knowledge about the ontological nature of reality, you might be able to leverage it to gain more knowledge, but there isn’t, because the same objections apply.
I’m not making any claims about knowing what it is. The OP’s argument is that our normal deterministic model is self-refuting because it undermines our ability to have knowledge, so the truth of the model can be assumed for the sake of argument.
The point is about correspondence. Neither correlations nor predictive accuracy amounts to correspondence to a definite ontology.
Yes, a large range of worlds with different ontologies imply the same observations. The further question of assigning probabilities to those different worlds comes down to how to assign initial priors, which is a serious epistemological problem. However, this seems unrelated to the point made in the OP, which is that determinism is self-undermining.
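To make the point about priors concrete, here is a minimal sketch, with H_1, H_2, and E as purely illustrative placeholders rather than anything from the OP: suppose two ontologies H_1 and H_2 assign the same likelihood to every possible observation E. By Bayes’ theorem,

\[ \frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)} = \frac{P(H_1)}{P(H_2)}, \]

since the likelihood ratio is always 1. The posterior ratio therefore equals the prior ratio no matter how much evidence accumulates, so the choice between the two ontologies comes down entirely to the initial priors.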
More broadly, I am confused as to what claim you think that I am making which you disagree with.
For what it’s worth, I think there needs to be some clarification.
I didn’t say our model is deterministic, nor did I take a position on whether it should be. And my argument is not about whether the correct definition of knowledge should be “justified true belief”. And unless I have had the wrong impression, I don’t think Sean Carroll’s focus is on the definition of knowledge either. Instead, it’s about what should be considered “true”.
The usual idea that a theory is true if it faithfully describes an underlying objective physical reality (deterministic or not) is problematic. It suffers from the same pitfall as believing I am a Boltzmann brain. The problem stems from the dilemma that theories are produced and evaluated by worldly objects, while their truth ought to be judged from “a view from nowhere”, a fundamentally objective perspective.
If you start reasoning by recognizing that you are a particular agent, then you will not have this problem. I don’t deny that. In fact, I think that is the solution to many paradoxes. But the majority of people would start reasoning from the “view from nowhere” and regard that as the only way. I think that is what has led people astray in many problems, such as decision paradoxes like Newcomb’s problem, anthropics, and, to a degree, quantum interpretations.
More broadly, I am confused as to what claim you think that I am making which you disagree with.
What was the first thing you said I disagreed with?
“generated by a process makes that the ensuing map correlate with the territory.” In the world where we don’t have free will, but our beliefs are produced deterministically by our observations and our internal architecture in a way such that they are correlated with the world, we have all the knowledge that we need.
I disagree with all of that.
I disagree that the world is known to be deterministic.
I disagree that you can found epistemology on ontology. You don’t know that the mind-world relationship works in a certain way absent having an epistemology that says so.
I disagree that we have all the knowledge we want or need.
I disagree that correlation is sufficient to solve the problem.