I do think the positivists went too far. They failed to realise that we can make predictions about things which we can never test. We can never evaluate these predictions, and we can never update our models on the basis of them, but we can still make them in the same way as we make any other predictions.
For example, consider the claim “a pink rhinoceros rides a unicycle around the Andromeda galaxy; he travels much faster than light, and so completes a whole circuit of the galaxy every 42 hours. He is, of course, far too small for our telescopes to see.”
The positivist says “meaningless!”
I say “meaningful, very high probability of being false”
Another thing they shouldn’t have dismissed is counterfactuals. As Pearl showed, questions about counterfactuals can be reduced to Bayesian questions of fact.
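To make the Pearl point concrete, here is a minimal sketch of his three-step counterfactual recipe (abduction, action, prediction) on a toy structural causal model. The model, the numbers, and the variable names are illustrative assumptions of mine, not anything from Pearl’s own examples:

```python
# A toy structural causal model: Y := 2*X + U, where U is an exogenous
# background variable summarising everything not modelled explicitly.
def structural_model(x, u):
    return 2.0 * x + u

# Observed fact: X = 1.0 and Y = 2.5.
x_observed, y_observed = 1.0, 2.5

# Step 1, abduction: infer the exogenous term from the observed fact.
# With a deterministic equation this is exact; in general it is a
# posterior P(U | X=x, Y=y), i.e. an ordinary Bayesian update.
u_inferred = y_observed - 2.0 * x_observed  # U = Y - 2X = 0.5

# Step 2, action: override X with the counterfactual value ("had X been 0").
x_counterfactual = 0.0

# Step 3, prediction: push the inferred U through the modified model.
y_counterfactual = structural_model(x_counterfactual, u_inferred)

print(y_counterfactual)  # 0.5: what Y would have been, had X been 0
```

The point is that step 1 is an ordinary Bayesian inference about a matter of fact; the counterfactual itself demands no new kind of evidence.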
“Part of the business of science is to create successively better explanations of the world we live in. What its nature is, and so on.”

I sympathise with this. To some extent I may have been exaggerating my own position in my last few posts; that happens to me occasionally. I do think that predictions are the only way of entangling your beliefs with reality, of creating a state of the world in which what you believe is causally affected by what is true. Without that you have no way to attain a map that reflects the territory, and any epistemology which claims you do is guilty of making things up.
I do not agree with this assertion.
Some things I note about it:
1) it isn’t phrased as a prediction
2) it isn’t phrased as an argument based on empirical evidence
Would you like to try rewriting it more carefully?
1) It can be phrased as a prediction: “I predict that if someone had no way to evaluate their predictions against evidence, they would have no way of attaining a map that reflects the territory. They would have no way of attaining a belief-set that works better in this world than in the average of all possible worlds.”
2) It is a mathematical statement, or at any rate the logical implication of a mathematical statement, and thus is probably true in all possible worlds, so I am not trying to entangle it with the territory.
If Y can be phrased as a prediction, it does not follow that Y is the predictive content of X. Do you understand?
I understand, but disagree. The point I have been trying to make is that it does.
My original claim was that an agent’s outcome is determined solely by that agent’s predictions and the external world in which that agent lives. If you define a theory so that its predictive content is a strict subset of all the predictions which can be derived from it, then yes, its predictive content is not all that matters; the other predictions matter as well.
It nonetheless remains the case that what happens to an agent is determined by that agent’s predictions. You need to understand that theories are not fundamentally Bayesian concepts, so it is much better to argue Bayes at either the statement level or the agent level than at the theory level.
In addition, I think our debate is starting to annoy everyone else here. There have been times when the entire recent-comments bar has been filled with comments from one of us, which is considered bad form.
Could we continue this somewhere else?
Yes. I PMed you yesterday. Did you get it?