It might be ironic if you abuse the terms mind and territory in a way that just rehashes dualism rather than using them the way they were intended in Science and Sanity. There are more layers of abstraction here than just two.
>other than some set of predictions about what thermometers will show, or how fast water will boil/freeze, etc.
So you think the tree that falls in the forest without someone to hear it doesn’t meaningfully make a sound?
>The bias can be conceptualized as the thermometer consistently showing a lower degree than other thermometers.
Then you have to spend a lot of time thinking about what other thermometers you are talking about. You do get into problems in cases where the majority of measurements of a given thing share a measurement bias.
You are not going to reason well about a question like “Are Americans Becoming More Depressed?” if you treat the subject as not being about an underlying reality.
>So you think the tree that falls in the forest without someone to hear it doesn’t meaningfully make a sound?
Worse, I don’t think trees meaningfully fall in forests that nobody ever visits.
>You do get into problems in cases where the majority of measurements of a given thing share a measurement bias.
I don’t know that that’s meaningful. Measurement is a social construct. If every thermometer since they were first invented had a constant 1-degree bias, there wouldn’t be a bias; our scale would just be different. It’s as meaningless as shifting the entire universe one foot to the left. Who is to say that the majority is wrong and a minority is correct? And if there is some objective way to say that, then we can define the bias in terms of that objective way, for instance by defining it in relation to some particular thermometer declared to be perfect (not unlike how some units, such as the kilogram, were actually defined by a reference artifact for quite some time).
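A minimal sketch of that point, with made-up thermometer names, numbers, and offsets chosen purely for illustration: if every instrument shares the same constant offset, comparing them to one another reveals no bias at all; the offset only shows up as a “bias” once some reference is declared correct by fiat.

```python
import statistics

# Made-up numbers purely for illustration: the "true" temperatures exist only
# to construct the example; no thermometer in the story has access to them.
true_temps = [20.0, 21.5, 19.2, 22.3]
shared_offset = -1.0  # every thermometer reads exactly 1 degree low

readings = {
    f"thermometer_{i}": [t + shared_offset for t in true_temps]
    for i in range(3)
}

# Compare each thermometer against all the others: a bias shared by every
# instrument is invisible here.
for name, values in readings.items():
    others = [v for other, vs in readings.items() if other != name for v in vs]
    relative_bias = statistics.mean(values) - statistics.mean(others)
    print(name, "vs. the other thermometers:", round(relative_bias, 3))  # 0.0

# Only against a reference declared correct by fiat (a "perfect" thermometer,
# or the true values themselves) does the shared offset appear as a bias.
absolute_bias = statistics.mean(readings["thermometer_0"]) - statistics.mean(true_temps)
print("vs. the declared reference:", absolute_bias)  # -1.0
```

Redefining the scale so that the shared readings are the scale changes nothing observable, which is the sense in which it would “just be different” rather than biased.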
>You are not going to reason well about a question like “Are Americans Becoming More Depressed?” if you treat the subject as not being about an underlying reality.
I mean, surely you see how questions like that might not be terribly meaningful until you operationalize them somehow? And as I’ve said, my theory does not differ in predictive ability, so if I’m reasoning worse in some respect but I get all the same predictions, what’s wrong?