It seems like you are arguing both that “reality exists” is false and that it’s meaningless. It’s worth keeping those apart.
When it comes to the meaningfulness of terms, EY wrote more about truth than about reality. He defends the usefulness of the term “truth” by asking:
>If we were dealing with an Artificial Intelligence that never had to argue politics with anyone, would it ever need a word or a concept for ‘truth’?
He finds that he has good reasons to answer yes. In a similar regard, it might be useful to give an AI the notion that there’s a reality outside of itself that’s distinct from anything the AI knows and not directly accessible to it. It’s not necessary for the existence of reality to be verifiable by the AI for it to be a good idea for the creator of the AI, who knows something about reality, to teach the AI about it.
Yes, I noted throughout the post that those are distinct. In the EY posts I’m responding to, he argues against both of those views separately. I am not arguing that the claim is false; I’m only arguing that it’s meaningless. Not sure which part you think is arguing that it’s false; I can edit to clarify?
Re truth, I went back and reread the post you’re quoting from (https://www.lesswrong.com/s/SqFbMbtxGybdS2gRs/p/XqvnWFtRD2keJdwjX).
>And so saying ‘I believe the sky is blue, and that’s true!’ typically conveys the same information as ‘I believe the sky is blue’ or just saying ‘The sky is blue’ - namely, that your mental model of the world contains a blue sky.
I agree with this. My own argument is that saying “my map contains a spaceship, and that spaceship actually exists” conveys the exact same information as “my map contains a spaceship”, and the second half of the statement is meaningless.
He argues that truth is a useful concept, which is not the same as arguing that it’s meaningful. His example AI could just as well have a concept of the expected and observed errors of its various maps without having a concept of an external reality. I think all of the ideas he claims “truth” is required to express can be reformulated in a way that doesn’t entail reality existing, and remain just as useful. Solomonoff induction works without having a notion of “some maps are real.”
Other than that, he repeats arguments in The Simple Truth and the other posts that I already responded to. You don’t need to talk about an external reality in order to meaningfully talk about errors in your maps. You can just talk about the maps.
>He argues that truth is a useful concept, which is not the same as arguing that it’s meaningful.
The term “meaningful” doesn’t appear in the post about beliefs paying rent. It’s unclear to me why it should be more important than whether a concept is useful for the base level.
>His example AI could just as well have a concept of the expected and observed errors of its various maps without having a concept of an external reality.
Actors that try to minimize observed errors, instead of aligning with external reality, can easily minimize that error by introducing a bias into their observations that makes the observations differ from reality. It’s useful to have the concept that such biases exist.
Sorry, missed this comment earlier.
>The term “meaningful” doesn’t appear in the post about beliefs paying rent. It’s unclear to me why it should be more important than whether a concept is useful for the base level.
If a concept is meaningless, but you continue to use it because it makes your mental processes simpler or is similarly useful, that’s a state of affairs that should be distinguished from the concept being meaningful. I’m not comparing importance here.
>Actors that try to minimize observed errors, instead of aligning with external reality, can easily minimize that error by introducing a bias into their observations that makes the observations differ from reality. It’s useful to have the concept that such biases exist.
I’m not sure what you mean by this bias? Can you give an example?
If it’s 24 degrees C in reality and my thermometer reads 22 degrees C, then the thermometer has a bias.
It’s ironic you picked that example, because temperature is explicitly socially constructed. There are a handful of different definitions, but they’re all going to trace back to some particular set of observations.
Anyway, I’m interpreting your statement as saying that some other set of thermometers will show 24 degrees C. I don’t know what you mean by “it’s 24 degrees C in reality”, other than some set of predictions about what thermometers will show, or how fast water will boil/freeze, etc.
The bias can be conceptualized as the thermometer consistently reading lower than other thermometers. Why is a concept of “reality” useful, above and beyond that?
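To make that concrete, here is a minimal sketch (made-up readings and thermometer names, purely illustrative) of a bias defined only relative to other thermometers, with no “true temperature” variable anywhere:

```python
# Made-up readings of the same room from three thermometers, in degrees C.
readings = {
    "A": [22.1, 21.9, 22.0],
    "B": [24.0, 23.9, 24.1],
    "C": [24.1, 24.0, 23.9],
}

def mean(xs):
    return sum(xs) / len(xs)

def relative_bias(name):
    """Average deviation of one thermometer from the others' average.

    No 'true temperature' appears anywhere: the bias is defined purely
    by comparison with the other thermometers (the other maps).
    """
    others = [mean(v) for k, v in readings.items() if k != name]
    return mean(readings[name]) - mean(others)

print(relative_bias("A"))  # about -2.0: A consistently reads low
```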
It might be ironic if you abuse the terms map and territory in a way that just rehashes dualism instead of using them the way they were intended in Science and Sanity. There are more layers of abstraction here than just two.
>other than some set of predictions about what thermometers will show, or how fast water will boil/freeze, etc.
So you think the tree that falls in the forest without someone to hear it doesn’t meaningfully make a sound?
>The bias can be conceptualized as the thermometer consistently reading lower than other thermometers.
Then you have to spend a lot of time thinking about what other thermometers you are talking about. You do get into problems in cases where the majority of measurements of a given thing share a measurement bias.
You are not going to reason well about a question like “Are Americans Becoming More Depressed?” if you treat the subject as not being about an underlying reality.
>So you think the tree that falls in the forest without someone to hear it doesn’t meaningfully make a sound?
Worse, I don’t think trees meaningfully fall in forests that nobody ever visits.
>You do get into problems in cases where the majority of measurements of a given thing share a measurement bias.
I don’t know that that’s meaningful. Measurement is a social construct. If every thermometer since thermometers were first invented had a constant 1-degree bias, there wouldn’t be a bias; our scale would just be different. It’s as meaningless as shifting the entire universe one foot to the left. Who is to say that the majority is wrong and a minority is correct? And if there is some objective way to say that, then we can define the bias in terms of that objective way, for example by defining it in relation to some particular thermometer that’s declared to be perfect (not unlike how some units of measurement actually were defined for some time).
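As a toy illustration of that point (made-up numbers, just a sketch): add the same constant to every reading ever taken, and every comparison that could ever be checked comes out identical.

```python
# Made-up readings; then pretend every thermometer ever built shared a 1-degree bias.
readings = [21.5, 22.0, 24.0, 24.5]
shifted = [r + 1.0 for r in readings]

# Every pairwise difference (and hence every relative-bias estimate, every
# "which reading was warmer?" comparison) is identical under the shared offset.
diffs = [b - a for a, b in zip(readings, readings[1:])]
shifted_diffs = [b - a for a, b in zip(shifted, shifted[1:])]
assert diffs == shifted_diffs  # the universal offset is invisible from inside the readings
```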
>You are not going to reason well about a question like “Are Americans Becoming More Depressed?” if you treat the subject as not being about an underlying reality.
I mean, surely you see how questions like that might not be terribly meaningful until you operationalize them somehow? And as I’ve said, my theory does not differ in predictive ability, so if I’m reasoning worse in some respect but I get all the same predictions, what’s wrong?
>If a concept is meaningless, but you continue to use it because it makes your mental processes simpler or is similarly useful, that’s a state of affairs that should be distinguished from the concept being meaningful.
Or is useful for communicating ideas to other people... but, hang on, how can a meaningless concept be useful for communication? That just breaks the ordinary meaning of “meaning”.
EY argues that a particular literary claim can be meaningless, and yet you can still have a test and get graded based on your knowledge of that statement. This is in the posts that I linked at the very top of my post. Do you disagree with that claim, i.e. do you think those claims are actually meaningful?
That sort of thing is testable. You get a bunch of literature professors in a Septuagint scenario, where they have to independently classify books as pre- or post-colonial. Why would that be impossible? It’s evidently possible to perform an easier version of the test where books are classified as romance, horror, or Western.
That would be evidence. EY’s personal opinion is not evidence, nor is yours.
Certainly if you get a bunch of physicists in a room they will disagree about what entities are real. So according to your proposed test, physics isn’t real?
As I have explained, the argument for realism in general is not based on a particular theory being realistically true.
You ignored my question.
There’s a huge difference between saying that a particular literary claim can be meaningless and saying that all of those claims are meaningless.
You can take the Sokal episode as an argument that if someone who doesn’t have the expert knowledge can easily pass as an expert, then those experts don’t seem to have a lot of meaningful knowledge.
Different claims in that tradition are going to have a different status.
I’m not saying that all literary claims are meaningless. I’m saying all ontological claims are meaningless. Regardless, I’m responding to a comment which implied meaningless concepts cannot be useful for communication.