So, is he defending one of these positions, or arguing against them all? Or saying the whole debate is pointless?
From what I read, he seems to be suggesting that truth is independent of what we believe, but I’m not sure what else he is saying, or what his argument is.
Here are the main points I understood:
The only way you can be sure your mental map accurately represents reality is by allowing a reality-controlled process to draw your mental map.
A sheep-activated pebble-tosser is a reality-controlled process that makes accurate bucket numbers (see the sketch after this list).
The human eye is a reality-controlled process that makes accurate visual cortex images.
Natural human patterns of thought like essentialism and magical thinking are NOT reality-controlled processes and they don’t draw accurate mental maps.
Each part of your mental map is called a “belief”. The parts of your mental map that portray reality accurately are called “true beliefs”.
Q: How do you know there is such a thing as “reality”, and your mental map isn’t all there is?
A: Because sometimes your mental map leads you to make confident predictions, and they still get violated, and the prediction-violating thingy deserves its own name: reality.
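To make the “reality-controlled process” point concrete, here is a minimal toy sketch of my own (the function names and numbers are invented, not from the essay): the bucket count ends up accurate because each pebble is caused by a sheep passing the gate, while a count taken from the shepherd’s hunch has no such link to the sheep at all.

```python
import random

def pebble_bucket(sheep_passing_gate):
    """A reality-controlled process: one pebble per sheep that passes the gate.
    The count is driven by the sheep themselves, not by anyone's opinion."""
    bucket = 0
    for _sheep in sheep_passing_gate:
        bucket += 1  # a sheep passes -> a pebble goes into the bucket
    return bucket

def shepherds_hunch(guess):
    """Not reality-controlled: the number comes from a belief,
    so nothing about the sheep constrains it."""
    return guess

actual_sheep = random.randint(0, 20)               # the territory
bucket_count = pebble_bucket(range(actual_sheep))  # the reality-controlled map

print(bucket_count == actual_sheep)        # always True
print(shepherds_hunch(7) == actual_sheep)  # true only by luck
```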
Thanks, that helps a lot.
Everything is reality, so that is a distinction that doesn’t make a difference. All illusions and errors are produced by real processes. (Or is “reality” being used to mean “external reality”?)
Sometimes. But being reality-controlled isn’t a good criterion for when, since it is never false.
They are performed by real brains. If “reality-controlled” just means producing the right results, the whole argument is circular.
Why is it important to taboo the words “accurate”, “correct”, “represent”, “reflect”, “semantic”, “believe”, “knowledge”, “map”, or “real”, but not the word “correspond”?
By “reality-controlled”, I don’t just mean “external reality”, I mean the part of external reality that your belief claims to be about.
Understanding truth in terms of “correspondence” brings me noticeably closer to coding up an intelligent reasoner from scratch than those other words (see the sketch below).
The simple truth is that brains are like maps, and true-ness of beliefs about reality is analogous to accuracy of maps about territory. This sounds super obvious, which is why Eliezer called it “The Simple Truth”. But it runs counter to a lot of bad philosophical thinking, which is why Eliezer bothered writing it.
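Here is roughly what I mean by “closer to code”, as a minimal sketch (the dictionaries and the is_true helper are names I’m inventing for illustration, not any real API): a belief is an entry in the agent’s map, and calling it “true” is just checking that entry against the part of the territory it claims to be about.

```python
# Toy sketch: truth as correspondence between map and territory.
# "territory", "agent_map", and "is_true" are invented names, not a real API.

territory = {"sheep_in_fold": 12, "gate_open": False}  # the world as it is
agent_map = {"sheep_in_fold": 12, "gate_open": True}   # the agent's beliefs

def is_true(belief_key, belief_value, world):
    """A belief is 'true' iff the part of the world it claims to be about
    is actually the way the belief says it is."""
    return world.get(belief_key) == belief_value

for key, value in agent_map.items():
    print(key, is_true(key, value, territory))
# sheep_in_fold True   -> an accurate patch of the map (a true belief)
# gate_open False      -> map disagrees with territory (a false belief)
```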
If the correspondence theory cannot handle maths or morals, you will end up with a reasoner that cannot handle maths or morals.
You need to show that that simple theory also deals with the hard cases... because EY didn’t.
It’s a piece of bad thinking that runs counter to philosophy. You don’t show that something works in all cases by pointing out, however loudly or exasperatedly, that it works in the easy cases, where it is already well known to work.
Seems like first you objected that TST’s lesson is meaningless, and now you’re objecting that it’s meaningful but limited and wrong. Worth noting that this isn’t a back-and-forth argument about the same objection.
The rest of LW’s epistemology sequences and meta-morality sequences explain why the foundations in TST also help understand math and morals.
I’ve read them, and, no, not really.
I think you can somewhat rescue the correspondence theory for math by combining claims like “this math is true” with claims like “this part of reality is well modeled by this math” to create factual claims. That approach should be enough for decision making. And you can mostly rescue the correspondence theory for morals by translating claims like “X is better than Y” into factual claims like “my algorithm prefers X to Y”, since we have some idea of how algorithms might have preferences (to the extent they approximate vNM or something). I agree that both areas have unsolved mysteries, though.
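For the morals half, here is the kind of translation I have in mind, as a toy sketch (the utility numbers and names are invented for illustration; a vNM-style agent would derive something like them from its preferences over lotteries): “X is better than Y” becomes a checkable factual claim about one particular algorithm’s preference ordering.

```python
# Toy sketch: "X is better than Y" read as a factual claim about an algorithm.
# The utility values are invented; this is not any real decision-theory API.

utility = {"keep_promise": 1.0, "break_promise": -2.0, "do_nothing": 0.0}

def prefers(agent_utility, x, y):
    """Factual claim: this particular algorithm ranks x above y."""
    return agent_utility[x] > agent_utility[y]

print(prefers(utility, "keep_promise", "break_promise"))  # True, for THIS agent
# Nothing here resolves a conflict between two agents whose utility functions
# disagree, which is the harder part of the moral case.
```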
Then you would have some other form of truth in place before you started considering what true maths corresponds to. Which may be what you mean by “somewhat rescue”... embed correspondence as one component of a complex theory. But no one is really saying that correspondence is 100% wrong; the debate is more about whether a simple theory covers all cases.
Why should I go to jail for going against your preferences… why not the other way round? Getting some sort of naturalised “should” or “ought” out of preferences is the easy bit. What you need in order to solve morality, to handle specifically moral oughts, is a way to resolve conflicts between the preferences of individuals.
I don’t see what your central point is. Is it “The lessons that the author is attempting to teach in The Simple Truth are not positive contributions to add to one’s philosophy”?
Without having your own central point, it’s easy to just argue against each of my statements individually because they all have caveats.