Okay, so you think that the only class of statements that are well-defined and useful but fail to refer is the class of normative statements? Why are they special in this regard?
Because a positive (“is”) statement + a normative (“ought”) statement is enough information to determine an action, and once actions are determined you don’t need further information.
“information” may not be the right word.
What do you mean by this? Do you mean that a categorical-ought statement is a statement of support as in “I support preference-ordering X”, as opposed to a statement about support as in “preference-ordering X is ‘good’ if ‘good’ is defined as ‘maximizes Y’”?
I believe “I ought to do X” if and only if I support preference-ordering X.
What do you mean by ‘preference order’ such that no fact can imply a preference order? I’m thinking of a preference order as a brain state, including parts of the preference ordering that are extrapolated from that brain state. Surely physical facts about that brain state and extrapolations from it imply (or entail, or whatever) the preference order...
I’m thinking of a preference order as just that: a map from the set of {states of the world} x {states of the world} to the set {>, =, <}. The brain state encodes a preference order but it does not constitute a preference order.
I believe “this preference order is correct” if and only if there is an encoding in my brain of this preference order.
Much like how:
I believe “this fact is true” if and only if there is an encoding in my brain of this fact.
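To make the encoding distinction concrete, here is a minimal sketch in Python (all of the names and the toy scoring are my own illustration, not anything fixed by this discussion): the preference order is the abstract map itself, while a brain state or bitstring only encodes it relative to some decoder.

```python
from typing import Callable, Literal

WorldState = str  # stand-in for a complete description of a world
Comparison = Literal[">", "=", "<"]

# A preference order is just the abstract map:
# {states of the world} x {states of the world} -> {>, =, <}
PreferenceOrder = Callable[[WorldState, WorldState], Comparison]

SCORES = {"everyone happy": 2, "status quo": 1, "everyone suffering": 0}

def prefers_more_happiness(a: WorldState, b: WorldState) -> Comparison:
    """One concrete preference order (purely illustrative)."""
    sa, sb = SCORES[a], SCORES[b]
    return ">" if sa > sb else "<" if sa < sb else "="

def decode(brain_state: bytes) -> PreferenceOrder:
    """A brain state merely ENCODES a preference order, and only
    relative to a decoder like this one that interprets it."""
    return {b"\x01": prefers_more_happiness}[brain_state]
```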
What if it’s encoded outside your brain, in a calculator for example, while your brain only knows that the calculator shows “28” on its display iff the fact is true? Or, say, I know that my computer contains a copy of “Understand” by Ted Chiang, even though I don’t remember its complete text. Finally, some parts of my brain don’t know what other parts of my brain know. The brain doesn’t hold a privileged position with respect to where the data must be encoded in order to be referred to; it can as easily point elsewhere.
Well, if I see the screen then there’s an encoding of “28” in my brain. Not of the reason why 28 is true, but at least that the answer is “28”.
You believe that “the computer contains a copy of Understand”, not “the computer contains a book with the following text: [text of Understand]”.
Obviously, on the level of detail in which the notion of “belief” starts breaking down, the notion of “belief” starts breaking down.
But still, it remains: when we say that I know a fact, the statement of that fact is encoded in my brain. Not the referent, not an argument for that statement, just: a statement.
Yet you might not know the question. “28” only certifies that the question makes a true statement.
Exactly. You don’t know [text of Understand], yet you can reason about it, and use it in your designs. You can copy it elsewhere, and you’ll know that it’s the same thing somewhere else, all without having an explicit definition of the text (or any definition at all), only diverse intuitions describing its various aspects and tools for performing operations on it. You can get an md5 sum of the text, for example, and make a decision depending on its value, and you can rely on the fact that this is an md5 sum of exactly the text of “Understand” and nothing else, even though you don’t know what the text of “Understand” is.
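As a minimal sketch of that md5 point, in Python (the file paths are hypothetical): you can take the digest of a text you have never read, and decide things with it, such as whether a copy elsewhere is exactly the same text.

```python
import hashlib

# md5 sum of a text we have never read (path is hypothetical).
with open("understand.txt", "rb") as f:
    digest = hashlib.md5(f.read()).hexdigest()

# Decide something based on the digest -- here, whether a copy
# elsewhere is exactly the same text -- without knowing the text.
with open("backup/understand.txt", "rb") as f:
    is_same_copy = hashlib.md5(f.read()).hexdigest() == digest

print(digest, is_same_copy)
```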
This sort of deep wisdom needs to be the enemy (it strikes me often enough). It acts as a curiosity-stopper, covering up the difficulty of understanding things more accurately. (What’s “just a statement”?)
In certain AI designs, this problem is trivial. In humans, this problem is not simple.
The complexities of the human version of this problem do not have relevance to anything in this overarching discussion (that I am aware of).
So you say. Many would say that you need the argument (proof, justification, evidence) for a true belief for it to qualify as knowledge.
Obviously, this doesn’t prevent me from saying that I know something without an argument.
You can say that you are the Queen of Sheba.
It remains the case that knowledge is not lucky guessing, so an argument, evidence or some other justification is required.
Yes, but this is completely and totally irrelevant to the point I was making, that:
I will profess that a statement, X, is true, if and only if “X” is encoded in a certain manner in my brain.
Yet “X is true” does not mean “X is encoded in this manner in my brain.”
Been really busy; will respond to this in about a week. I want to read your earlier discussion post first, too.
Encodings are relative to interpretations. Something has to decide that a particular fact encodes a particular other fact. And brains don’t have a fundamental role here, even if they might contain most of the available moral information, if you know how to get it.
The way in which decisions are judged to be right or wrong based on moral facts and facts about the world, where both are partly inferred with the use of empirical observations, doesn’t fundamentally distinguish the moral facts from the facts about the world, so it’s unclear how to draw a natural boundary that excludes non-moral facts without excluding moral facts also.
My ideas work unless it’s impossible to draw the other kind of boundary, one that includes only facts about the world and not moral facts.
Is it? If it’s impossible, why?
It’s the same boundary, just the other side. If you can learn of moral facts by observing things, that is, if your knowledge refers to a joint description of moral and physical facts (the state of your brain, say, as the physical counterpart), so that your understanding of moral facts benefits from better knowledge and further observation of physical facts, then you shouldn’t draw this boundary.
There is an asymmetry. We can only make physical observations, not moral observations.
This means that every state of knowledge about moral and physical facts maps to a state of knowledge about just physical facts, and the evolution of the second is determined only by evidence, with no reference to moral facts.
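As a toy sketch of that mapping (my own framing, with made-up names): project the joint belief state down to its physical part, and note that the update rule for that part consults only the evidence, never the moral component.

```python
from dataclasses import dataclass

Credences = dict[str, float]  # proposition -> degree of belief

@dataclass
class JointBeliefState:
    physical: Credences
    moral: Credences

def project(state: JointBeliefState) -> Credences:
    """Map a moral+physical state of knowledge to a just-physical one."""
    return state.physical

def update_physical(physical: Credences, evidence: str) -> Credences:
    """Toy evidence-driven update: the observed proposition becomes
    certain. Nothing here ever reads the moral component."""
    updated = dict(physical)
    updated[evidence] = 1.0
    return updated
```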
To the extent we haven’t defined what “moral observations” are exactly, so that the possibility isn’t ruled out in a clear sense, I’d say that we can make moral observations, in the same sense in which we can make arithmetical observations by looking at a calculator display or consulting one’s own understanding of mathematical facts maintained by the brain.
That is, by deducing mathematical facts from new physical facts.
Can you deduce physical facts from new moral facts?
Not necessarily; you can just use physical equipment without having any understanding of how it operates or what it is, and the only facts you reason about are non-physical (even though you interact with physical facts, without reasoning about them).
Why not?
Because your only sources of new facts are your senses.
You can’t infer new (to you) facts from information you already have? You can’t just be told things? A Martian, being told that premarital sex became less of an issue after the sixties, might be able to deduce the physical fact that contraceptive technology improved in the sixties.
I guess you could but you couldn’t be a perfect Bayesian.
Generally, when one is told something, one becomes aware of this from one’s senses, and then infers things from the physical fact that one is told.
I’m definitely not saying this right. The larger point I’m trying to make is that it makes sense to consider an agent’s physical beliefs and ignore their moral beliefs. That is a well-defined thing to do.
Where does it say that? One needs good information, but the senses can err, and hearsay can be reliable.
The senses are of course involved in acquiring second-hand information, but there is still a categorical difference between showing and telling.
In order to achieve what?
Simplicity, maybe?
A simple way of doing what?
Answering questions like “What are true beliefs? What is knowledge? How does science work?”
How can you answer questions about true moral beliefs whilst ignoring moral beliefs?
Well, that’s one of the things you can’t do whilst ignoring moral beliefs.
All the same comprehension of the state of the world remains accessible, including beliefs about “true morals”. They are simply considered to be physical facts about the construction of certain agents.
That’s an answer to the question “how do you deduce moral beliefs from physical facts”, not the question at hand: “how do you deduce moral beliefs from physical beliefs”.
Physical beliefs are constructed from physical facts. Just like everything else!
But the context of the discussion was what can be inferred from physical beliefs.
Also your thoughts and your reasoning, which are machinery for perceiving abstract facts, including moral facts.
How might one deduce new physical facts from new moral facts produced by abstract reasoning?
You can predict that (physical) human babies won’t be eaten too often. Or that a calculator will have a physical configuration displaying something that you inferred abstractly.
You can make those arguments in an entirely physical fashion. You don’t need the morality.
You do need the mathematical abstraction to bundle and unbundle physical facts.
You can use calculators without knowing abstract math too, but it makes sense to talk of mathematical facts independent of calculators.
But it also makes sense to talk about calculators without abstract math.
That’s all I’m saying.
I agree. But it’s probably not all that you’re saying, since this possibility doesn’t reveal problems with inferring physical facts from moral facts.
There is a mapping from physical+moral belief structures to just-physical belief structures.
Correct physical-moral deductions map to correct physical deductions.
The end physical beliefs are purely explained by the beginning physical beliefs + new physical observations.
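In notation of my own (not anything from this thread): write \pi for the map that forgets the moral component of a belief structure and U for updating on evidence e. The three claims then say that projection commutes with updating:

```latex
% \pi forgets the moral component of a joint belief structure b;
% U updates a belief structure on evidence e.
\pi : B_{\text{phys}+\text{moral}} \longrightarrow B_{\text{phys}}, \qquad
\pi\bigl(U_{\text{phys}+\text{moral}}(b, e)\bigr) = U_{\text{phys}}\bigl(\pi(b), e\bigr)
```

So the end physical beliefs depend only on \pi(b) and e, the beginning physical beliefs plus the observations.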
Meaning what? Are you saying you can get oughts from ises?
No, I’m saying you can distinguish oughts from ises.
I am saying that you can move from is to is to is and never touch upon oughts.
That you can solve all is-problems while ignoring oughts.