I am standing up for the usefulness and well-definedness of statements that fail to successfully refer.
Okay, we’re getting nearer to understanding each other, thanks. :)
Perhaps you could give an example of a non-normative statement that is well-defined and useful even though it fails to refer? Perhaps then I can grok better where you’re coming from.
Elsewhere, you said:
The problem is that the word “ought” has multiple definitions. You are observing that all the other definitions of ought are physically reducible. That puts them on the “is” side. But now there is a gap between hypothetical-ought-statements and categorical-ought-statements, and it’s just the same size as before. You can reduce the word “ought” in the following sentence: “If ‘ought’ means ‘popcorn’, then I am eating ought right now.” It doesn’t help.
Goodness, no. I’m not arguing that all translations of ‘ought’ are equally useful as long as they successfully refer!
But now you’re talking about something different than the is-ought gap. You’re talking about a gap between “hypothetical-ought-statements and categorical-ought-statements.” Could you describe the gap, please? ‘Categorical ought’ in particular leaves me with uncertainty about what you mean, because that term is used in a wide variety of ways by philosophers, many of them incoherent.
I genuinely appreciate you sticking this out with me. I know it’s taking time for us to understand each other, but I expect serious fruit to come of mutual understanding.
I don’t think any exist, so I could not do so.
I’m saying that the fact that you can use a word to have a meaning in class X does not provide much evidence that the other uses of that word have a meaning in class X.
Hypothetical-ought statements are a certain kind of statement about the physical world. They’re the kind that contain the word “ought”, but they’re just an arbitrary subset of the “is”-statements.
Categorical-ought statements are statements of support for a preference order. (not statements about support.)
Since no fact can imply a preference order, no is-statement can imply a categorical-ought-statement.
(Physical facts can inform you about what the right preference order is, if you expect that they are related to the moral facts.)
Perhaps the right thing to say is: “No fact can alone imply a preference order.”
But no fact can alone imply anything (in this sense); this point is not specific to moral values, and in any case it is a trivial, uninteresting point that is easily confused with a refutation of the statement I noted in the grandparent.
No fact alone can imply anything: true and important. For example, a description of my brain at the neuronal level does not imply that I’m awake. To get the implication, we need to add a definition (or at least some rule) of “awake” in neuronal terms. And this definition will not capture the meaning of “awake.” We could ask, “given that a brain is in such-and-such a neuronal state, is it awake?” and intuition will tell us that it is an open question.
But that is beside the point, if what we want to know is whether the definition succeeds. The definition does not have to capture the meaning of “awake”. It only needs to get the reference correct.
Reduction doesn’t typically involve capturing the meaning of the reduced terms. Is the (meta)ethical case special? If so, why and how?
Great question. It seems to me that normative ethics involves reducing the term “moral” without necessarily capturing the meaning, whereas metaethics involves capturing the meaning of the term. And the reason we want to capture the meaning is so that we know what it means to do normative ethics correctly (instead of just doing it by intuition, as we do now). It would also allow an AI to perform normative ethics (i.e., reduce “moral”) for us, instead of humans reducing the term and programming a specific normative ethical theory into the AI.
I doubt that metaethics can wholly capture the meaning of ethical terms, but I don’t see that as a problem. It can still shed light on issues of epistemics, ontology, semantics, etc. And if you want help from an AI, any reduction that gets the reference correct will do, regardless of whether meaning is captured. A reduction need not be a full-blown normative ethical theory. It just needs to imply one, when combined with other truths.
This is not a problem in the same sense that the astronomical waste occurring during the rest of this year is not a problem: it’s not possible to do anything about it.
(I agree with your comment.)
A formal logical definition often won’t capture the full meaning of a mathematical structure (there may be non-standard models of the logical theory, and true statements it won’t infer), yet it has the special power of allowing you to correctly infer lots of facts about that structure without knowing anything else about the intended meaning. If we are given just a little bit less, then the power to infer stuff gets reduced dramatically.
It’s important to get a definition of morality in a similar sense and for similar reasons: it won’t capture the whole thing, yet it must be good enough to generate right actions even in currently unimaginable contexts.
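As a standard illustration of that point (a textbook example, not something taken from the discussion itself): first-order Peano arithmetic licenses mechanical derivation of particular facts, yet by the compactness theorem it has non-standard models and, by Gödel’s incompleteness theorem, there are true statements it cannot infer.

```latex
% First-order Peano arithmetic: enough to derive concrete facts mechanically,
% but not enough to pin down the intended structure uniquely.
\begin{align*}
& \forall x\,\neg(S(x)=0) \\
& \forall x\,\forall y\,(S(x)=S(y)\rightarrow x=y) \\
& \forall x\,(x+0=x) \qquad \forall x\,\forall y\,(x+S(y)=S(x+y)) \\
& \forall x\,(x\cdot 0=0) \qquad \forall x\,\forall y\,(x\cdot S(y)=x\cdot y+x) \\
& \bigl(\varphi(0)\wedge\forall x\,(\varphi(x)\rightarrow\varphi(S(x)))\bigr)\rightarrow\forall x\,\varphi(x)
   && \text{(one induction axiom per formula $\varphi$)}
\end{align*}
% From these axioms one can derive, e.g., S(S(0)) + S(S(0)) = S(S(S(S(0)))),
% yet by compactness the theory also has non-standard models, and by Goedel's
% first incompleteness theorem there are true statements it cannot prove.
```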
Formal logic does seem very powerful, yet incomplete. Would you be willing to create an AI with such a limited understanding of math or morality (assuming we can formalize an understanding of morality on par with math), given that it could well obtain supervisory power over humanity? One might justify it by arguing that it’s better than the alternative of trying to achieve and capture fuller understanding, which would involve further delay and risk. See, for example, Tim Freeman’s argument along these lines, or my own.
Another alternative is to build an upload-based FAI instead, like Stuart Armstrong’s recent proposal. That is, use uploads as components in a larger system, with lots of safety checks. In a way Eliezer’s FAI ideas can also be seen as heavily upload based, since CEV can be interpreted (as you did before) as uploads with safety checks. (So the question I’m asking can be phrased as, instead of just punting normative ethics to CEV, why not punt all of meta-math, decision theory, meta-ethics, etc., to a CEV-like construct?)
Of course you’re probably just as unsure of these issues as I am, but I’m curious what your current thoughts are.
Humans are also incomplete in this sense. We already have no way of capturing the whole problem statement. The goal is to capture it as well as possible using some reflective trick of looking at our own brains or behavior, which is probably way better than what an upload singleton that doesn’t build a FAI is capable of.
If there are uploads, they could be handed the task of solving the problem of FAI in the same sense in which we try to, but this doesn’t get us any closer to the solution. There should probably be a charity dedicated to designing upload-based singletons as a kind of high-impact applied normative ethics effort (and SIAI might want to spawn one, since rational thinking about morality is important for this task; we don’t want fatalistic acceptance of a possible Malthusian dystopia or unchecked moral drift), but this is not the same problem as FAI.
Humans are at least capable of making some philosophical progress, and until we solve meta-philosophy, no de novo AI is. Assuming that we don’t solve meta-philosophy first, any de novo AIs we build will be more incomplete than humans. Do you agree?
It gets closer to the solution in the sense that there is no longer a time pressure, since it’s easier for an upload-singleton to ensure their own value stability, and they don’t have to worry about people building uFAIs and other existential risks while they work on FAI. They can afford to try harder to get to the right solution than we can.
There is a time pressure from existential risk (also, astronomical waste). Just as in the FAI vs. AGI race, we would have a race between FAI-building and AGI-building uploads (in the sense of “who runs first”, but also literally, while restricted by speed and costs). And fast-running uploads pose other risks as well; for example, they could form an unfriendly singleton without even solving AGI, or build runaway nanotech.
(Planning to make sure that we run a prepared upload FAI team before a singleton of any other nature can prevent it is an important contingency; someone should get on that in the coming decades, and better metaethical theory and rationality education can help with that task.)
I should have made myself clearer. What I meant was: assuming that an organization interested in building FAI can first achieve an upload-singleton, it won’t be facing competition from other uploads (since that’s what “singleton” means). It will be facing significantly less time pressure than a similar organization trying to build FAI directly. (Delay will still cause astronomical waste due to physical resources falling away into event horizons and the like, but that seems negligible compared to the existential risks that we face now.)
But this assumption is rather unlikely to hold and difficult to implement, so in the situation where we count on it, we’ve already lost a large portion of the future. Also, this course of action (unlikely to succeed as it is in any case) benefits significantly from massive funding to buy computational resources, which is a race. The other alternative, educating people in a way that increases the chances of a positive upload-driven outcome, is also a race: a race to develop a better understanding of metaethics/rationality and to educate more people better.
Philosophical progress is just a special kind of physical action that we can perform, valuable for abstract reasons that feed into what constitutes our values. I don’t see how this feature is fundamentally different from pointing to any other complicated aspect of human values and saying that AI must be able to make that distinction or destroy all value with its mining claws. Of course it must.
Agreed; however, it is somewhat useful for pointing out a specific, common type of bad argument.
Okay, so you think that the only class of statements that are well-defined and useful but fail to refer is the class of normative statements? Why are they special in this regard?
Agreed.
What do you mean by this? Do you mean that a categorical-ought statement is a statement of support as in “I support preference-ordering X”, as opposed to a statement about support as in “preference-ordering X is ‘good’ if ‘good’ is defined as ‘maximizes Y’”?
What do you mean by ‘preference order’ such that no fact can imply a preference order? I’m thinking of a preference order as a brain state, including parts of the preference ordering that are extrapolated from that brain state. Surely physical facts about that brain state and extrapolations from it imply (or entail, or whatever) the preference order...
Because a positive (“is”) statement + a normative (“ought”) statement is enough information to determine an action, and once actions are determined you don’t need further information.
“information” may not be the right word.
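A minimal sketch of that claim, under assumptions of my own (the toy world_model, prefers, and choose_action below are illustrative, not anything specified in the thread): once you have a factual model of what each action leads to and a preference order over outcomes, an action is determined.

```python
# "Is" component: a purely factual model of which outcome each action leads to.
world_model = {
    "donate": "fewer people suffer",
    "keep the money": "more personal comfort",
}

# "Ought" component: a preference order over outcomes, given as a comparison
# function that returns '>', '=', or '<' for a pair of world-states.
def prefers(a, b):
    ranking = {"fewer people suffer": 2, "more personal comfort": 1}
    return ">" if ranking[a] > ranking[b] else "<" if ranking[a] < ranking[b] else "="

def choose_action(world_model, prefers):
    """Together, the 'is' part and the 'ought' part pin down an action."""
    best = None
    for action, outcome in world_model.items():
        if best is None or prefers(outcome, world_model[best]) == ">":
            best = action
    return best

print(choose_action(world_model, prefers))  # -> donate
```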
I believe “I ought to do X” if and only if I support preference-ordering X.
I’m thinking of a preference order as just that: a map from {states of the world} × {states of the world} to {>, =, <}. The brain state encodes a preference order, but it does not constitute a preference order.
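Read literally, that definition can be written down as a type (a sketch; the names below are mine, not the commenter’s): the preference order is the comparison function itself, while a brain state is a different kind of object that at best encodes one.

```python
from typing import Callable, Literal

WorldState = str  # stand-in for a complete description of a state of the world
Comparison = Literal[">", "=", "<"]

# A preference order, taken literally: a map from pairs of world-states to {>, =, <}.
PreferenceOrder = Callable[[WorldState, WorldState], Comparison]

def example_order(a: WorldState, b: WorldState) -> Comparison:
    """Toy preference order ranking a few world-states."""
    score = {"everyone flourishes": 2, "status quo": 1, "everyone suffers": 0}
    return ">" if score[a] > score[b] else "<" if score[a] < score[b] else "="

# A brain state would be a separate physical object (say, a byte string) that
# merely *encodes* some PreferenceOrder under an interpretation; it is not one.
brain_state = b"\x01\x02\x03..."

print(example_order("everyone flourishes", "status quo"))  # -> >
```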
I believe “this preference order is correct” if and only if there is an encoding in my brain of this preference order.
Much like how:
I believe “this fact is true” if and only if there is an encoding in my brain of this fact.
I’ve continued our dialogue here.
What if it’s encoded outside your brain, in a calculator for example, while your brain only knows that the calculator shows “28” on its display iff the fact is true? Or, say, I know that my computer contains a copy of “Understand” by Ted Chiang, even though I don’t remember its complete text. Finally, some parts of my brain don’t know what other parts of my brain know. The brain doesn’t hold a privileged position with respect to where the data must be encoded in order to be referred to; it can just as easily point elsewhere.
Well, if I see the screen, then there’s an encoding of “28” in my brain. Not of the reason why 28 is true, but at least of the fact that the answer is “28”.
You believe that “the computer contains a copy of Understand”, not “the computer contains a book with the following text: [text of Understand]”.
Obviously, on the level of detail in which the notion of “belief” starts breaking down, the notion of “belief” starts breaking down.
But still, it remains: when we say that I know a fact, the statement of that fact is encoded in my brain. Not the referent, not an argument for that statement, just: a statement.
Yet you might not know the question. “28” only certifies that the question makes a true statement.
Exactly. You don’t know [text of Understand], yet you can reason about it and use it in your designs. You can copy it elsewhere, and you’ll know that it’s the same thing somewhere else, all without having an explicit definition of the text, or any definition at all, only diverse intuitions describing its various aspects and tools for performing operations on it. You can get an md5 sum of the text, for example, and make a decision depending on its value, and you can rely on the fact that this is an md5 sum of exactly the text of “Understand” and nothing else, even though you don’t know what the text of “Understand” is.
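To make the md5 point concrete, here is a small sketch (the file paths are hypothetical): you can compute and compare digests of a text you have never read, and act on the result, without ever representing the text itself.

```python
import hashlib

def md5_of_file(path: str) -> str:
    """Return the md5 digest of a file's contents without 'knowing' those contents."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths: two copies of a story the agent has never read.
local_digest = md5_of_file("understand_copy_on_my_computer.txt")
backup_digest = md5_of_file("understand_copy_on_backup_drive.txt")

# The agent can reliably decide "same text / different text" while remaining
# ignorant of what the text actually says.
if local_digest == backup_digest:
    print("The backup contains exactly the same text.")
else:
    print("The copies differ.")
```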
This sort of deep wisdom needs to be treated as the enemy (it strikes me often enough). It acts as a curiosity-stopper, covering up the difficulty of understanding things more accurately. (What’s “just a statement”?)
In certain AI designs, this problem is trivial. In humans, this problem is not simple.
The complexities of the human version of this problem do not have relevance to anything in this overarching discussion (that I am aware of).
So you say. Many would say that you need the argument (proof, justification, evidence) for a true belief for it to qualify as knowledge.
Obviously, this doesn’t prevent me from saying that I know something without an argument.
You can say that you are the Queen of Sheba.
It remains the case that knowledge is not lucky guessing, so an argument, evidence or some other justification is required.
Yes, but this is completely and totally irrelevant to the point I was making, that:
I will profess that a statement, X, is true, if and only if “X” is encoded in a certain manner in my brain.
Yet “X is true” does not mean “X is encoded in this manner in my brain.”
Been really busy; I will respond to this in about a week. I want to read your earlier discussion post first, too.
Encodings are relative to interpretations. Something has to decide that a particular fact encodes a particular other fact. And brains don’t have a fundamental role here, even if they might contain most of the available moral information, if you know how to get it.
The way in which decisions are judged to be right or wrong, based on moral facts and facts about the world (where both are partly inferred with the use of empirical observations), doesn’t fundamentally distinguish the moral facts from the facts about the world, so it’s unclear how to draw a natural boundary that excludes non-moral facts without also excluding moral facts.
My ideas work unless it’s impossible to draw the other kind of boundary, including only facts about the world and not moral facts.
Is it? If it’s impossible, why?
It’s the same boundary, just seen from the other side. If you can learn of moral facts by observing things, if your knowledge refers to a joint description of moral and physical facts (with the state of your brain, say, as the physical counterpart), and so your understanding of moral facts benefits from better knowledge and further observation of physical facts, then you shouldn’t draw this boundary.
There is an asymmetry. We can only make physical observations, not moral observations.
This means that every state of knowledge about moral and physical facts maps to a state of knowledge about just physical facts, and the evolution of the latter is determined only by evidence, with no reference to moral facts.
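One way to cash out that claim (a toy sketch under my own assumptions, not anything specified in the thread): if the likelihood of an observation depends only on the physical component of a joint (physical, moral) belief state, then the physical marginal after a Bayesian update is fully determined by the prior physical marginal and the evidence, with the moral component playing no role.

```python
# Toy joint belief state over (physical fact, moral fact) pairs.
joint = {
    ("babies are usually not eaten", "eating babies is wrong"): 0.45,
    ("babies are usually not eaten", "eating babies is fine"):  0.05,
    ("babies are often eaten",       "eating babies is wrong"): 0.25,
    ("babies are often eaten",       "eating babies is fine"):  0.25,
}

# Likelihood of an observation given ONLY the physical component (the assumed asymmetry).
def likelihood(observation, physical):
    table = {
        ("no baby-eating observed", "babies are usually not eaten"): 0.9,
        ("no baby-eating observed", "babies are often eaten"):       0.3,
    }
    return table[(observation, physical)]

def update(joint, observation):
    """Bayesian update of the joint (physical, moral) belief state on an observation."""
    unnorm = {k: p * likelihood(observation, k[0]) for k, p in joint.items()}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

def physical_marginal(joint):
    """Project the joint belief state onto physical facts only."""
    marg = {}
    for (physical, _moral), p in joint.items():
        marg[physical] = marg.get(physical, 0.0) + p
    return marg

def update_marginal(marg, observation):
    """Update the physical-only belief state directly, never mentioning moral facts."""
    unnorm = {k: p * likelihood(observation, k) for k, p in marg.items()}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

# The two routes agree (up to floating point): marginalize-then-update
# equals update-then-marginalize, so the physical beliefs evolve on their own.
print(physical_marginal(update(joint, "no baby-eating observed")))
print(update_marginal(physical_marginal(joint), "no baby-eating observed"))
```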
To the extent that we haven’t defined exactly what “moral observations” are, so that the possibility isn’t ruled out in a clear sense, I’d say that we can make moral observations, in the same sense in which we can make arithmetical observations by looking at a calculator display or by consulting our own understanding of mathematical facts maintained by the brain.
That is, by deducing mathematical facts from new physical facts.
Can you deduce physical facts from new moral facts?
Not necessarily; you can just use physical equipment without having any understanding of how it operates or what it is, and then the only facts you reason about are non-physical (even though you interact with physical facts, without reasoning about them).
Why not?
Because your only sources of new facts are your senses.
You can’t infer new (to you) facts from information you already have? You can’t just be told things? A Martian, being told that premarital sex became less of an issue after the sixties, might be able to deduce the physical fact that contraceptive technology improved in the sixties.
I guess you could, but you couldn’t be a perfect Bayesian.
Generally, when one is told something, one becomes aware of this from one’s senses, and then infers things from the physical fact that one is told.
I’m definitely not saying this right. The larger point I’m trying to make is that it makes sense to consider an agent’s physical beliefs and ignore their moral beliefs. That is a well-defined thing to do.
Where does it say that? One needs good information, but the senses can err, and hearsay can be reliable.
The senses are of course involved in acquiring second-hand information, but there is still a categorical difference between showing and telling.
In order to achieve what?
Simplicity, maybe?
A simple way of doing what?
Answering questions like “What are true beliefs? What is knowledge? How does science work?”
How can you answer questions about true moral beliefs whilst ignoring moral beliefs?
Well, that’s one of the things you can’t do whilst ignoring moral beliefs.
You keep all the same comprehension of the state of the world; beliefs about “true morals” remain accessible, but they are simply considered to be physical facts about the construction of certain agents.
That’s an answer to the question “how do you deduce moral beliefs from physical facts”, not the question at hand: “how do you deduce moral beliefs from physical beliefs”.
Physical beliefs are constructed from physical facts. Just like everything else!
But the context of the discussion was what can be inferred from physical beliefs.
Also your thoughts, your reasoning, which is machinery for perceiving abstract facts, including moral facts.
How might one deduce new physical facts from new moral facts produced by abstract reasoning?
You can predict that (physical) human babies won’t be eaten too often. Or that a calculator will have a physical configuration displaying something that you inferred abstractly.
You can make those arguments in an entirely physical fashion. You don’t need the morality.
You do need the mathematical abstraction to bundle and unbundle physical facts.
You can use calculators without knowing abstract math too, but it makes sense to talk of mathematical facts independent of calculators.
But it also makes sense to talk about calculators without abstract math.
That’s all I’m saying.
I agree. But it’s probably not all that you’re saying, since this possibility doesn’t reveal problems with inferring physical facts from moral facts.
There is a mapping from physical+moral belief structures to just-physical belief structures.
Correct physical-moral deductions map to correct physical deductions.
The end physical beliefs are purely explained by the beginning physical beliefs + new physical observations.
Meaning what? Are you saying you can get oughts from ises?
No, I’m saying you can distinguish oughts from ises.
I am saying that you can move from is to is to is and never touch upon oughts.
That you can solve all is-problems while ignoring oughts.