If by “ought” claims you mean things we assign truth values that aren’t derivable from is-statements, then I agree that humans require such beliefs to function. Maybe we could describe choice of a universal Turing machine as such a belief for a Solomonoff inductor.
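To spell out the Solomonoff point (standard definitions only, nothing specific to this thread): the inductor’s prior is defined relative to a chosen universal prefix machine U, and nothing inside the framework adjudicates that choice.

```latex
% Solomonoff prior relative to a universal prefix machine U
% (the sum ranges over programs p whose output begins with x):
\[
  M_U(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\lvert p \rvert}
\]
% Invariance theorem: for any other universal machine V there is a
% constant c_{UV} (the length of a program by which U simulates V) with
\[
  M_U(x) \;\ge\; 2^{-c_{UV}}\, M_V(x) \quad \text{for all } x,
\]
% so the prior is pinned down only up to a machine-dependent constant
% that the inductor itself cannot argue for.
```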
If by “ought” statements you mean the universally compelling truths of moral realism, then no, it seems straightforward to produce counterexample thinkers that would not be compelled. As far as I can tell, the things you’re talking about don’t even set a specific course of action for the thing believing them; they have no necessary function beyond the epistemic.
I think there’s some dangerous reasoning here around the idea of “why.” If I believe that a plate is on the table, I don’t need to know anything at all about my visual cortex to believe that. The explanation is not a part of the belief, nor is it inseparably attached, nor is it necessary for having the belief, it’s a human thing that we call an explanation in light of fulfilling a human desire for a story about what is being explained.
I don’t mean either of those. I mean things that are compelling to reasoning agents. There are also non-reasoning agents that don’t find these compelling. These non-reasoning agents don’t make justified “is” claims.
The oughts don’t overdetermine the course of action but do place constraints on it.
If you believe the sense of a plate being there comes from your visual cortex and also that your visual cortex isn’t functioning in presenting you with accurate information, then you should reconsider your beliefs.
Have you set up your definitions in such a way that a system can use language to coordinate with allies even in highly abstract situations, but you would rule it out as “actually making claims” depending on whether you felt it was persuadable by the right arguments? In this case, you are right by definition.
Re: visual cortex, the most important point is that knowledge of my visual cortex, “ought”-type or not, is not necessary. People believed things just fine 200 years ago. Second, I don’t like the language that my visual cortex “passes information to me.” It is a part of me. There is no little homunculus in my head getting telegraph signals from the cortices; it’s just a bunch of brain in there.
Have you set up your definitions in such a way that a system can use language to coordinate with allies even in highly abstract situations, but you would rule it out as “actually making claims” depending on whether you felt it was persuadable by the right arguments?
Any sufficiently smart agent that makes mathematical claims about integers must be persuadable that 1+1=2, otherwise it isn’t really making mathematical claims / smart / etc. (It can lie about believing 1+1=2, of course)
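As a minimal sketch (just the usual Peano definitions, not anything from the post): once an agent accepts the standard definitions of 1, 2, and +, the claim follows by unfolding them.

```latex
% With 1 := S(0), 2 := S(S(0)), and addition defined by
%   a + 0 = a  and  a + S(b) = S(a + b):
\[
  1 + 1 \;=\; 1 + S(0) \;=\; S(1 + 0) \;=\; S(1) \;=\; S(S(0)) \;=\; 2 .
\]
```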
That is the sense in which I mean that any agent with a sufficiently rich internal justificatory structure of ‘is’ claims, which makes ‘is’ claims, accepts at least some ‘ought’s. (This is the conclusion of the argument in this post, which you haven’t directly responded to.)
It’s possible to use language to coordinate in abstract situations with only rudimentary logical reasoning, so that isn’t a sufficient condition.
I guess I’m just still not sure what you expect the oughts to be doing.
Is the sort of behavior you’re thinking of like “I ought not to be inconsistent” being one of your “oughts,” and leading to various epistemological actions to avoid inconsistency? This seems to me plausible, but it also seems to be almost entirely packed into how we usually define “rational” or “rich internal justificatory structure” or “sufficiently smart.”
One could easily construct a competent system that did not represent its own consistency, or represented it but took certain actions that systematically failed to preserve it. To which you would say “well, that’s not sufficiently reflective.” What we’d want, for this to be a good move, is for “reflective” (or “smart,” “rich structure,” “rational,” etc.) to be a simple thing that predicts the “oughts” neatly. But the “oughts” you describe seem to be running on a model of world-modeling / optimization that is more complicated than strictly necessary for an optimizer, and adding slightly more complication with each ought (though not as much as is required to specify each one separately).
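As an illustrative sketch (mine, not anything from the post) of how little self-representation competence requires: a bare hill climber optimizes its objective while representing nothing about itself, its consistency, or any norm beyond the score it is handed.

```python
import random

def hill_climb(score, start, neighbors, steps=1000):
    """Greedy local search. The system is a competent optimizer, yet nothing
    in it represents the system itself, its own consistency, or any norm
    beyond the raw score it is fed."""
    current = start
    for _ in range(steps):
        candidate = random.choice(neighbors(current))
        if score(candidate) >= score(current):
            current = candidate
    return current

# Toy usage: maximize -(x - 7)^2 over the integers, starting from 0.
best = hill_climb(
    score=lambda x: -(x - 7) ** 2,
    start=0,
    neighbors=lambda x: [x - 1, x + 1],
)
print(best)  # converges to 7 with overwhelming probability
```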
I think one of the reasons people are poking holes or bringing up non-”ought”-compliant agents is that we expect humans to sometimes be non-compliant too. This goes back to my question of whether every agent has some oughts, or whether every (sufficiently smart/rational/etc) agent would be impacted by every ought. If you give me a big list of oughts, I’ll give you a big list of ways humans violate them.
I thought at first that your post was about there being some beliefs with unusual properties, labeled “oughts,” that everyone has to have some of. But now I think you’re claiming that there is some big bundle of oughts that everyone (who is sufficiently X/Y/Z) has all of, and my response is that I’m totally unconvinced that X/Y/Z is in fact a neutral way of ranking systems we want to talk about with the language of epistemology.
I guess I’m just still not sure what you expect the oughts to be doing.
I was assuming that the point was that “oughts” and “ises” aren’t completely disjoint, as a crude understanding of the “is-ought divide” might suggest.
I think one of the reasons people are poking holes or bringing up non-”ought”-compliant agents is that we expect humans to sometimes be non-compliant too. This goes back to my question of whether every agent has some oughts, or whether every (sufficiently smart/rational/etc) agent would be impacted by every ought. If you give me a big list of oughts, I’ll give you a big list of ways humans violate them.
If you assume something like moral realism, so that there is some list of “oughts” that are categorical (they don’t relate to specific kinds of agents or specific situations), then it is likely that humans are violating most of them.
But moral realism is hard to justify.
On the other hand, given the premises that
- moral norms are just one kind of norm, and
- norms are always ways of performing a function or achieving an end,
then you can come up with a constructivist metaethics that avoids the pitfalls of nihilism, relativism and realism. (I think. No idea if that is what Jessicata is saying.)
If by “ought” statements you mean the universally compelling truths of moral realism,
“universally compelling” is setting the bar extremely high. To set it a bit more reasonably: there are moral facts if there is evidence or argument a rational agent would agree with.
Fair enough. But that “compelling” wasn’t so much about compelled agreement, and more about compelled action (“intrinsically motivating”, as they say). It’s impressive if all rational agents agree that murder is bad, but it doesn’t have the same oomph if this has no effect on their actions re: murder.
Compelled action is setting the bar much too high as well. We don’t expect humans to do the right thing on the basis of rational persuasion alone; we also have punishments and rewards.
If by “ought” claims you mean things we assign truth values that aren’t derivable from is-statements [...] If by “ought” statements you mean the universally compelling truths of moral realism [...]
There’s a third way of thinking where norms are just rules for achieving a certain kind of result optimally or at least reliably.
If I believe that a plate is on the table, I don’t need to know anything at all about my visual cortex to believe that.
Nonetheless, your visual cortex must do certain things reliably for you to be able to perceive.
Re: visual cortex, the most important point is that knowledge of my visual cortex, “ought”-type or not, is not necessary.
Yes, but as far as I can tell you believe your percepts are generated by your visual cortex, so the argument applies to you.