I don’t mean either of those. I mean things that are compelling to reasoning agents. There are also non-reasoning agents that don’t find these compelling; these non-reasoning agents don’t make justified “is” claims.
The oughts don’t overdetermine the course of action but do place constraints on it.
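As a toy sketch of the “constraining without overdetermining” idea (the setup, names, and candidate options below are my own illustrative assumptions, not anything from the post), oughts can be modeled as filters over candidate options: they rule some out while leaving more than one admissible.

```python
# Toy sketch: epistemic "oughts" as constraints that filter candidate
# doxastic policies. The constraints rule some options out, but more
# than one admissible option remains, so nothing is overdetermined.

candidate_policies = [
    "believe_plate",
    "suspend_judgment",
    "deny_plate_despite_percept",
    "believe_contradiction",
]

def consistent(policy: str) -> bool:
    # "I ought not to hold jointly inconsistent beliefs."
    return policy != "believe_contradiction"

def evidence_responsive(policy: str) -> bool:
    # "I ought not to flatly deny what my percepts support, absent a defeater."
    return policy != "deny_plate_despite_percept"

oughts = [consistent, evidence_responsive]

admissible = [p for p in candidate_policies if all(ought(p) for ought in oughts)]
print(admissible)  # ['believe_plate', 'suspend_judgment'] -- still a choice left
```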
If you believe that the sense of a plate being there comes from your visual cortex, and also that your visual cortex isn’t presenting you with accurate information, then you should reconsider your beliefs.
Have you set up your definitions in such a way that a system can use language to coordinate with allies even in highly abstract situations, but you would rule it in or out as “actually making claims” depending on whether you felt it was persuadable by the right arguments? In that case, you are right by definition.
Re: the visual cortex, the most important point is that knowledge of my visual cortex, “ought”-type or not, is not necessary. People believed things just fine 200 years ago. Second, I don’t like the language that my visual cortex “passes information to me.” It is a part of me. There is no little homunculus in my head getting telegraph signals from the cortices; it’s just a bunch of brain in there.
Yes, but as far as I can tell you believe your percepts are generated by your visual cortex, so the argument applies to you.
Any sufficiently smart agent that makes mathematical claims about integers must be persuadable that 1+1=2; otherwise it isn’t really making mathematical claims / smart / etc. (It can lie about believing 1+1=2, of course.)
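As a minimal worked instance of that kind of claim (a Lean sketch of my own, chosen purely for illustration and not drawn from the post): with the standard definition of the natural numbers, both sides of the equation compute to the same numeral, so an agent that accepts those definitions can in principle be walked through the derivation.

```lean
-- Minimal machine-checkable version of the claim: under the usual
-- definition of Nat and its addition, 1 + 1 and 2 reduce to the same
-- value, so the equation holds by reflexivity.
example : 1 + 1 = 2 := rfl
```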
That is the sense in which I mean it: any agent that makes “is” claims, and has a sufficiently rich internal justificatory structure of “is” claims, accepts at least some “oughts.” (This is the conclusion of the argument in this post, which you haven’t directly responded to.)
It’s possible to use language to coordinate in abstract situations with only rudimentary logical reasoning, so that isn’t a sufficient condition.
I guess I’m just still not sure what you expect the oughts to be doing.
Is the sort of behavior you’re thinking of something like “I ought not to be inconsistent” being one of your “oughts,” and leading to various epistemological actions to avoid inconsistency? This seems plausible to me, but it also seems to be almost entirely packed into how we usually define “rational” or “rich internal justificatory structure” or “sufficiently smart.”
One could easily construct a competent system that did not represent its own consistency, or that represented it but took actions that systematically failed to avoid inconsistency. To which you would say, “well, that’s not sufficiently reflective.” What we’d want, for this to be a good move, is for “reflective” (or “smart,” “rich structure,” “rational,” etc.) to be a simple thing that predicts the “oughts” neatly. But the “oughts” you describe seem to be running on a model of world-modeling / optimization that is more complicated than strictly necessary for an optimizer, and that adds slightly more complication with each ought (though not as much as would be required to specify each one separately).
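One minimal sketch of such a system (a toy construction of my own, with a made-up objective and parameters): a hill climber that optimizes competently while nothing in its state represents, checks, or enforces its own consistency.

```python
# Toy sketch: a "competent" optimizer with no self-model at all.
# It reliably climbs toward the optimum, yet it never represents its
# own consistency, let alone takes actions to preserve it.
import random

def objective(x: float) -> float:
    return -(x - 3.0) ** 2  # maximized at x = 3

def hill_climb(steps: int = 1000, step_size: float = 0.1) -> float:
    x = random.uniform(-10.0, 10.0)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):  # greedy local comparison only
            x = candidate
    return x

print(hill_climb())  # converges near 3.0 with no self-model in sight
```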
I think one of the reasons people are poking holes or bringing up non-“ought”-compliant agents is that we expect humans to sometimes be non-compliant too. This goes back to my question of whether every agent has some oughts, or whether every (sufficiently smart/rational/etc) agent would be impacted by every ought. If you give me a big list of oughts, I’ll give you a big list of ways humans violate them.
I thought at first that your post was about there being some beliefs with unusual properties, labeled “oughts,” that everyone has to have some of. But now I think you’re claiming that there is some big bundle of oughts that everyone (who is sufficiently X/Y/Z) has all of, and my response is that I’m totally unconvinced that X/Y/Z is in fact a neutral way of ranking systems we want to talk about with the language of epistemology.
I was assuming that the point was that “oughts” and “ises” aren’t completely disjoint, as a crude understanding of the “is-ought divide” might suggest.
If you assume something like moral realism, on which there is some list of “oughts” that are categorical, in that they don’t relate to specific kinds of agents or specific situations, then it is likely that humans are violating most of them.
But moral realism is hard to justify.
On the other hand, given the premises that (1) moral norms are just one kind of norm, and (2) norms are always ways of performing a function or achieving an end, you can come up with a constructivist metaethics that avoids the pitfalls of nihilism, relativism, and realism. (I think. No idea whether that is what Jessicata is saying.)