But this is an appeal to consequences. Appeals to consequences are invalid because they represent a map–territory confusion, an attempt to optimize our description of reality at the expense of our ability to describe reality accurately (which we need in order to actually optimize reality).
This sounds like you are saying that the purpose of language is only to describe reality, so we should not appeal to consequences when discussing word boundaries. If so, that seems wrong to me—language serves several different purposes, of which prediction is only one.
As an example, consider the word “crime”, and more specifically the question of defining which things should be crimes. When discussing whether something might be a crime, people often bring in considerations like “lying is bad, but it shouldn’t be a crime, because that would have worse consequences than it being legal”; and it would seem clearly wrong to me not to take such considerations into account.
One might object that legal terms are a special case, since they are part of a formal system with wide-ranging impact. But is that so different from other words, other than quantitatively? A legal term is primarily a tool for coordination, but so are “rube” and “blegg”: on average, bleggs contain vanadium and rubes contain palladium, and the reason the factory draws those boundaries is to be able to instruct its workers on how to sort the things. If it turned out that its standard definitions were too confusing to its workers and made it harder to extract vanadium and palladium efficiently, then the factory would want to redefine the terms so as to make the sorting more efficient.
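(As an aside, here is a minimal sketch, in Python with invented feature names and payoffs, of what “redrawing the boundary to make the sorting more efficient” could look like: the factory compares two candidate definitions of “blegg” purely by the expected value of the metal recovered, so the word boundary is being chosen for its consequences on coordination.)

```python
import random

random.seed(0)

# Hypothetical factory objects: a "blueness" and an "egg-shapedness" score in
# roughly [0, 1], plus the metal actually inside.  Blue egg-shaped things
# (bleggs) usually contain vanadium; red cubes (rubes) usually palladium.
def make_object():
    if random.random() < 0.5:                          # blegg-like object
        blueness, eggness = random.gauss(0.8, 0.1), random.gauss(0.8, 0.1)
        metal = "vanadium" if random.random() < 0.9 else "palladium"
    else:                                              # rube-like object
        blueness, eggness = random.gauss(0.2, 0.1), random.gauss(0.2, 0.1)
        metal = "palladium" if random.random() < 0.9 else "vanadium"
    return blueness, eggness, metal

# Two candidate definitions of "blegg" the workers could be instructed to use.
def definition_a(blueness, eggness):                   # blegg = blue AND egg-shaped
    return blueness > 0.5 and eggness > 0.5

def definition_b(blueness, eggness):                   # blegg = blue (simpler to apply)
    return blueness > 0.5

def recovered_per_object(definition, n=50_000):
    """Average metal recovered if 'bleggs' go to the vanadium line and
    everything else goes to the palladium line."""
    hits = 0
    for _ in range(n):
        blueness, eggness, metal = make_object()
        routed_to = "vanadium" if definition(blueness, eggness) else "palladium"
        hits += (routed_to == metal)                   # metal is recovered only if routed correctly
    return hits / n

print("definition A:", recovered_per_object(definition_a))
print("definition B:", recovered_per_object(definition_b))
# The factory adopts whichever definition recovers more metal: the category
# boundary is chosen for its consequences on coordination, not read off the
# statistics alone.
```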
Or if I am a speaker of American English and want to ask my friend to bring me what are called chips in US English, but I know him to be a Brit, I might ask him to bring me crisps… because that word choice will have better consequences.
This is still compatible with all the standard words-as-disguised-queries stuff, because the language-as-prediction and language-as-coordination models can be viewed as special cases of each other:
From the language-as-prediction model, the ultimate disguised query is “what are the consequences of defining the word in this way and do those consequences align with my goals”; that is still capturing statistical regularities, and those regularities just happen to also be defined in terms of one’s values.
From the language-as-coordination model, sometimes we want to coordinate around a purpose such as describing reality in a relatively value-neutral way, in which case it’s good to also have terms whose queries make less reference to our values (even if the meta-algorithm producing them still uses our values as the criteria for choosing the object-level query; e.g. different occupations develop specialized vocabulary that allows them to do their jobs better, even though the queries implicit in their vocabulary don’t directly reference this).
More succinctly: both “Language is about coordination, and sometimes we want to coordinate the best way to make predictions” and “Language is about prediction, and sometimes we want to predict the best ways to coordinate” seem equally valid, and compatible with the standard Sequences.
As an example, consider the word “crime”, and more specifically the question of defining which things should be crimes. When discussing whether something might be a crime, people often bring in considerations like “lying is bad, but it shouldn’t be a crime, because that would have worse consequences than it being legal”; and it would seem clearly wrong to me not to take such considerations into account.
This is a bad example, because whether something is a crime is, in fact, fully determined by whether “we” (in the sense of “we, as a society, expressing our will through legislation, etc.”) decide to label it a ‘crime’. There is no “fact of the matter” about whether something “is a crime”, beyond that.
Therefore “lying is bad, but it shouldn’t be a crime, because that would have worse consequences than it being legal” is a statement of an entirely different kind from “we shouldn’t call this ‘lying’, because that would have bad consequences”. In the former case, if we decide that lying isn’t a crime, then it is not, in fact, a crime—we actually cause reality to change by that decision, such that the facts of the matter now fully align with the new usage. In the latter case, however, it’s very different; there is a fact of the matter, regardless of how we talk about it.
For demonstrating anything which involves a matter of degree, the point is communicated most effectively by highlighting examples which are at an extreme end of the spectrum. It is true that something being a “crime” is arguably 100% socially determined and 0% “objectively” determined, but that doesn’t make it a bad example. It just demonstrates the extreme end of the spectrum, in the same way that a concept from, say, physics demonstrates the opposite end of the spectrum, where it’s arguably close to 100% objective whether something really has a mass of 23 kilograms or not.
The relevant question is where “lying” falls on that spectrum. To me it feels like it’s somewhere in between—neither entirely socially determined, nor entirely a fact of the matter.
This is a bad example, because whether something is a crime is, in fact, fully determined by whether “we” (in the sense of “we, as a society, expressing our will through legislation, etc.”) decide to label it a ‘crime’. There is no “fact of the matter” about whether something “is a crime”, beyond that.
Maybe a better example is “danger”? Everything is somewhat dangerous; there are no “concentrations of unusually high probability density in Thingspace” that we can draw boundaries around, where one concentration is more dangerous than the other with a clear gap in between. So whether we decide to call something a “danger” seemingly must depend entirely or mostly on the consequences of doing so. Yet there is clearly a fact of the matter about how dangerous something really is.
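(To illustrate the “no clear gap” point, a small sketch with invented numbers: a largest-gap heuristic finds an obvious boundary when the underlying quantity is bimodal, blegg/rube-style, but when it is smooth and unimodal, as dangerousness plausibly is, the widest gap is narrow and arbitrary, even though every individual score remains a fact of the matter.)

```python
import random

random.seed(1)

def largest_gap(scores):
    """Midpoint and width of the widest gap between adjacent sorted scores,
    as a crude detector of a 'natural' category boundary."""
    s = sorted(scores)
    width, boundary = max((b - a, (a + b) / 2) for a, b in zip(s, s[1:]))
    return boundary, width

# Blegg/rube-like world: two well-separated clusters in "Thingspace".
bimodal = ([random.gauss(0.2, 0.05) for _ in range(500)]
           + [random.gauss(0.8, 0.05) for _ in range(500)])

# "Dangerousness"-like world: one smooth spread of scores, no gap anywhere.
unimodal = [random.gauss(0.5, 0.15) for _ in range(1000)]

print("bimodal  boundary, gap width:", largest_gap(bimodal))
print("unimodal boundary, gap width:", largest_gap(unimodal))
# In the bimodal sample the widest gap is wide and sits between the clusters,
# so the boundary is suggested by the territory itself; in the unimodal sample
# the widest gap is narrow and sits out in a tail, so any cutoff for "danger"
# is a choice about consequences, even though each score is a matter of fact.
```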
so whether we decide to call something a “danger” seemingly must depend entirely or mostly on the consequences of doing so
I’m not claiming that the theory can tell us exactly how dangerous something has to be before we call it a “danger.” (Nor how many grains of sand make a “heap”.) This, indeed, seems necessarily subjective.
I’m claiming that whether we call something a “danger” should not take into account considerations like, “We shouldn’t consider this a ‘danger’, because if we did, then people would feel afraid, and their fear is suffering to be minimized according to the global utilitarian calculus.”
That kind of utilitarianism might (or might not) be a good reason to not tell people about the danger, but it’s not a good reason to change the definition of “danger” itself. Why? Because from the perspective of “language as AI design”, that would be wireheading. You can’t actually make people safer in reality by destroying the language we would use to represent danger.
Is that clear, or should I write a full post about this?
I’m claiming that whether we call something a “danger” should not take into account considerations like, “We shouldn’t consider this a ‘danger’, because if we did, then people would feel afraid, and their fear is suffering to be minimized according to the global utilitarian calculus.”
Is the reason that you don’t think we should take this kind of consideration into account that, if we did decide to not consider the object under discussion a “danger”, that would have worse consequences in the long run? If so, why not argue for taking both of these considerations into account and argue that the second consideration is stronger? Kind of a “fight speech with more speech instead of censorship” approach? (That would allow for the possibility that we override considerations for people’s feelings in most cases, but avoid calling something a “danger” in extreme cases where the emotional or other harm of doing so is exceptionally great.)
It seems like the only reason you’d be against this is if you think that most people are too irrational to correctly weigh these kinds of considerations against each other on a case-by-case basis, and there’s no way to train them to be more rational about this. Is that true, and if so why do you think that?
That kind of utilitarianism might (or might not) be a good reason to not tell people about the danger, but it’s not a good reason to change the definition of “danger” itself.
I’m questioning whether there is any definition of “danger” itself (in the sense of things that are considered dangerous, not the abstract concept of danger), apart from the collection of things we decide to call “danger”.
correctly weigh these kinds of considerations against each other on a case-by-case basis
The very possibility of intervention based on weighing map-making and planning against each other destroys their design, if they are to have a design. It’s similar to patching a procedure in a way that violates its specification in order to improve overall performance of the program or to fix an externally observable bug. In theory this can be beneficial, but in practice the ability to reason about what’s going on deteriorates.
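(A toy version of the programming analogy, with invented function names: a procedure whose informal specification says one thing, and a patch that quietly violates that specification in order to fix an externally observable symptom. After the patch, callers who reason from the specification are subtly wrong.)

```python
def median_price(prices):
    """Specification: return the median of `prices`.

    Callers elsewhere in the program reason from this informal spec when
    interpreting the result.
    """
    s = sorted(prices)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2


def median_price_patched(prices):
    """Nominally the same procedure, patched to 'fix' a bug report: a downstream
    dashboard looked wrong when all prices were equal, so a fudge was added.
    The docstring still promises a median, but the code no longer computes one.
    """
    s = sorted(prices)
    n = len(s)
    mid = n // 2
    result = s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2
    if max(s) == min(s):        # the patch: a special case that violates the spec
        result *= 1.05          # nudges the number so the dashboard "looks right"
    return result


# A caller who reasons from the specification is now subtly wrong:
print(median_price([10, 10, 10]))          # 10, as the spec promises
print(median_price_patched([10, 10, 10]))  # 10.5: not a median, despite the docstring
```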
In theory this can be beneficial, but in practice the ability to reason about what’s going on deteriorates.
I think (speaking from my experience) specifications are often compromises in the first place between elegance / ease of reasoning and other considerations like performance. So I don’t think it’s taboo to “patch a procedure in a way that violates its specification in order to improve overall performance of the program or to fix an externally observable bug.” (Of course you’d have to also patch the specification to reflect the change and make sure it doesn’t break the rest of the program, but that’s just part of the cost that you have to take into account when making this decision.)
Assuming you still disagree, can you explain why in these cases, we can’t trust people to use learning and decision theory (i.e., human approximations to EU maximization or cost-benefit analysis) to make decisions, and we instead have to make them follow a rule (i.e., “don’t ever do this”)? What is so special about these cases? (Aren’t there tradeoffs between ease of reasoning and other considerations everywhere?) Or is this part of a bigger philosophical disagreement between rule consequentialism and act consequentialism, or something like that?
The problem with unrestrained consequentialism is that it accepts no principles in its designs. An agent that only serves a purpose has no knowledge of the world or mathematics, it makes no plans and maintains no goals. It is what it needs to be, and no more. All these things are only expressed as aspects of its behavior, godshatter of the singular purpose, but there is no part that seeks excellence in any of the aspects.
For an agent designed around multiple aspects, its parts rely on each other in dissimilar ways, not as subagents with different goals. Access to knowledge is useful for planning and can represent goals. Exploration and reflection refine knowledge and formulate goals. Planning optimizes exploration and reflection, and leads to achievement of goals.
If the part of the design that should hold knowledge accepts a claim for reasons other than arguments about its truth, the rest of the agent can no longer rely on its claims as reflecting knowledge.
Of course you’d have to also patch the specification
In my comment, I meant the situation where the specification is not patched (and by specification in the programming example I meant the informal description on the level of procedures or datatypes that establishes some principles of what it should be doing).
In the case of appeal to consequences, the specification is a general principle that a map reflects the territory to the best of its ability, so it’s not a small thing to patch. Optimizing a particular belief according to the consequences of holding it violates this general specification. If the general specification is patched to allow this, you no longer have access to straightforwardly expressed knowledge (there is no part of cognition that satisfies the original specification).
Alternatively, specific beliefs could be marked as motivated, so the specification is to have two kinds of beliefs, with some of them surviving to serve the original purpose. This might work, but then actual knowledge that corresponds to the motivated beliefs won’t be natively available, and it’s unclear what the motivated beliefs should be doing. Will curiosity act on the motivated beliefs, should they be used for planning, can they represent goals? A more developed architecture for reliable hypocrisy might actually do something sensible, but it’s not a matter of merely patching particular beliefs.
(Thanks for the questioning!—and your patience.)
In order to compute what actions will have the best consequences, you need to have accurate beliefs—otherwise, how do you know what the best consequences are?
There’s a sense in which the theory of “Use our methods of epistemic rationality to build predictively accurate models, then use the models to decide what actions will have the best consequences” is going to be meaningfully simpler than the theory of “Just do whatever has the best consequences, including the consequences of the thinking that you do in order to compute this.”
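(A minimal sketch of that contrast, with made-up numbers: a two-stage agent that fits its belief to the data and then maximizes expected utility, next to an agent that also scores the belief itself by the consequences of holding it, and thereby wireheads its way into the worse action.)

```python
# Toy world: past observations suggest rain is common; carrying an umbrella
# costs a little, getting rained on without one costs a lot.
DATA = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]   # hypothetical past observations of rain

def utility(carry_umbrella, rains):
    if rains:
        return -1 if carry_umbrella else -10
    return -1 if carry_umbrella else 0

def expected_utility(action, p):
    return p * utility(action, True) + (1 - p) * utility(action, False)

# Theory 1: epistemics first.  Fit the belief to the data only, then pick the
# action that maximizes expected utility under that belief.
p_rain = sum(DATA) / len(DATA)
best_action = max([True, False], key=lambda a: expected_utility(a, p_rain))
print("belief:", p_rain, "carry umbrella:", best_action)

# Theory 2 (the tempting shortcut): also score the *belief* by how its
# consequences feel; believing in rain is gloomy, so shade the estimate down.
GLOOM_PENALTY = 5.0
def belief_score(p):
    return max(expected_utility(a, p) for a in [True, False]) - GLOOM_PENALTY * p

p_motivated = max([i / 100 for i in range(101)], key=belief_score)
best_action_motivated = max([True, False],
                            key=lambda a: expected_utility(a, p_motivated))
print("motivated belief:", p_motivated, "carry umbrella:", best_action_motivated)
# The motivated agent talks itself into a near-zero rain probability, leaves
# the umbrella at home, and gets rained on most days; the 'simpler' two-stage
# theory never has this failure mode.
```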
The original timeless decision theory manuscript distinguishes a class of “decision-determined problems”, where the payoff depends on the agent’s decision, but not the algorithm that the agent uses to arrive at that decision: Omega isn’t allowed to punish you for not making decisions according to the algorithm “Choose the option that comes first alphabetically.” This seems like a useful class of problems to be able to focus on? Having to take into account the side-effects of using a particular categorization seems like a form of being punished for using a particular algorithm.
I concede that, ultimately, the simple “Cartesian” theory that disregards the consequences of thinking can’t be the true, complete theory of intelligence, because ultimately, the map is part of the territory. I think the embedded agency people are working on this?—I’m afraid I’m not up-to-date on the details. But when I object to people making appeals to consequences, the thing I’m objecting to is never people trying to do a sophisticated embedded-agency thing; I’m objecting to people trying to get away with choosing to be biased.
you think that most people are too irrational to correctly weigh these kinds of considerations against each other on a case-by-case basis, and there’s no way to train them to be more rational about this. Is that true
Actually, yes.
and if so why do you think that?
Long story. How about some game theory instead?
Consider some agents cooperating in a shared epistemic project—drawing a map, or defining a language, or programming an AI—some system that will perform better if it does a better job of corresponding with (some relevant aspects of) reality. Every agent has the opportunity to make the shared map less accurate in exchange for some selfish consequence. But if all of the agents do that, then the shared map will be full of lies. Appeals to consequences tend to diverge (because everyone has her own idiosyncratic favored consequence); “just make the map be accurate” is a natural focal point (because the truth is generally useful to everyone).
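(A sketch of the game-theoretic structure being gestured at here, with invented payoffs: each agent can distort the shared map for a private bonus, map accuracy is a public good, and although unilateral distortion always pays, mutual distortion leaves everyone worse off than the “just make the map accurate” focal point.)

```python
from itertools import product

N_AGENTS = 4
PRIVATE_BONUS = 1.0      # what an agent gains from bending the map its way
ACCURACY_VALUE = 3.0     # how much each agent values the shared map's accuracy

def payoffs(choices):
    """choices[i] is True if agent i distorts the shared map."""
    accuracy = 1.0 - sum(choices) / N_AGENTS      # each distortion degrades the map
    return [ACCURACY_VALUE * accuracy + (PRIVATE_BONUS if c else 0.0)
            for c in choices]

honest = payoffs((False,) * N_AGENTS)
all_distort = payoffs((True,) * N_AGENTS)
print("everyone honest:   ", honest)       # 3.0 each
print("everyone distorts: ", all_distort)  # 1.0 each; the map is 'full of lies'

# Distorting is individually tempting: holding the others fixed, it adds the
# private bonus (+1.0) and only costs this agent ACCURACY_VALUE / N_AGENTS
# (= 0.75).  Check that unilateral distortion pays from every profile, which is
# why "just make the map accurate" is a focal point that needs defending
# rather than an automatic outcome.
for profile in product([False, True], repeat=N_AGENTS):
    for i in range(N_AGENTS):
        if not profile[i]:
            deviated = profile[:i] + (True,) + profile[i + 1:]
            assert payoffs(deviated)[i] > payoffs(profile)[i]
```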
I think the embedded agency people are working on this?—I’m afraid I’m not up-to-date on the details. But when I object to people making appeals to consequences, the thing I’m objecting to is never people trying to do a sophisticated embedded-agency thing; I’m objecting to people trying to get away with choosing to be biased.
In that case, maybe you can clarify (in this or future posts) that you’re not against doing sophisticated embedded-agency things? Also, can you give some examples of what you’re objecting to, so I can judge for myself whether they’re actually doing sophisticated embedded-agency things?
Appeals to consequences tend to diverge (because everyone has her own idiosyncratic favored consequence); “just make the map be accurate” is a natural focal point (because the truth is generally useful to everyone).
This just means that in most cases, appeals to consequences won’t move others much, even if they took such consequences into consideration. It doesn’t seem to be a reason for people to refuse to consider such appeals at all. If appeals to consequences only tend to diverge, it seems a good idea to actually consider such appeals, so that in the rare cases where people’s interests converge, they can be moved by such appeals.
So, I have to say that I still don’t understand why you’re taking the position that you are. If you have a longer version of the “story” that you can tell, please consider doing that.
I will endeavor to make my intuitions more rigorous and write up the results in a future post.
This is a bad example, because whether something is a crime is, in fact, fully determined by whether “we” (in the sense of “we, as a society, expressing our will through legislation, etc.”) decide to label it a ‘crime’.
I think it’s still a good example, perhaps because of what you pointed out. It seems pretty clear to me that there’s a sometimes significant difference between the legal and colloquial meanings of ‘crime’ and even bigger differences for ‘criminal’.
There are many legal ‘crimes’ that most people would not describe as such and vice versa. “It’s a crime!” is inevitably ambiguous.
I agree that the complete theory needs to take coordination problems into account, but I think it’s a much smaller effect than you seem to? See “Schelling Categories, and Simple Membership Tests” for what I think this looks like. (I also analyzed a topical example on my secret (“secret”) blog.)
I didn’t mean “coordination” just in the sense of “coordination problems” (in the technical economic sense), but as language existing to enable any coordination at all. In the sense where, if I ask you to bring me a glass of water, we have coordinated on an action to bring me a glass of water. I don’t think that this is just an effect which needs to be taken into account, but rather one of the primary reasons why language exists in the first place. Its usefulness for making improved (non-coordination-related) predictions could very well be a later addition that just happened to get tacked onto the existing mechanism.