First, working against people who stole is an unalloyed good. It eliminates both the deadweight loss of thieves valuing the goods less than the customers would have and the reduced investment & profit caused by theft (which directly affects honest employees’ employment by turning hiring into more of a lemon market), and it redistributes money towards the honest employees who do not help themselves to a five-finger discount. Reducing theft is, ceteris paribus, a good thing.
This method is simply socially wrong—it works against both the people who stole, and people who had something stolen from them, who get penalized for the honest answer and…
This is nowhere close to an argument that the method is bad because it hurts honest people: you have not shown that the harm is disproportionate, and the honest people are already being harmed by the theft. Even if you had any sort of evidence for that, which of course you don’t, this would hardly be a ‘perfect’ example.
Second, these methods—particularly the Bayesian truth serum—work only over groups, so no individual can be reliably identified as a liar or truth-teller in real-world contexts, since one expects (in the limit) every answer to be represented at some frequency. This places inherent limits on how much truth can be extracted. (Example: how do you know whether someone is a habitual thief, or just watches a lot of cynical shows like The Wire?)
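For concreteness, here is a minimal sketch of the group-level scoring behind the Bayesian truth serum, following Prelec’s 2004 construction; the `alpha` weight is the standard free parameter, while the function name and the smoothing constant are my own additions for the example:

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0):
    """Minimal sketch of Prelec's Bayesian truth serum (Science, 2004).

    answers:     length-n int array, each respondent's own answer in [0, m)
    predictions: (n, m) array, each respondent's predicted distribution of
                 the whole group's answers (rows sum to 1)
    Returns one score per respondent; 'surprisingly common' answers
    (more frequent than collectively predicted) score high.
    """
    n, m = predictions.shape
    eps = 1e-9  # smoothing (my addition) to avoid log(0)
    # Empirical answer frequencies xbar_k over the whole group
    xbar = (np.bincount(answers, minlength=m) + eps) / n
    # Geometric mean of the predicted frequencies ybar_k (via mean of logs)
    log_ybar = np.log(predictions + eps).mean(axis=0)
    # Information score: log(xbar_k / ybar_k) for the answer k you gave
    info = np.log(xbar[answers]) - log_ybar[answers]
    # Prediction score: penalty for mispredicting the group's answers
    pred = alpha * (xbar * (np.log(predictions + eps) - np.log(xbar))).sum(axis=1)
    return info + pred
```

Every term is built from group aggregates, so a score is only meaningful as a ranking within the surveyed population; no single respondent can be pointed to as the liar.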
I’m not quite sure what the essence of your disagreement is, or what relevance the honest people already being harmed has to the argument I made.
I’m not sure what you think my disagreement should have focused on—the technique outlined in the article can be effective and is used in practice, and there is no point to be made that it is bad for the persons employing it; I cannot make an argument that would convince an antisocial person not to use this technique. However, the technique depletes the common good that is normal human communication; it is an example of the tragedy of the commons—as long as most of us refrain from this sort of approach, it can keep working for the few who do not understand the common good or do not care. Hence the Dutch dike example. The method is a social wrong, not necessarily a personal wrong—it may work well in ordinary circumstances. Furthermore it is, in a certain sense that is normally quite obvious, unfair. (edit: It is nonetheless widely used, of course—even with all the social mechanisms in the human psyche, the human brain is a predictive system that will use any correlations it encounters—and people do often signal their pretended non-understanding. I suspect this silly game significantly discriminates against people with Asperger’s-spectrum disorders.)
edit: For a more conspicuous example of how predictive methods are not, in general, socially acceptable, consider that if I were to train a predictor of criminality on profile data complete with a photograph, the skin-albedo estimate from the photograph would be a significant part of the predictor, assuming the data originates in North America. As a matter of fact, I have some experience with predictive software of the kind that processes interview answers. Let me assure you, my best guess from briefly reading your profile is that you are not in the category of people who benefit from software that simply uses all the correlations it finds—pretty much all people who have non-standard interests and do not answer questions in the standard ways are penalized, and I do not think it would be easy to fake the answers beneficially without access to the model being used.
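As a hypothetical illustration of “uses all the correlations it finds” (synthetic data only; the feature names are invented for the example and do not come from any real system): when the true driver of the outcome is only noisily measured, a standard classifier will put real weight on any correlated proxy it is handed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic setup: the outcome depends on a latent trait that no feature
# measures exactly, so the model leans on *every* correlated feature it
# is given -- including a demographic proxy we would consider unfair.
rng = np.random.default_rng(0)
n = 10_000
latent = rng.normal(size=n)                          # unobserved true driver
answers = latent + rng.normal(size=n)                # noisy interview-answer score
proxy = 0.6 * latent + rng.normal(size=n)            # demographic proxy, correlated
                                                     # with the trait via biased data
y = (latent + 0.5 * rng.normal(size=n) > 0).astype(int)

X = np.column_stack([answers, proxy])
model = LogisticRegression().fit(X, y)
print(dict(zip(["answers", "proxy"], model.coef_[0].round(2))))
# Both coefficients come out clearly nonzero: conditioning on the noisy
# 'answers' score does not screen off the proxy, so the fit happily uses it.
```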
However, the technique depletes the common good that is normal human communication; it is an example of the tragedy of the commons
I think the heart of the disagreement is that your beliefs about communication are wildly divergent from mine. You describe communication as some sort of innately good thing that can be corrupted by speakers.
I’m of the school of thought that all language is about deception and signaling. Our current language is part of a never-ending arms race between lies, status relationships, and the need to get some useful information across. The whole idea that you could have a tragedy of the commons here seems odd to me. A new lying or lie-detection method is invented, we use it until people learn the signal or how to signal around it, then we adopt some other clever technique. It’s been this way since Ogg first learned he could trick his opponent by saying “Ogg no smash!”
If you hold that language is sacred, then this technique is bad. If you hold that language is a perpetual arms race of deception and counter-deception, this is just another technique that will be used until it’s no longer useful.
I do not see why you are even interested in asking that sort of question if you hold such a view—surely, under such a view, you will steal if you are sure you will get away with it, just as you would, e.g., try to manipulate and lie. edit: I.e., the premise seems incoherent to me. You need honesty to exist for your method to be of any value, and you need honesty not to exist for your method to be harmless and neutral. If language is all signaling, the most your method will do is weed you out in the selection process—you have yourself already sent the wrong signal by stating that you believe language to be all deception and signaling.
Well, there are two different questions here… the first is what is in fact true about human communication, and the second is what’s right and wrong.
I might believe, as Xachariah does, that language is fundamentally a mechanism for manipulating the behavior of others rather than for making true statements about the world… and still endorse using language to make true statements about the world, and reject using language to manipulate the behavior of others.
Indeed, if I were in that state, I might even find it useful to assert that language is fundamentally a mechanism for making true statements about the world, if I believed that doing so would cause others to use it that way, even though such an assertion would (by my hypothetical view) be a violation of moral principles I endorse (since it would be using language to manipulate the behavior of others).
Arms races waste utility. If you defect in the Prisoner’s Dilemma, then no matter what your opponent does, the sum of your and your opponent’s utilities will be lower than if you’d cooperated. (For example, with payoffs of (2,2) for mutual cooperation, (1,1) for mutual defection, and (3,0)/(0,3) when exactly one player defects, your defection moves the sum either from 4 to 3 or from 3 to 2.) You can view cooperators as those who create value, though not necessarily for themselves, and defectors as those who destroy value, though not for themselves. So it might make sense to consider the commons sacred, and to scold those who abuse it.
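A throwaway sketch verifying the arithmetic, with the example’s payoffs laid out per outcome:

```python
# Prisoner's Dilemma payoffs (me, opponent) from the example above.
PAYOFFS = {
    ("C", "C"): (2, 2),  # mutual cooperation
    ("C", "D"): (0, 3),  # I cooperate, opponent defects
    ("D", "C"): (3, 0),  # I defect, opponent cooperates
    ("D", "D"): (1, 1),  # mutual defection
}

for opp in ("C", "D"):
    total_c = sum(PAYOFFS[("C", opp)])
    total_d = sum(PAYOFFS[("D", opp)])
    print(f"opponent plays {opp}: sum if I cooperate = {total_c}, if I defect = {total_d}")
# opponent plays C: sum if I cooperate = 4, if I defect = 3
# opponent plays D: sum if I cooperate = 3, if I defect = 2
```

Whatever the opponent does, my defection lowers the joint total by 1, which is the value-destruction point above.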
Promoting defection also makes sense in situations where being seen to promote defection rather than cooperation earns me status within the community (e.g., it seems cool, or seems clever, or seems contrarian, or what-have-you), and I believe that promoting defection does not significantly affect utility otherwise (e.g., I don’t believe that anyone I care about might ever be in a prisoner’s dilemma where the results actually depend in any way on the stuff I promote now).
A fundamental technique in this arms race is honestly, consciously believing in your own lies. So given the OP’s proposal, which incentivizes conscious lying, people feel uncomfortable.
The thing that can be corrupted is the general level of trust in conversation—if you’re not lying, and you’re reasonably certain the other person also isn’t lying (consciously), you can relax (consciously), and that feels good. So we don’t like proposals for new social techniques or conventions under which, to get ahead (be hired), we would have to lie more than we think we do now.
pretty much all people who have non-standard interests and do not answer questions in the standard ways are penalized, and I do not think it would be easy to fake the answers beneficially without access to the model being used.
Thinking about this, I’m curious about the last part.
Naively, it seems to me that if I’m being evaluated by a system, and I know that the system penalizes respondents who have non-standard interests and do not answer questions in the standard ways, but I don’t have access to the model being used, then if I want to improve my score what I ought to do is pretend to have standard interests and to answer questions in standard ways (even when such standard answers don’t reflect my actual thoughts about the question).
I might or might not be able to do that, depending on how good my own model of standard behavior is, but it doesn’t seem that I would need to know about the model being used by the evaluators.
What am I missing?
Reflexive knowledge of the standard answers themselves.