Lie detection technology is going mainstream. ClearSpeed is such an improvement over polygraphs in accuracy and ease of use that various government, law-enforcement, and military organizations are starting to notice. In 2027 (edit: maybe more like 2029) it will be common knowledge that you can no longer lie to the police, and you should prepare for this eventuality if you haven’t.
I think it’s possible to beat such lie detectors by considering the question in such a way that you get the answer you want. “Did you kill that man?” “No” (mental framing: the knife killed him/he killed himself by annoying me/I’m a different person today/My name is not “you” so it’s technically false, etc)
I would bet that the hesitation caused by doing the mental reframe would be picked up by this.
The counter to this is, always take your time whether you need to or not.
Lie detection technology must be open-sourced. It could fix literally everything. Just ask people “How much do you want to fix literally everything?”, “How much did you think about ways to do better and avoid risk?”, “Do you have the skills for this position, or think you can get them?”, etc. So many profoundly incredible things are downstream of finding and empowering the people who give good answers.
It’s AI-based, so my guess is that it uses a lot of somewhat superficial correlates that could be gamed. I expect that if it went mainstream it would be Goodharted.
I expect Goodhart would hit particularly hard if you were doing the kind of usage I guess you are implying: searching for a few very well-selected people. A selective search is a strong optimization, and so it Goodharts more.
A more concrete example I have in mind, which may apply to the technology right now: there are people who are good at lying to themselves.
That’s not really the kind of usage I was thinking of; I was thinking of screening out low-honesty candidates from a pool that has already qualified to join a high-trust system (none of which currently exist for any high-stakes matter). Large amounts of sensor data (particularly from people lying and telling the truth during different kinds of interviews) will probably be necessary, but the system will need to focus on specific indicators of lying, e.g. discomfort, heart-rate changes, or activity in certain parts of the brain, and extremely low false-positive and false-negative rates probably won’t be feasible.
Also, hopefully people would naturally set up multiple different tests for redundancy, each of which would have to be Goodharted separately, and each false negative (a case of a uniquely bad person being revealed as bad after passing the screening) would be added to the training data. Periodically re-testing people for the concealed emergence of low-trust tendencies would further facilitate this. Sadly, whenever a person slips through the cracks and discovers their lie got past the test, they will know they can get away with it and continue doing so.
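As a rough illustration of why redundant tests help (with made-up numbers, not figures from any real detector, and assuming the tests fail independently, which is exactly what Goodharting would undermine):

```python
# Sketch only: assumes each test misses liars independently. A strategy that
# beats one test may beat the others too, so real-world numbers would be worse.
def slip_through_probability(false_negative_rate: float, num_tests: int) -> float:
    """Chance a deceptive candidate passes every one of num_tests
    independent screens, each of which misses a liar at the given rate."""
    return false_negative_rate ** num_tests

# Hypothetical: each test misses 20% of liars.
for k in (1, 2, 3):
    print(k, slip_through_probability(0.2, k))
```

Under these assumed numbers, three independent tests cut the slip-through rate from 20% to under 1%; correlated tests (or a single Goodharted strategy that transfers) would erode most of that gain.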
Do you have any source for the technology being an improvement in accuracy over polygraphs?
I’m not sure I can go into detail, but the 97% true-positive (i.e. lie) detection rate cited on the website is accurate. More importantly, people who can administer polygraphs, or know how they work, can defeat polygraphs. These tests are apparently much more difficult to cheat, at least for now and while they’re proprietary.