I admit that I stole this one from Sam Harris.
He suggests that if you improved the neuroscience of lie detection and then implemented it in politics, the implications would be huge. Even if it weren’t 100% effective, the threat of using it would be enough to deter many. Current lie detection is based on things like thermal and electrical readings of the skin, which are quite inaccurate compared to a potential neuroscientific approach.
I can think of two concepts of lie detection. In the first, the statement is compared with objective truth (e.g. Omega says, “Contrary to the Senator’s assertion, this tax credit will not create jobs”). In the second, the statement is compared with the contents of the speaker’s mind (e.g. Omega says, “The Senator does not believe that this tax credit will create jobs”).
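The distinction can be sketched as a toy program, with dictionaries standing in for Omega’s knowledge of the world and of the Senator’s mind (all names and data here are hypothetical, purely for illustration):

```python
# Toy sketch of the two lie-detection concepts. The dictionaries are
# hypothetical stand-ins for Omega's omniscience.

world_facts = {"tax credit creates jobs": False}      # objective truth
senator_beliefs = {"tax credit creates jobs": True}   # speaker's mind

def objective_lie(claim: str, asserted: bool) -> bool:
    """Type 1: compare the assertion against objective truth."""
    return world_facts[claim] != asserted

def subjective_lie(claim: str, asserted: bool) -> bool:
    """Type 2: compare the assertion against the speaker's belief."""
    return senator_beliefs[claim] != asserted

# The Senator asserts the claim is true.
print(objective_lie("tax credit creates jobs", True))   # True: objectively false
print(subjective_lie("tax credit creates jobs", True))  # False: honest belief
```

The point of the sketch: the same assertion can be a Type 1 lie without being a Type 2 lie, which is exactly the case of the sincere but mistaken politician discussed below.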
The first type of lie detection would be really awesome, but unlikely to be developed based on physiological study because (to paraphrase The X-Files) the truth is not in there.
The second type probably would not be useful in politics because politics is the mindkiller, and I predict that most politicians believe the fundamentals of the principles they assert (based on motivated cognition, to some extent). That said, a truly reliable lie detector would be great in litigation. No more he-said-she-said issues. Of course, there is still the risk that the witness honestly believes some false facts.
You could ask politicians how much they’ve researched various questions and (in detail) how thoroughly they’ve considered alternative possibilities.
There are lots of interesting questions that I would like to be able to force politicians to answer truthfully. I’m just not sure that any answer would matter to the politician’s political followers.
Actually admit to having considered other possibilities? That sounds dangerous!
I still think that it would be highly beneficial. Think about Bill Clinton. Think about 9/11. Think about Area 51. We could set a lot of conspiracy theories to rest.
I respectfully suggest that you are underestimating the power of motivated cognition. For example, if you believe a conspiracy theory, then any Omega-verified denial can be explained away because the speaker was not in on the truth (i.e. plausible deniability was set up in advance). Actually, I also think you overestimate the importance of fringe theories in partisan politics.
(and just to satisfy my curiosity, what Bill Clinton thing are you referring to?)
Bill Clinton and Monica Lewinsky. Which reminds me of another application: lie detection in relationships.
Perhaps a few existing ones would be made slightly less popular. Maybe.
Say, did you hear about where the technology for the lie detectors came from? The manufacturer that’s reproducing the original artifact has ties to the political elite that secretly...
So the old faithful “their lips are moving” isn’t sufficient any more?
Check out Halperin’s The Truth Machine.
Actually, I just read that for some reason. My take was that it was way too utopian, didn’t give much thought to the endless ways people would try to circumvent it, and in retrospect its forecasts for American crime and global nuclear terrorism were hilariously wrong. (Although I am generally in favor of massively increased honesty and truth machines.) The writing was kind of wooden, but apparently it was Halperin’s first novel.
Or maybe The Invention of Lying. Probably less philosophical though.
Except it won’t be regular people using lie detectors on politicians. It will be government officials and big corps using lie detectors on regular people.
Also, if/when reliable lie detection tech appears, it probably won’t take long for someone to develop a counter: a means of making oneself (or another person) truly believe a given statement. Of course, the first customers of such counter tech will also be governments and big corps.
This actually overstates the difficulty—you don’t need to make people truly believe the statement, although that would work. You just need to make the measured physiological signals mimic those of someone who does truly believe the statement. I wouldn’t be at all surprised to see an arms race, with artificial belief driven toward honest belief in the long run.
I find it implausible that your level of certainty (and/or your focus on the listed scenarios) is correct.
I agree about the second part. But the first part is pretty obvious, isn’t it?
“Regular people using lie detectors on politicians” does seem impossible, but “government officials and big corps using lie detectors on regular people” (to any interesting extent) is far from clear: it’s easy to see how it could be successfully resisted by appealing to human-rights intuitions, or channeled toward significantly different forms of use, escaping your description. (Even China’s regime is not certain to persist in relevant respects on this timescale.)
This seems apropos.