It’s actually not just about lie detection, because the technology starts to shade over into outright mind reading.
But even simple lie detection is an example of a class of technology that needs to be totally banned, yesterday[1]. In or out of court and with or without “consent”[2]. The better it works, the more reliable it is, the more it needs to be banned.
If you cannot lie, and you cannot stay silent without adverse inferences being drawn, then you cannot have any secrets at all. The chance that you could stay silent, in nearly any important situation, would be almost nil.
If even lie detection became widely available and socially acceptable, then I’d expect many, many people’s personal relationships to devolve into constant interrogation about undesired actions and thoughts. Refusing such interrogation would be treated as “having something to hide” and would result in immediate termination of the relationship. Oh, and secret sins that would otherwise cause no real trouble would blow up people’s lives.
At work, you could expect to be checked for a “positive, loyal attitude toward the company” on as frequent a basis as was administratively convenient. It would not be enough that you were doing a good job, hadn’t done anything actually wrong, and expected to keep it that way. You’d be ranked straight up on your Love for the Company (and probably on your agreement with management, and very possibly on how your political views comported with business interests). The bottom N percent would be “managed out”.
Heck, let’s just have everybody drop in at the police station once a month and be checked for whether they’ve broken any laws. To keep it fair, we will of course have to apply all laws (including the stupid ones) literally and universally.
On a broader societal level, humans are inherently prone to witch hunts and purity spirals, whether the power involved is centralized or decentralized. An infallible way to unmask the “witches” of the week would lead to untold misery.
Other than wishful thinking, there’s actually no reason to believe that people in any of the above contexts would lighten up about anything once they discovered a given sin was common. People have an enormous capacity to reject others for perceived sins.
This stuff risks turning personal and public life into utter hell.
[1] You might need to make some exceptions for medical use on truly locked-in patients. The safeguards would have to be extreme, though.
[2] “Consent” is a slippery concept, because there’s always argument about what sorts of incentives invalidate it. The bottom line, if this stuff became widespread, would be that anybody who “opted out” would be pervasively disadvantaged to the point of being unable to function.
Yes, this is why I put “decentralized” in the title even though it doesn’t really fit. What I was going for with the post is that you read the paper yourself, except that whenever the author writes about law, you think for yourself about how the applications you care about (not courts) interact with the complex caveats the author raises (while thinking about courts). Ideally I would have distilled it, but the paper is a bit long.
This credibly demonstrates that the world we live in is more flexible than it might appear. And on the macro-civilizational scale, this particular tech looks like it will place honest souls higher up on net, which everyone prefers. People can establish norms of remaining silent on particular matters, although the process of establishing those norms will be tilted toward people who can honestly say “I think this makes things better for everyone” or “I think this is a purity spiral”, and away from those who can’t.
This is probably already happening.