Do you have any evidence at all that this sort of use of webcams is happening or has happened…?
Currently, no. My argument focuses on the incentives for tech companies or intelligence agencies to acquire this data illicitly, in addition to the legal app permissions that people already opt into. I think it makes a solid case that those incentives are very strong. Hacking people’s webcams at large scale is risky for the attacker, but the payoff is large amounts of data from smarter elites and better targets, and the risk can be reduced by selecting targets who are unlikely to detect the traffic or the spyware. My claim is that this risk to users is more than sufficient to justify covering up webcams; I demonstrate that leaving webcams uncovered is actually the extreme action.
Also, can you say more about what you mean by “finding information that makes you uncomfortable because it is supposed to be secret, by comparing it to labelled past instances of people’s facial microreactions to reading information that was established to be secret” and “millions of hours of facial microexpression data in response to various pieces of content”? You are suggesting that photos are being taken constantly, or… video is being recorded, and also activity data is being recorded about, like… what webpages are being browsed? Is this being uploaded continuously, or…? Like, in a technical sense, what does this look like?
Yes. A hypothetical example: the NSA wants to identify FSB employees who are secretly cheating on their spouses. The NSA steals face and eye-tracking data on 1 million Russians while they scroll through Twitter on their phones, and manages to use other sources to confirm 50 men who are cheating on their spouses and trying very hard to hide it. The phones record video, and spyware on the device reduces it to facial models before encrypting and sending the data to the NSA, so what gets uploaded is compact facial-model data rather than raw video.

The NSA has some computer vision people identify trends that distinguish all 50 of the cheating men but are otherwise rare; as it turns out, each of them exhibits a distinctive facial tic when exposed to the concept of poking holes in condoms. They test this on the rest of the million, and that trend turns out not to be sufficiently helpful at identifying cheaters. They find another trend: the religious men among the 50 scroll slightly faster when a religion-focused influencer talks specifically about the difference between heaven and hell. When the influencer talks about the difference, rather than just heaven or hell, these men exhibit a facial tic that turns out to strongly distinguish cheaters from non-cheaters among the million men whose data was stolen. While it is disappointing that this technique will only identify cheating FSB employees who are religious and use social media platforms that the NSA can place that specific concept into, it’s actually a pretty big win compared to the 500 other things their systems discovered that year. And I’m possibly describing this process as much less automated and streamlined than it would be in reality.
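To make that trend-mining step concrete, here is a minimal sketch of what “identify trends that distinguish all 50” could look like as a plain statistics problem. Everything here is hypothetical: the data is synthetic, the probe-concept framing and all numbers are invented, and sparse logistic regression stands in for whatever pipeline would actually be used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: rows are people, columns are reaction features
# (e.g. change in scroll speed, or a landmark-based tic score, while a
# specific probe concept was on screen). Scaled down from the scenario.
n_people, n_probes = 20_000, 100
X = rng.normal(size=(n_people, n_probes))
y = np.zeros(n_people, dtype=int)

# Plant a weak signal on one probe for a small labelled-positive group,
# mimicking the "heaven vs. hell" tic in the story.
positives = rng.choice(n_people, size=50, replace=False)
y[positives] = 1
X[positives, 42] += 2.0  # probe #42 is the hypothetical distinguishing concept

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0
)

# L1-regularised logistic regression as a simple trend-finder: it keeps
# only the few probes that actually separate the labelled group.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5,
                         class_weight="balanced")
clf.fit(X_train, y_train)

top_probes = np.argsort(np.abs(clf.coef_[0]))[::-1][:5]
print("most distinguishing probes:", top_probes)

# The "test it on men from the million" step: check whether the trend
# generalises to held-out people rather than just fitting the 50.
print("held-out AUC:", roc_auc_score(y_test, clf.decision_function(X_test)))
```

The held-out test plays the same role as in the story: a trend that fits the 50 confirmed cases can still fail to generalise, the way the condom tic did.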
For steering people’s thinking in measurable directions, the only non-automated process is figuring out how to measure/label successes and failures.
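To illustrate why everything downstream of measurement can be automated: once a success/failure signal is defined, the steering itself reduces to a standard bandit problem. The sketch below uses Thompson sampling; the measured_reaction stub, the content variants, and the effect sizes are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground truth, unknown to the system: how often each of
# four candidate content framings moves a viewer in the target direction.
true_effect = np.array([0.02, 0.05, 0.11, 0.04])

def measured_reaction(variant: int) -> float:
    """Stand-in for the measurement/labelling step: a noisy observation
    of whether the viewer's reaction matched the target direction."""
    return float(rng.random() < true_effect[variant])

# Thompson sampling over Bernoulli rewards: past the measurement step,
# the loop is generic and requires no human judgment.
successes = np.ones(len(true_effect))
failures = np.ones(len(true_effect))

for _ in range(5_000):
    samples = rng.beta(successes, failures)   # sample a belief per variant
    pick = int(np.argmax(samples))            # show the most promising one
    reward = measured_reaction(pick)          # observe the labelled outcome
    successes[pick] += reward
    failures[pick] += 1 - reward

print("times each variant was shown:", (successes + failures - 2).astype(int))
```

The loop quickly concentrates on the most effective framing; the only part a human had to design was the reward function.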