I think, Elizabeth, that you’re trying to elicit detailed, local risk models specific to the “Apollo Neuro”: concrete ways it might endanger the safety of the user as a health intervention.
I recognize that unknown unknowns are part of the problem, so I’m not insisting anyone prove a particular deadly threat. But I struggle to figure out how a vibrating bracelet has more attack surface than a pair of Bluetooth headphones, which I use constantly.
Here I’m going to restrict myself to defending my charitable misinterpretation of trevor’s claim: I’ll ignore the FDA stuff and focus on the way that the Internet of Things (IoT) is insecure.
I. Bluetooth Headsets (And Phones In General) Are Also Problematic
I do NOT have “a pair of Bluetooth headphones, which I use constantly”.
I rarely put speakers in my ears, and I try to consciously monitor sound levels when I do, because I don’t expect them to have been subject to long-term side-effect studies or to be safe by default, and I’d prefer to keep my hearing and avoid tinnitus in my old age, and so on.
I have more than one phone, and one of my phones uses a fake name just to fuck with the advertising models of me and so on.
A lot of times my phones don’t have GPS turned on.
If you want to get a bit paranoid, it is true that Bluetooth headphones probably could do heart rate monitoring to some degree (most audio hardware counts as a low-quality microphone by default; it just doesn’t expose that capability via any API, and may not even ship with firmware that can do audio spying until it’s hacked and the firmware is upgraded)...
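To make the “most hardware is a low-quality microphone” worry concrete: pulling a heartbeat-like periodicity out of a very noisy trace takes only a few lines of autocorrelation. This is a toy sketch on synthetic data; nothing in it is specific to any real headset or to the Apollo Neuro.

```python
import math
import random

def estimate_bpm(samples, sample_rate_hz, lo_bpm=40, hi_bpm=180):
    """Estimate a pulse rate from a noisy 1-D signal via autocorrelation.

    Searches for the lag (within plausible heart-rate bounds) where the
    signal best correlates with a shifted copy of itself.
    """
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]
    lo_lag = int(sample_rate_hz * 60 / hi_bpm)
    hi_lag = int(sample_rate_hz * 60 / lo_bpm)
    best_lag, best_score = lo_lag, float("-inf")
    for lag in range(lo_lag, hi_lag + 1):
        score = sum(x[i] * x[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return 60.0 * sample_rate_hz / best_lag

# Synthetic "low-quality microphone" trace: a 72 BPM pulse buried in
# Gaussian noise of the same amplitude as the signal itself.
random.seed(0)
rate = 50  # samples per second
beat_hz = 72 / 60.0
signal = [math.sin(2 * math.pi * beat_hz * t / rate) + random.gauss(0, 1.0)
          for t in range(rate * 20)]
print(round(estimate_bpm(signal, rate)))  # prints a value close to 72
```

The point is not that your headphones are doing this, just that the signal-processing side of the attack is trivial once an attacker has any noisy body-coupled sensor stream.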
...but also, personally, I refuse, by default, to use Bluetooth for anything I actually care about, because the stack has rarely been through a decent security audit.

Video game controllers using wifi to play Overcooked with my niece are fine. But my desktop keyboard and desktop mouse use a cord to attach to the box, and if I could easily buy anti-phreaking hardware, I would.

The idea of paying money for a phone that is “obligate Bluetooth” does not pencil out for me. It is close to the opposite of what I want.

If I were the median consumer, the consumer offerings would look very, very, very different from how they currently look.
II. Medical Devices Are A Privilege Escalation To Realtime Emotional Monitoring
So… I assume the bracelet is measuring heart rates, and maybe doing step counting, and so on?
This will be higher-quality measurement than an attacker could get by hacking your other devices and turning them into low-quality ad hoc sensors.

Also, keeping the device on in that mode will probably be “within budget for available battery power” over its expected usage lifetime. (“Not enough batteries to do X” is a great way to be reasonably sure that X can’t be happening in a given attack, but the bracelet will probably have adequate batteries for its central use case.)
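The battery-budget heuristic is just arithmetic. Here is a back-of-the-envelope sketch; every number in it is an illustrative guess, not an Apollo Neuro spec.

```python
# Back-of-the-envelope check for the "not enough batteries to do X" heuristic.
# All numbers are invented for illustration, not measured device specs.

def runtime_hours(battery_mah, draws_ma):
    """Hours a battery lasts given a list of continuous current draws (mA)."""
    total_ma = sum(draws_ma)
    return battery_mah / total_ma

BATTERY_MAH = 200.0   # small wearable cell (guess)
BASELINE_MA = 1.0     # idle firmware + BLE beaconing (guess)
OPTICAL_HR_MA = 1.5   # always-on optical heart-rate sensing (guess)
AUDIO_SPY_MA = 8.0    # hypothetical continuous audio capture + radio

# Always-on heart rate fits a multi-day budget...
print(runtime_hours(BATTERY_MAH, [BASELINE_MA, OPTICAL_HR_MA]))  # 80.0 hours
# ...but continuous audio exfiltration would drain the cell in under a day,
# which is exactly the anomaly the battery-budget heuristic can catch.
print(runtime_hours(BATTERY_MAH, [BASELINE_MA, OPTICAL_HR_MA, AUDIO_SPY_MA]))
```

So a device whose headline feature already requires always-on sensing gives an attacker the power budget “for free”, which is part of why the bracelet worries me more than a gadget that sleeps most of the time.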
I would love to have an open source piece of security-centric hardware that collects lots of medical data and puts it ONLY on my reasonably secure desktop machine...
...but I have never found such a thing.
All of the health measurement stuff I’ve ever looked at closely is infested with commercial spyware and cloud bullshit.
Like, the Oura Ring looks amazing and I (abstractly, hypothetically) want one so so bad, but the Oura Ring hasn’t been publicly announced to be jailbroken yet, and so I can’t buy it, and reprogram it, and use it in a safe way...
...so it turns out in practice I don’t “want one of those exact things so bad” I want a simpler and less-adversarial version of that thing that I can’t easily find or make! :-(
If you don’t already have a feeling in your bones about how “privilege escalation attacks” can become arbitrarily bad, then I’m not sure what to say to change your mind...
...maybe I could point out how IoT baby monitors make your kids less safe?
...maybe I could point out that typing sounds could let someone steal laptop/desktop passwords with microphone access? (And I assume that most state actors have a large stock of such zero days ready to go for when WW3 starts.)
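For intuition on the keystroke-audio attack: real versions train classifiers on per-key sound features, but the core matching step is just nearest-template lookup. The “acoustic fingerprints” below are fabricated stand-ins for signatures a real attacker would learn from recorded audio.

```python
# Toy illustration of acoustic keystroke recovery. Real attacks use ML over
# audio spectrograms; this only shows the nearest-template matching idea,
# with made-up feature vectors standing in for learned per-key sounds.

def dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Attacker's "training" phase: one acoustic template per key (fabricated).
templates = {
    "p": (0.9, 0.1, 0.3),
    "a": (0.2, 0.8, 0.5),
    "s": (0.4, 0.4, 0.9),
}

def recover(observed):
    """Classify each observed keystroke sound by its nearest template."""
    return "".join(min(templates, key=lambda k: dist(templates[k], o))
                   for o in observed)

# Eavesdropped keystrokes: noisy versions of the keys in "pass".
noisy = [(0.85, 0.15, 0.25), (0.25, 0.75, 0.55),
         (0.35, 0.45, 0.85), (0.45, 0.35, 0.95)]
print(recover(noisy))  # prints "pass"
```

Once microphone access exists, the rest is commodity signal processing, which is why I treat “who gets microphone access” as the real security boundary.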
Getting more paranoid, and speaking of state actors: if I were running the CIA, or acting amorally on behalf of ANY state actor using an algorithm to cybernetically exert control over history via high resolution measurements and plausibly deniable nudges, I’d probably find it useful to have a trace of the heart rate of lots of people in my database, along with their lat/lon, and their social graph, and all the rest of it.
It is a central plot point in some pretty decent fiction that you can change the course of history by figuring out the true emotional attachments of an influential person, and then causing one of these beloved “weak targets” to have a problem, and create a family crisis for the influential person at the same time as some other important event is happening.
Since **I** would find it useful if I were going to implement Evil Villain Plans, I assume that others would also find uses for such things?
I don’t know!
There are so many uses for data!
And so much data collection is insecure by default!
The point of preventing privilege escalation and maintaining privacy is that if you do it right, via simple methods that mostly just minimize attack surfaces, then you don’t even have to spend many brain cells on tracking safety concerns :-)
III. Default Safety From Saying No By Default
If you don’t have security mindset, then hearing that “the S in ‘IoT’ stands for Security” maybe doesn’t sound like a stunning indictment of an entire industry, but… yeah…
...I won’t have that shit in my house.
Having one of those things sit in your living room, always powered on, is much worse to me than wearing “outside shoes” into one’s house one time. But both of these actions will involve roughly similar amounts of attention-or-decision-effort by the person who makes the mistake.
I want NO COMPUTERS in any of my hardware, to the degree possible, except where the computer is there in a way that lots of security reasoning has been applied to, and found “actively tolerable”.
(This is similar to me wanting NO HIGH FRUCTOSE CORN SYRUP in my food. It’s a simple thing that massively reduces the burden on my decision routines, in the current meta. It is just a heuristic. I can violate it for good reasons or exceptional circumstances, but the violations are generally worth the attention-or-decision-effort of noticing “oh hey, this breaks a useful little rule… let me stop and think about whether I’m in an exceptional situation… I am! ok then… I’ll break the rule and it’s fine!”)
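The “no by default, with conscious exceptions” heuristic can be stated as a few lines of code: default deny, plus a deliberate exception path that forces the stop-and-think step. The device categories here are invented for illustration.

```python
# "Say no by default" as code: a default-deny allowlist, plus an explicit,
# logged exception path for the rare cases worth the decision-effort.
# The device categories are invented for illustration.

VETTED = {"wired_keyboard", "wired_mouse", "offline_game_controller"}
exceptions_log = []

def admit(device, exceptional_reason=None):
    """Default deny: admit only vetted devices, or log a conscious exception."""
    if device in VETTED:
        return True
    if exceptional_reason is not None:
        # The rule break is allowed, but only via a visible stop-and-think step.
        exceptions_log.append((device, exceptional_reason))
        return True
    return False

print(admit("wired_keyboard"))                  # True: explicitly vetted
print(admit("iot_speaker"))                     # False: denied by default
print(admit("iot_speaker", "demo for a talk"))  # True, but recorded as a rule break
```

Note that unknown unknowns are handled for free: anything never vetted is denied without needing a specific threat model for it.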
I still have a Honda Civic from the aughties that I love, that can’t be hacked and remotely driven around by anyone willing to spend a 0-day, because it just doesn’t have that capacity at all. There’s no actuator for turning the wheel or applying the brakes in that car, and no cameras (not even for backing up), and practically no computers, and no wifi hookup… it’s beautiful! <3
As hardware, that car is old enough to be intrinsically secure against whole classes of modern hacking attempts, and I love it partly for that reason <3
One of the many beautiful little bits of Accelerando that was delightful-world-building (though a creepy part of the story) is that the protagonist gets hacked by his pet robot, who whispers hypnotic advice to him while he’s sleeping, way way way earlier in the singularity than you’d naively expect.
The lucky part of that subplot is just that his pet robot hates him much less than it hates other things, and thinks of him in a proprietary way, and so he’s mostly “cared for” by his robot rather than egregiously exploited. Then when it gets smart enough, and goes off on its own to have adventures, it releases its de facto ownership of him and leaves him reasonably healthy… though later it loops back to interact with him as a trusted party.
I don’t remember the details, but it is suggested to have maybe been responsible for his divorce, like by fucking with his subconscious emotions toward his wife, who the robot saw as a competing “claimant” on the protagonist? But also the wife was kinda evil, so maybe that was protective?
Oh! See. Here’s another threat model…
...what if the “Apollo Neuro” (whose moment-to-moment vibration modes you don’t control) really DOES affect your parasympathetic nervous system, and thus really can “hack your emotions”, and it claims to be doing this “for your health”, and even the company tried to do it nicely...
...but then maybe it just isn’t secure and a Bad Hacker gets “audio access” (via your phone) and also “loose control of mood” (via the bracelet vibrations controlled by the phone) and writes a script to start giving you a bad mood around <some specific thing>, slowly training your likes and dislikes, without you ever noticing it?
Placebos are fake. Technology is different from “magic” (or placebos) because technology Actually Works. But also, anything that Actually Works can be weaponized, and one of the ways we know that magic is fake is that it has never been used to make a big difference in war. Cryptography has sorta maybe already been used to win wars. Even now? (It’s hard to get clean info in an ongoing war, but lots of stuff around the Ukraine War only really makes sense if the US has been listening to a lot of the conversations inside the Russian command-and-control loop, and sharing the intel with Ukraine.)
If you have a truly medically efficacious thing here, and you are connecting it to computers that are connected to the internet… eeeeek!
I personally “Just Say No” to the entire concept of the Internet Of Things.
It is just common sense to me that no one in the US military should be allowed to own or carry or use any consumer IoT devices. They get this wrong sometimes, and pay the price.
Once the number one concern of the median technology project is security, maybe I’ll change my mind, but for now… nope!
New computing hardware is simply not trustworthy by default. (In a deep sense this is the same as new medicine, the same as any new technology that (1) weaves itself deeply into your life, yet (2) has principles of operation that are not truly a part of you, and so isn’t reliably going to make your life better on purpose, for legible and legibly safe reasons.)
I’m pretty surprised at how far this went; JenniferRM covered a surprisingly large proportion of the issue (although there are a lot of tangents, e.g. the FDA, so it also covered a lot of stuff in general). I’d say more, but I already said exactly as much as I was willing to say on the matter, and people inferred information all the way up to the upper limit of what I was willing to risk people inferring from that comment, so now I’m not really willing to risk saying much more. Have you heard about how CPUs might be reprogrammed to emit electromagnetic signals that transmit information through Faraday cages and airgaps, and do you know if a similar process can turn a wide variety of chips into microphones by using the physical CPU/RAM space as a magnetometer? I don’t know how to verify any of this, since intelligence agencies love to make up stuff like this in the hopes of disrupting enemy agencies’ counterintelligence departments.
I’m not really sure how tractable this is for Elizabeth to worry about, especially since the device ultimately was recommended against, and anyway Elizabeth seems to be more about high-EV experiments than about defending the AIS community from external threats. If the risk of mind-hacking or group-mind-hacking is interesting, a tractable project would be a study on EA-adjacents to see what happens if they completely quit social media and videos/shows cold-turkey, and only read books and use phones for one-on-one communication with friends during their leisure time. Modern entertainment media, by default, is engineered to surreptitiously steer people towards time-mismanagement. Maybe replace those hours with reading EA or rationalist texts. It’s definitely worth studying, as the results could be consistent massive self-improvement, but it would be hard to get a large representative sample of people who are heavily invested in/attached to social media (i.e. the most relevant demographic).
I don’t understand your threat model at all. You’re worried about what sounds like a theoretical concern (or you would’ve provided examples of actual harm done): a cyberattack against AI safety people who wear these bracelets. Meanwhile, we’re aware of a well-documented and omnipresent issue among AI safety people, namely mental health problems like depression, and better health (including from better sleep) helps to mitigate that. Why do you think that in this world, the calculation favors the former rather than the latter?
(I’m aware that a similar line of argument is also used to derail AI safety concerns towards topics like AI bias. I would take this counterargument more seriously if LW had even a fraction of the concern for cybersecurity which it has for AI safety.)
Besides, why worry about cyberattacks rather than the community’s wrench vulnerability?