This post is not about things like government panopticons, hiding your information from ‘the public internet’, blurring your face on online videos, hiding from people who might look you up on Google or Facebook, or hackers getting access to your information, etc… While these issues might also be problematic, they do not pose x-risks in my mind.
This post is about things like surveillance capitalism or technofeudalism leading to an unregulatable and eventually uncontrollable Robust Agent Agnostic Process (RAAP)[1]. This causes an increasing disconnect between the perception of reality and true on-ground reality which ultimately leads to epistemic failure scenarios by enabling deceptive or power-seeking behavior to go unchecked for too long.
I’m very surprised to see such a delineation made by someone who has otherwise done stellar open-source research. This specific delineation is something that, 3 years ago, I predicted would be part of an “increasing disconnect between the perception of reality and true on-ground reality which ultimately leads to epistemic failure scenarios by enabling deceptive or power-seeking behavior to go unchecked for too long”. Namely, these systems are impossible to run at a large enough scale without being defended and dominated by intelligence agencies like the NSA, e.g. because of data poisoning and OS backdoors. This facet of the system is very poorly understood among people in the Bay Area, whereas people in DC (including me) have more than enough background to understand that part, but very rarely understand the math you describe here, which is also necessary to understand why intelligence agencies would get involved in the first place.
This is not an x-risk or s-risk and should not be treated as one by the AI safety community. Treating it that way would mean stepping on the toes of powerful and dangerous people, not the inert market dynamics you imagine. I describe in these posts why it is a terrible idea for the AI safety community to step on the toes of powerful people; it doesn’t help that people adjacent to EA are already trying to mess with gain-of-function research AND AI capabilities advancement, both of which are near the top of the list of key military priority technologies. Messing with the NSA is a terrible thing to layer on top of that.
This post decisively demonstrates that AI safety is dropping the ball on something, but I doubt that privacy is that thing. I definitely think it’s possible that some people adjacent to Lesswrong could develop a privacy-preserving technology that would disrupt the current state of geopolitics, but that would just cause all the power players in the system to have an allergic reaction to it, and possibly to all of AI safety. Posts like this are required reading for understanding the role of AI in geopolitics, but largely because the technology you describe is already locked in and AI geopolitics revolves around it.
I can’t support your policy proposals on system change because I predict that they will result in catastrophe, basically throwing all of AI safety into the meat grinder. This technology is already thoroughly implemented, and is necessary for any government that doesn’t want to end up like East Germany. I’ve spent the last 3 years terrified that someone would discover the math and write a post like this, starting a cascade that damages US national security and gets AI safety blamed for it. However, it’s also true that AI safety is ABSOLUTELY dropping the ball on this topic, and by remaining ignorant of it, will probably end up shredded by invisible helicopter blades. So definitely continue doing this research, and don’t get discouraged by the idiots who let the media coverage of Cambridge Analytica trick them into thinking that predictive analytics doesn’t work on humans.
But please conduct that research in a way that reduces, rather than increases, the probability that AI safety ends up in the meat grinder. That is what I’ve been doing.