Although clown attacks may seem mundane on their own, they are a case study suggesting that powerful human thought-steering technologies have probably already been invented, deployed, and tested at scale by AI companies, and are reasonably likely to end up being weaponized against the entire AI safety community at some point in the next 10 years.
I agree that clown attacks seem to be possible. I accept a reasonably high probability (c70%) that someone has already done this deliberately—the wilful denigration of the Covid lab leak seems like a good candidate, as you describe. But I don’t see evidence that deliberate clown attacks are widespread. And specifically, I don’t see evidence that these are being used by AI companies. (I suspect that most current uses are by governments.)
I think it’s fair to warn against the risk that clown attacks might be used against the AI-not-kill-everyone community, and that this might have already happened, but you need a lot more evidence before asserting that it has already happened. If anything, the opposite has occurred, as the CEOs of all major AI companies signed onto the declaration stating that AGI is a potential existential risk. I don’t have quantitative proof, but from reading a wide range of media across the last couple of years, I get the impression that the media and general public are increasingly persuaded that AGI is a real risk, and are mostly no longer deriding the AGI-concerned as being low-status crazy sci-fi people.
I agree with some of this. I admit that I’ve been surprised several times by leading AI safety community orgs outperforming my expectations, from Openphil to MIRI to OpenAI. However, considering the rate at which the world has been changing, I think that the distance between 2023 and 2033 is more like the distance between 2023 and 2003, and the whole point of this post is taking a step back and looking at the situation, which is actually pretty bad.
I think that between the US/China AI competition; the AI companies competing with each other under the US umbrella, as well as against dark AI companies like Facebook and against companies that Microsoft, Apple, or Amazon might start indigenously under their full control; and the possibility of the US government taking a treacherous turn and becoming less democratic more broadly (e.g. due to human behavior manipulation technology), I’m still pessimistic that the 2020s have more than a 50% chance of going well for AI safety. For example, the AI safety community might theoretically be forced to choose between rallying behind a pause vs. leaving humanity to die, and if they were to choose the pause in that hypothetical, then it’s reasonable to anticipate a 40% chance of conflict.