This would likely bring it to the attention of adversarial actors who had not fully grasped its capabilities. Plenty of adversarial actors do grasp it, of course, so this may be an acceptable tradeoff, but seeing advertising specifically focused on publicizing new AI manipulation capabilities might change the dynamics for weaker adversarial actors who would otherwise take a while to realize just how far they could go. The reputation of whoever ran this advertising campaign would likely be tainted, imo correctly.
The project that does this would presumably be defunded upon succeeding, because none of the techniques it developed would work any more.
But wouldn’t you forgive its funders? Some people create pressure for such a project to be self-funding by tarring its funders by association, but that is the most dangerous possible model, because it creates a situation where the project has both the ability and the incentive to perpetuate itself beyond the completion of its mission.
I actually wrote a little about this last November, but before October 2023 I was dangerously clueless about this very important dynamic, and I still haven’t properly grappled with it.
Like, what if only 5% of the NSA knows about AI-optimized influence, and they’re trying to prevent 25% of the NSA from finding out because that could be a critical mass and they don’t know what it would do to the country and the world if awareness spread that far?
What if in 2019 the US, Russian, and Chinese intelligence agencies knew that the tech was way too powerful, but the Iranian, South African, and North Korean ones didn’t, and the world would become worse if they found out?
What if IBM or JP Morgan found out in 2022 and started trying to join in the party? What if Disney predicted a storm was coming and took a high-risk strategy to join the big fish before it was too late (@Valentine)?
If we’re in an era of tightening nooses, world modelling becomes so much more complicated when you expand the circle beyond the big American and Chinese tech companies and intelligence agencies.