We need to grind hard to produce the first wave of AI propaganda, delivering the targeted payload that AI propaganda must be addressed.
This would likely bring it to the attention of adversarial actors who had not fully grasped its capabilities. Plenty of adversarial actors do grasp it, of course, so this may be an acceptable tradeoff, but seeing advertising specifically focused on publicizing new AI manipulation capabilities might change the dynamics for weaker adversarial actors who would otherwise take a while to realize just how far they could go. The reputation of whoever ran this advertising campaign would likely be tainted, imo correctly.
The project that does this would presumably be defunded upon succeeding, because none of the techniques it developed would work anymore.
But wouldn’t you forgive its funders? Some people construct pressures for such a project to be self-funding by tarring its funders by association, but that is the most dangerous possible model, because it gives the project both the ability and the incentive to perpetuate itself beyond the completion of its mission.
I actually wrote a little about this last November, but before October 2023 I was dangerously clueless about this very important dynamic, and I still haven’t properly grappled with it.
Like, what if only 5% of the NSA knows about AI-optimized influence, and they’re trying to prevent 25% of the NSA from finding out because that could be a critical mass and they don’t know what it would do to the country and the world if awareness spread that far?
What if in 2019 the US, Russian, and Chinese intelligence agencies knew that the tech was way too powerful, but Iran, South Africa, and North Korea didn’t, and the world becomes worse if they find out?
What if IBM or JP Morgan found out in 2022 and started trying to join the party? What if Disney predicted a storm was coming and took a high-risk strategy to join the big fish before it was too late (@Valentine)?
If we’re in an era of tightening nooses, world modelling becomes so much more complicated when you expand the circle beyond the big American and Chinese tech companies and intelligence agencies.
If Elon did this, everyone would leave Twitter. Twitter lists are the next best thing, because they delegitimize other social media platforms (which can’t do the same, because they would lose all their money) in a way that benefits Twitter.
Oh, you assume X is the only app that would be open to doing this? Hm
If it actually requires detailed user monitoring, I wonder if there are any popular platforms that let some advertisers have access to that.
I think it might not require that, in which case support from the app itself isn’t needed.
That’s a good assumption to call out. I’ve been thinking about X and lists a lot recently, but I haven’t properly considered who else would be working on improving civilization’s epistemics.
Elon Musk and his people seem to be in favor of promoting prediction-market-style improvements, which I think would probably more than compensate for influence tech, because widespread prediction market adoption facilitates sanity and moloch-elimination, and the current state of influence tech is primarily caused by civilizational derangement and moloch.
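To spell out the mechanism behind “facilitates sanity”: under a logarithmic market scoring rule (LMSR), the standard automated-market-maker design for prediction markets, a trader’s expected profit is maximized exactly when they move the price to their honest probability estimate. A minimal two-outcome sketch (the liquidity parameter and trade size are illustrative assumptions, not anything X has announced):

```python
import math

# Minimal two-outcome LMSR market maker (Hanson's logarithmic market
# scoring rule). q[i] is the number of outstanding shares of outcome i.
B = 100.0  # liquidity: larger B = deeper market, slower-moving prices

def cost(q):
    """LMSR cost function C(q) = B * log(sum_i exp(q_i / B))."""
    return B * math.log(sum(math.exp(qi / B) for qi in q))

def price(q, i):
    """Instantaneous price of outcome i: softmax(q / B)_i, always in (0, 1)."""
    total = sum(math.exp(qj / B) for qj in q)
    return math.exp(q[i] / B) / total

def buy(q, i, shares):
    """Amount a trader pays to buy `shares` of outcome i; mutates q."""
    before = cost(q)
    q[i] += shares
    return cost(q) - before

q = [0.0, 0.0]                     # fresh market: both outcomes priced at 0.5
paid = buy(q, 0, 60.0)             # a trader who thinks P(outcome 0) > 0.5 buys
print(round(paid, 2), round(price(q, 0), 3))  # price moves toward their belief
```

The incentive property is the whole point: a risk-neutral trader stops trading exactly when the price equals their subjective probability, so the market aggregates honest beliefs rather than applause.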
However, Twitter isn’t secure or sovereign enough to properly do optimized influence, because they would get detected and counteracted by sovereign big fish like Facebook or the NSA. They can, on the other hand, pump out algorithms that reliably predict truth and deploy them at scale, even if this threatens disinformation campaigns coming out of US state-adjacent agencies.
I’m not sure whether to trust Yudkowsky, who says Community Notes seem to work well, or Wikipedia, which claims Community Notes are routinely wrong or exploited by randos (rather than just by state-level actors, who can exploit anything) and lists tons of cases; if Wikipedia is right and the tech isn’t finished, then it’s all aspirational and we can’t yet evaluate whether Elon and Twitter are committed to civilization.
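For what it’s worth, the ranking model behind Community Notes is open-source: ratings are matrix-factorized into rater and note intercepts plus viewpoint factors, and a note is surfaced only if its intercept, the helpfulness left over after the viewpoint term absorbs one-sided agreement, clears a threshold. A toy sketch of that decomposition (the hyperparameters and data here are illustrative, not the production values):

```python
import random

# Toy bridging-based scoring in the style of Community Notes' published model:
#   rating(u, n) ~ mu + user_intercept[u] + note_intercept[n]
#                  + user_factor[u] * note_factor[n]
# A note is surfaced only if its learned intercept (helpfulness remaining
# after the viewpoint factor soaks up partisan agreement) exceeds a threshold.
def score_notes(ratings, n_users, n_notes, epochs=200, lr=0.05, reg=0.1):
    mu = 0.0
    iu = [0.0] * n_users               # rater intercepts
    inote = [0.0] * n_notes            # note intercepts (the scores we keep)
    fu = [random.uniform(-0.1, 0.1) for _ in range(n_users)]  # rater viewpoints
    fn = [random.uniform(-0.1, 0.1) for _ in range(n_notes)]  # note viewpoints
    for _ in range(epochs):
        for u, n, r in ratings:        # r = 1 helpful, 0 not helpful
            pred = mu + iu[u] + inote[n] + fu[u] * fn[n]
            err = r - pred
            mu += lr * err
            iu[u] += lr * (err - reg * iu[u])
            inote[n] += lr * (err - reg * inote[n])
            fu[u], fn[n] = (fu[u] + lr * (err * fn[n] - reg * fu[u]),
                            fn[n] + lr * (err * fu[u] - reg * fn[n]))
    return inote                       # surface note n iff inote[n] > threshold

# Note 0 is rated helpful by both camps; note 1 only by one camp.
ratings = [(0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),
           (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0)]
print(score_notes(ratings, n_users=4, n_notes=2))  # note 0 should score higher
```

The design intent is that coordinated one-sided rating inflates the factor term rather than the intercept, so naive brigading by randos buys less than it looks like it should; whether that survives contact with determined exploiters is the part I can’t evaluate yet.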
My understanding of advertisers is that they’re always trying to get a larger share of the gains from trade. I don’t know how successful advertisers tend to be, given information asymmetry and problems with sourcing talent; this gets interesting with powerful orgs like the Wall Street banks, e.g. JP Morgan, which seems to be trying out its own business model for sensor-related tech.
I don’t think Wikipedians are generally statistically minded, so when they say “routinely” they could mean it happens like 20 times a year. They’d probably notice most of the failures.
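To put numbers on the base-rate point, a quick back-of-envelope under made-up volumes (the denominator is purely hypothetical, chosen only to show the scale):

```python
# Hypothetical volumes, purely to illustrate the base-rate point.
notes_shown_per_year = 50_000   # assumption, not a real figure
bad_cases_per_year = 20         # one reading of Wikipedia's "routinely"

failure_rate = bad_cases_per_year / notes_shown_per_year
print(f"{failure_rate:.4%}")    # 0.0400% -- "routine" can still be rare
```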
Hm, my intuition would be that platforms aren’t totally on point at processing their data and would want to offload that work to advertisers, and there are enough competing big platforms now (YouTube, Instagram, X, TikTok) that they might not have enough bargaining power to defend their integrity.
That’s interesting, we have almost opposite stances on this.
My intuition is that Instagram, YouTube, and possibly TikTok are very on point at processing their data, with occasional catastrophic failures due to hacks by intelligence agencies that steal data and poison the original copy, and also incompetence/bloat (like spaghetti towers) or unexpected consequences of expanding into uncharted territory. Twitter/X, on the other hand, lacks the security required to do much more than show people ads based on easy-to-identify topics of interest.
Uh, I guess I meant that there’s no way they can do enough to give advertisers 30% of the value their data holds without giving many data brokers (whom advertisers contract) access to the data, because advertisers’ needs are too diverse and the skill ceiling is very high. This equilibrium might not have been reached yet, but I’d guess it eventually will be.
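One way to make the tension concrete: the alternative to handing data brokers raw access is an aggregate-only reporting interface, e.g. differentially private counts, which protects the data but caps the value an advertiser can extract. A minimal sketch of that kind of interface (the epsilon, suppression threshold, and query shape are all illustrative assumptions):

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(rows, predicate, epsilon=1.0, min_report=100):
    """Answer 'how many users match predicate' with epsilon-DP noise.
    A count query has sensitivity 1, so the noise scale is 1/epsilon."""
    true_count = sum(1 for row in rows if predicate(row))
    noisy = true_count + laplace_noise(1.0 / epsilon)
    # Suppress tiny cohorts outright (illustrative threshold).
    return max(0, round(noisy)) if noisy >= min_report else None

# The advertiser learns cohort sizes, never individual rows.
users = [{"id": i, "clicked_ad": random.random() < 0.03} for i in range(10_000)]
print(private_count(users, lambda u: u["clicked_ad"]))
```

Interfaces like this are roughly what a platform can offer while defending its integrity, and they deliver nowhere near 30% of the data’s value, which is why the pull toward broker access exists.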