Trevor, when I see this problem, which expands into the general problem that, in the near future, any image, audio, or video recording can be faked—I don’t see any obvious solutions except for ‘white channel’ protection. You are talking about a subset of the problem—content that has been manipulated to cause a particular response from the victim.
Meaning that, since in the near future any website on the internet could be totally falsified and any message or email you get from a ‘real human’ may actually be from an AI, the only way to know whether anything is real is to have some kind of list of trusted agents.
Verified human identities for emails, chats, etc.; some kind of license for legitimate news agencies; digital signatures and security mechanisms to verify that senders and websites belong to their actual, legitimate owners.
Cameras would almost need to be constantly streaming to a trusted third-party server: similar to how, if your phone uploads to Google Photos shortly after an event happens, or the police upload body-camera footage to evidence.com, it is less likely with present technology that the records are completely fake.
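To make the digital-signature idea concrete, here is a minimal sketch of sign-and-verify provenance, assuming the third-party Python cryptography package; the camera/registry setup is purely hypothetical and just illustrates the mechanism, not any existing standard.

# Minimal sketch of 'white channel' provenance: a registered device signs what it
# records, and anyone with the registered public key can check the bytes are unaltered.
# Uses the third-party 'cryptography' package; the scenario itself is hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generated once, when the camera or news agency registers with a trusted directory.
device_key = Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()

recording = b"...raw bytes of a photo, video chunk, or article..."

# The device signs the content at capture time (in practice, a hash plus a timestamp).
signature = device_key.sign(recording)

# A verifier who trusts the directory checks the signature against the registered key.
try:
    device_public_key.verify(signature, recording)
    print("valid: bytes match what the registered device produced")
except InvalidSignature:
    print("invalid: content was altered or is not from this device")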
Have you thought more on this? Is there any legislation proposed or discussion of how to deal with this problem?
Yes, my thinking about deepfakes is that they haven’t been succeeding at influence, making no money, and everyone worries about them; whereas automated targeting and response prediction have been succeeding at influence with flying colors, making trillions of dollars and dominating the economy, and nobody worries about them.
I’m sympathetic to the risk that generative AI will be able to output combinations of words that are well-suited to the human mind, and also generated or AI-altered images with totally invisible features that affect the deep structures of the brain to cause people to measurably think about a targeted concept more frequently. But I see lots of people working on preventing that even though it hasn’t been invented yet, and I don’t currently see that tech succeeding at all without measurability and user data/continuous feedback.
I don’t see this tech as a global catastrophic risk that should steal priority from AI or biorisk, but as an opportunity for world modelling (e.g. understanding how AI plays into US-China affairs) and for reducing the AI safety community’s vulnerability to attacks as AI safety and orgs like OpenAI become more important on the global stage.
Anecdotally, an increasingly high ratio of the ads I get on mobile YouTube use AI-aided image generation. I recognize it because I like to mess with these tools and know intuitively what they can’t currently do well, but I expect most people wouldn’t realize how the pictures have been edited, and the advertisers are those very big companies that you hadn’t heard of a year ago.
Do you have direct data on that? Consumer preferences are affected by advertising, although they are also affected by cost, and by the fact that mainstream products tend to be pretty good simply from generations of consumer preference.
For example on the margin, does recsys “make” trillions?
Sorry, I was referring to the big 5 tech companies, which made trillions of dollars for investors in aggregate. I was more thinking about how prominent the tech is compared to deepfakes, not the annual revenue gained from that tech alone (although that figure might also be around 1 trillion per year depending on how much of their business model is totally dependent on predictive analytics).
Even if the big 5 earned a trillion in annual revenue from ads (they don’t; annual global ad spending is about 1T), you have to model it as the marginal utility gain. In a world where the analytics were cruder, using mere statistics similar to FICO rather than deep learning, how much less value would they create, and how much less would advertisers pay? A big part of the revenue for big tech is just monopoly rents.
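A back-of-the-envelope version of that marginal question, with purely hypothetical placeholder numbers (only the ~1T global ad-spend figure comes from the comment above):

# Hypothetical back-of-the-envelope model of the marginal value of DL targeting.
# Every number except global ad spend is a placeholder, not an estimate.
global_ad_spend = 1.0e12            # ~1T/year global ad spending (figure above)
share_through_targeting = 0.5       # hypothetical share flowing through big-tech targeting
uplift_over_crude_stats = 0.10      # hypothetical: DL worth 10% more to advertisers than FICO-style stats

marginal_value = global_ad_spend * share_through_targeting * uplift_over_crude_stats
print(f"{marginal_value / 1e9:.0f}B per year")   # 50B with these placeholder numbers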
I would guess it’s well under 100B, possibly under 10B.
If all the AI models in the world bring in under 10B combined, this would explain the slow progress in the field.
Apple makes ~400B in revenue per year and Microsoft makes around ~200B/y, and my argument in other posts is that human manipulation is a big part of their business model/moat, including government ties which in turn guarantee favorable policies. I remain uncertain (this is clearly an update against), but I still think it’s possible that the value generated is around 1 trillion per year; it’s hard to know what these companies would look like if they lost the moat they gained from influence capabilities, only that companies like Netflix and Twitter don’t have the security or data required, and they are much less valuable.
Although in the original context, I actually was basically saying “deepfakes make like no money whereas targeted influence makes like, trillions of dollars”.
I had a little bit of a thought on this, and I think the argument extends to other domains.
Major tech companies, and all advertising-dependent businesses, choose what content to display mainly to optimize revenue. Or,
content_shown = argmax(estimated_engagement( filtered(content[]) ) )
Where content[] is all the possible videos YouTube has the legal ability to show, articles a newspaper has the legal ability to publish, posts Reddit has the legal ability to allow on their site, etc.
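As a minimal sketch of that selection rule (all item names, scores, and filters below are made up; estimated_engagement stands in for whatever learned recommender the platform actually runs):

# Minimal sketch of content_shown = argmax(estimated_engagement(filtered(content[]))).
# All items, scores, and filters are hypothetical placeholders.
def estimated_engagement(item):
    return item["predicted_watch_minutes"]      # hypothetical engagement proxy

def advertiser_safe(item):
    return not item["advertiser_unfriendly"]    # the main filter actually applied in practice

def content_shown(content, filters=(advertiser_safe,)):
    candidates = [c for c in content if all(f(c) for f in filters)]
    return max(candidates, key=estimated_engagement)   # the argmax step

catalog = [
    {"title": "calm tutorial", "predicted_watch_minutes": 3.0, "advertiser_unfriendly": False},
    {"title": "outrage bait", "predicted_watch_minutes": 9.0, "advertiser_unfriendly": False},
    {"title": "shock content", "predicted_watch_minutes": 12.0, "advertiser_unfriendly": True},
]
print(content_shown(catalog)["title"])   # -> outrage bait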
Several things immediately jump out at me:
1. Filtering anything costs revenue. That is to say, in aggregate, any kind of filter at all makes it more likely that the most engaging content won’t be shown (see the small illustration after this list). So the main “filter” used is for content that advertisers find unacceptable, not content that the tech company disagrees with. This means that YouTube videos disparaging YouTube are just fine.
2. Deep political manipulation mostly needs a lot of filtering, and this lowers revenue. Choosing what news to show is the same idea.
3. Really destructive things, like the severe political polarization in the USA, and mass shootings, are likely an unintended consequence of picking the most engaging content.
4. Consider what happens if a breaking news event or a major meme wave occurs, and you filter it out because your platform finds the material detrimental to some long-term goal. That particular content is now missing from your platform, which sends engagement and revenue to competitors.
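The structural fact behind point 1 is just that a filter selects a subset, and the best score in a subset can never exceed the best score in the full pool; trivially:

# Point 1 in one line: filtering is subset selection, so the best remaining
# engagement score can never exceed the unfiltered best.
scores = [3.0, 9.0, 12.0]                   # hypothetical engagement estimates
filtered = [s for s in scores if s < 10]    # any filter whatsoever
assert max(filtered) <= max(scores)         # 9.0 <= 12.0: the filter can only cost engagement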
General idea: any kind of ulterior motive other than argmaxing for right now—whether it be tech companies trying to manipulate perception, or some plotting AI system trying to pull off a complex long-term plan—costs you revenue and future ability to act, and has market pressure against it.
This reminds me of the general idea where AI systems are trying to survive by trading services with humans for things that they need. The thing is, in some situations, 99% or more of all the revenue paid to the AI service goes to just keeping it online. It’s not getting a lot of excess resources for some plan. Same idea for humans: what keeps humans ‘aligned’ is that almost all resources are needed just to keep themselves alive and to create enough offspring to replace their own failing bodies. There’s very little slack for, say, founding a private army with the resources to overthrow the government.