If the big 5 earned a trillion dollars in annual ad revenue (they don't; global ad spending in total is only about $1T), you would have to model that as utility gained. In a world where the analytics were cruder, using simple statistics similar to FICO scoring rather than deep learning, how much less value would they create, and how much less would advertisers pay? A big part of big tech's revenue is just monopoly rents.
I would guess the difference is well under $100B, possibly under $10B.
If all the AI models in the world bring in under $10B combined, that would explain the slow progress in the field.
Apple makes roughly $400B in revenue per year and Microsoft around $200B/year, and my argument in other posts is that human manipulation is a big part of their business model and moat, including government ties which in turn guarantee favorable policies. I remain uncertain (this is clearly an update against), but I still think it's possible that the value generated is around $1 trillion per year. It's hard to know what these companies would look like if they lost the moat they gained from influence capabilities, only that companies like Netflix and Twitter don't have the security or data required, and they are much less valuable.
Although in the original context, I was basically saying "deepfakes make like no money, whereas targeted influence makes, like, trillions of dollars".
I had a little bit of a thought on this, and I think the argument extends to other domains.
Major tech companies, and all advertising-dependent businesses, choose what content to display mainly to optimize revenue. Or,
content_shown = argmax(estimated_engagement(filtered(content[])))
Where content[] is all the possible videos YouTube has the legal ability to show, articles a newspaper has the legal ability to publish, posts Reddit has the legal ability to allow on its site, etc.
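A minimal sketch of that selection rule in Python (the names here are hypothetical stand-ins, not any platform's real API):

```python
def select_content(content, passes_filters, estimate_engagement):
    """Return the most engaging item that survives every filter.

    passes_filters: predicate, e.g. advertiser-acceptability and legal checks
    estimate_engagement: learned model scoring expected engagement
    (both are assumed stand-ins for whatever proprietary systems a platform runs)
    """
    candidates = [item for item in content if passes_filters(item)]
    return max(candidates, key=estimate_engagement)
```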
Several things immediately jump out at me:
1. Filtering anything costs revenue. That is to say, in the aggregate, any kind of filter at all makes it less likely that the most engaging content will be shown (see the toy simulation after this list). So the main "filter" used is for content that advertisers find unacceptable, not content that the tech company disagrees with. This means that YouTube videos disparaging YouTube are just fine.
2. Deep political manipulation mostly requires a lot of filtering, which lowers revenue. Choosing which news to show is the same idea.
3. Really destructive outcomes, like the severe political polarization in the USA and mass shootings, are likely unintended consequences of picking the most engaging content.
4. Consider what happens when a breaking news event or a major meme wave hits, and you filter it out because your platform finds the material detrimental to some long-term goal. That content is now missing from your platform, which sends engagement and revenue to competitors.
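Here is the toy simulation of point 1, with made-up engagement scores: a filtered pool is a subset of the full pool, so the best surviving score can never beat the unfiltered best.

```python
import random

random.seed(0)
scores = [random.random() for _ in range(10_000)]  # stand-in engagement scores

unfiltered_best = max(scores)
# Apply even a mild, random 10% filter; the surviving pool is a subset,
# so its maximum can only be equal or lower.
surviving = [s for s in scores if random.random() > 0.10]
print(max(surviving) <= unfiltered_best)  # True: filtering never raises the argmax
```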
General idea: any ulterior motive other than argmaxing for right now (whether it's tech companies trying to manipulate perception, or some plotting AI system trying to pull off a complex long-term plan) costs you revenue and future ability to act, and has market pressure working against it.
This reminds me of the general idea of AI systems trying to survive by trading services with humans for the things they need. The thing is, in some situations, 99% or more of all the revenue paid to the AI service goes to just keeping it online. It's not getting a lot of excess resources for some plan. The same holds for humans: what keeps humans 'aligned' is that almost all their resources are needed just to keep themselves alive, and to raise enough offspring to replace their own failing bodies. There's very little slack for, say, founding a private army with the resources to overthrow the government.
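Putting illustrative numbers on that (the figures are assumptions, not from anywhere):

```python
revenue = 1_000_000      # hypothetical: what customers pay the AI service per year
upkeep_fraction = 0.99   # assumed share consumed by compute, hosting, maintenance
slack = revenue * (1 - upkeep_fraction)
print(f"discretionary budget: {slack:.0f}")  # 10000 -- little room for any long-term plan
```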