Let us suppose that social media apps and sites are, as you imply, in the business of trying to build sophisticated models of their users’ mental structures. (I am not convinced they are—I think what they’re after is much simpler—but I could be wrong, they might be doing that in the future even if not now, and I’m happy to stipulate it for the moment.)
If so, I suggest that they’re not doing that just in order to predict what the users will do while they’re in the app / on the site. They want to be able to tell advertisers “_this_ user is likely to end up buying your product”, or (in a more paranoid version of things) to be able to tell intelligence agencies “_this_ user is likely to engage in terrorism in the next six months”.
So inducing “mediocrity” is of limited value if they can only make their users more mediocre while they are in the app / on the site. In fact, it may be actively counterproductive. If you want to observe someone while they’re on TikTok and use those observations to predict what they will do when they’re not on TikTok, then putting them into an atypical-for-them mental state that makes them less different from other people while on TikTok seems like the exact opposite of what you want to do.
I don’t know of any good reason to think it at all likely that social media apps/sites have the ability to render people substantially more “mediocre” permanently, so as to make their actions when not in the app / on the site more predictable.
If the above is correct, then perhaps we should expect social media apps and sites to be actively trying not to induce mediocrity in their users.
Of course it might not be correct. I don’t actually know what changes in users’ mental states are most helpful to social media providers’ attempts to model said users, in terms of maximizing profit or whatever other things they actually care about. Are you claiming that you do? Because this seems like a difficult and subtle question involving highly nontrivial questions of psychology, of what can actually be done by social media apps and sites, of the details of their goals, etc., and I see no reason for either of us to be confident that you know those things. And yet you are happy to declare with what seems like utter confidence that of course social media apps and sites will be trying to induce mediocrity in order to make users more predictable. How do you know?
Yes, this is a sensible response; have you seen *The Social Dilemma*, the documentary featuring Tristan Harris? It’s a great introduction to some of the core concepts, though not to everything.
Modelling users’ behavior is not possible with normal data science, or for normal firms with normal data security; but it is something that very large, semi-sovereign firms like the Big 5 tech companies would have a hard time *not* doing, given such large and diverse sample sizes. Modelling of minds, sufficient to predict people based on other people, requires far less depth than it sounds: it largely falls out as a side effect of comparing people to other people at sufficiently large sample sizes. The dynamic is described in this passage I’ve cited previously.
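To make the “predicting people based on other people” point concrete, here is a minimal, hypothetical sketch of comparison-based prediction: a user’s unobserved reaction to an item is estimated from the reactions of the most similar users. All the numbers are made up for illustration, and this is nothing like a production recommender; it only shows that the technique needs lookalike users, not a deep model of any individual mind.

```python
import numpy as np

# Rows = users, columns = items; entries are observed engagement scores.
# 0 marks "not yet shown". All data here is invented for illustration.
ratings = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [1.0, 0.0, 5.0, 4.0],
    [0.0, 1.0, 5.0, 4.0],
])

def predict(ratings, user, item, k=2):
    """Estimate one user's reaction to an item from the k most similar users."""
    # Cosine similarity between the target user and every other user.
    norms = np.linalg.norm(ratings, axis=1)
    sims = ratings @ ratings[user] / (norms * norms[user] + 1e-9)
    sims[user] = -np.inf  # exclude the user themselves
    # Only neighbours who have actually engaged with the item count.
    neighbours = np.flatnonzero(ratings[:, item] > 0)
    neighbours = neighbours[np.argsort(sims[neighbours])[::-1]][:k]
    if len(neighbours) == 0:
        return None
    weights = sims[neighbours]
    # Similarity-weighted average of the lookalike users' reactions.
    return float(weights @ ratings[neighbours, item] / weights.sum())

# User 1 never saw item 1; estimate their reaction from lookalike users.
estimate = predict(ratings, user=1, item=1)
```

The estimate lands between the neighbours’ observed scores; with enough users, almost everyone has close lookalikes, which is the sense in which sample size substitutes for depth.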
Generally, inducing mediocrity while on the site is a high priority, but it’s mainly about numbness and the suppression of higher thought, e.g. the kinds referenced in Critch’s takeaways on CFAR and the Sequences. They want your reactions to content to emerge from your true self, but they don’t want any of the other stuff that comes with higher thinking or self-awareness.
You’re correct that an extremely atypical mental state on the platform would damage the data (I notice this makes me puzzled about “doomscrolling”). However, what they’re aiming for is a typical state for all users (plus whatever keeps them akratic while off the platform), and for elite groups like the AI safety community, the typical state of the average user is quite a downgrade.
Advertising was the big thing last decade, but with modern systems, stable growth is the priority. Maximizing ad purchases would harm users in a visible way, so finding the sweet spot is easy if you just don’t put much effort into ad matching (and besides, users noticing that the advertising is predictive creeps them out, the same issue as making people use the app for 3–4 hours a day). Acquiring and retaining large numbers of users is far harder and far more important, now that systems are advanced enough to compete more against each other (less predictable) than against the user’s free time (more predictable, especially given how much user data has been collected during scandals, though all kinds of things could still happen).
On the intelligence-agency side, the big players are probably more interested by now in public sentiment about Ukraine, NATO, elections/democracy, COVID, etc., than in causing or preventing domestic terrorism (though I might be wrong about that).
Once again you are making a ton of confident statements and offering no actual evidence. “is a high priority”, “they want”, “they don’t want”, “what they’re aiming for is”, etc. So far as I can see you don’t in fact know any of this, and I don’t think you should state things as fact that you don’t have solid evidence for.
Happy to talk or debate further tomorrow.
They want data. They strongly prefer data on elites (and data useful for analyzing and understanding elite behavior) over data on commoners.
We are not commoners.
These aren’t controversial statements, and if they are, they shouldn’t be.
Whenever someone uses “they,” I get nervous.