Yes, this is a sensible response; have you seen Tristan Harris's documentary The Social Dilemma? It's a great introduction to some of the core concepts, though it doesn't cover everything.
Modelling users' behavior is not possible with normal data science, or for normal firms with normal data security, but it is something that very large, semi-sovereign firms like the Big 5 tech companies would have a hard time not doing, given such large and diverse sample sizes. Modelling minds deeply enough to predict one person from other people requires far less depth than it sounds; it falls out largely as a side effect of comparing people to one another across sufficiently large samples. The dynamic is described in the passage I've cited previously; a toy version of the prediction mechanism is sketched below.
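To make the "predict people from other people" point concrete, here is a minimal, hypothetical sketch in the style of user-user collaborative filtering. Everything in it (the synthetic engagement matrix, the cosine-similarity measure, the parameter names) is an illustrative assumption, not a description of any real platform's system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows are users, columns are content items; entries are observed
# engagement scores, with 0 meaning "no interaction recorded".
# All of this data is synthetic; real systems operate at vastly larger scale.
engagement = rng.random((1000, 50))
engagement[engagement < 0.6] = 0.0  # make most user-item pairs unobserved

def predict(user_idx: int, item_idx: int, k: int = 20) -> float:
    """Estimate one user's engagement with an item from the k most
    similar users (by cosine similarity) who did interact with it."""
    target = engagement[user_idx]
    norms = np.linalg.norm(engagement, axis=1) * np.linalg.norm(target)
    sims = engagement @ target / np.where(norms == 0, 1.0, norms)
    sims[user_idx] = -np.inf  # never compare the user to themselves
    # Candidate "raters": users with recorded engagement on this item.
    raters = np.flatnonzero(engagement[:, item_idx] > 0)
    raters = raters[sims[raters] > 0]  # keep positively similar raters only
    raters = raters[np.argsort(sims[raters])[::-1][:k]]
    if raters.size == 0:
        return 0.0
    weights = sims[raters]
    return float(weights @ engagement[raters, item_idx] / weights.sum())

print(predict(user_idx=3, item_idx=7))  # hypothetical user/item indices
```

The point of the sketch is the scaling behavior: nothing here models an individual mind, yet as the number of rows grows, the nearest-neighbor estimate improves, which is the "side effect of comparing people to other people" described above.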
Generally, inducing mediocrity while on the site is a high priority, but it's mainly about numbness and suppressing higher thought, e.g. the kinds referenced in Critch's takeaways on CFAR and the Sequences. They want the reactions to content to emerge from your true self, but they don't want any of the other stuff that comes with higher thinking or self-awareness.
You’re correct that an extremely atypical mental state on the platform would damage the data (I notice this makes me puzzled about “doomscrolling”); however, what they’re aiming for is a typical state for all users (plus whatever keeps them akratic while off the platform), and for elite groups like the AI safety community, the typical state for the average user is quite a downgrade.
Advertising was the big driver last decade, but with modern systems stable growth is the priority. Maximizing ad purchases would harm users in a visible way, so finding the sweet spot is easy: just don't put much effort into ad matching (and visibly predictive advertising creeps users out anyway, the same issue as pushing people to use the platform for 3-4 hours a day). Acquiring and retaining large numbers of users is far harder and far more important, now that systems are advanced enough to compete more against each other (less predictable) than against the user's free time (more predictable, especially given how much user data has been collected during past scandals, though all kinds of things could still happen).
On the intelligence-agency side, the big players are probably more interested by now in public sentiment about Ukraine, NATO, elections/democracy, COVID, etc. than in causing or preventing domestic terrorism (though I might be wrong about that).
Happy to talk or debate further tomorrow.
Once again you are making a ton of confident statements and offering no actual evidence: "is a high priority", "they want", "they don't want", "what they're aiming for is", etc. So far as I can see, you don't in fact know any of this, and I don't think you should state as fact things you don't have solid evidence for.
They want data. They strongly prefer data on elites (and data useful for analyzing and understanding elite behavior) over data on commoners.
We are not commoners.
These aren’t controversial statements, and if they are, they shouldn’t be.
Whenever someone uses “they,” I get nervous.