One thing to keep in mind is sampling biases in social media, which are HUGE.
Even if we just had pure date-ordered posts from people we followed, in a heterogeneous social network with long-tailed popularity distributions the “median user” sees “the average person they follow” having more friends than they do (the classic friendship paradox).
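(If the friendship-paradox point isn’t intuitive, here is a minimal Python sketch. The toy model, where follows go to accounts in proportion to a Pareto-distributed popularity weight, and every number and name in it, are assumptions for illustration, not data about any real network:)

```python
import random
from itertools import accumulate

random.seed(0)
N = 10_000              # toy population size (made-up parameter)
FOLLOWS_PER_USER = 50   # follows per user (made-up parameter)

# Give each account a heavy-tailed "popularity" weight (Pareto-like).
popularity = [random.paretovariate(1.2) for _ in range(N)]
cum_pop = list(accumulate(popularity))
accounts = range(N)

# Each user follows accounts with probability proportional to popularity,
# so popular accounts show up in almost everyone's follow list.
follow_lists = [
    random.choices(accounts, cum_weights=cum_pop, k=FOLLOWS_PER_USER)
    for _ in range(N)
]

# Tally followers for each account.
follower_count = [0] * N
for fl in follow_lists:
    for a in fl:
        follower_count[a] += 1

# Compare the median user's own follower count with the average follower
# count of the accounts that users follow.
median_own = sorted(follower_count)[N // 2]
mean_of_followed = sum(
    sum(follower_count[a] for a in fl) / len(fl) for fl in follow_lists
) / N

print("median user's own followers:", median_own)
print("mean followers of the accounts people follow: %.1f" % mean_of_followed)
# The second number comes out far larger: even a plain date-ordered feed is
# dominated by accounts more popular than the person reading it.
```

The follower-weighted sampling is doing all the work here: popular accounts get counted over and over across everyone’s follow lists, so the view from inside any single feed is skewed upward.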
Also, posting behavior tends to have a long tail of its own, so sloppy prolific writers are more visible than slow careful writers. (Arguably Asimov himself was an example here: he was *insanely* prolific. Multiple books a year for a long time, plus stories, plus correspondence.)
Then, to make the social media sampling challenges worse, the algorithms surface content to mere users that is optimized for “engagement”, and what could be more engaging than the opportunity to tell someone they are “wrong on the Internet”? Unless someone is using social media very *very* mindfully (like trying to diagonalize what the recommendation engines think of them), they are going to see whatever causes them to react.
I don’t know what is really happening to the actual “average mind” right now, but I don’t think many other people know either. If anyone has strong claims here, it makes me very curious about their methodology.
The newsfeed team at Facebook probably has the data to figure a lot of this out, but there is very little incentive for them to be very critical or tell the truth to the public. However, in my experience, the internal cultures of tech companies are often not that far below/behind the LW zeitgeist, and I think engineering teams sometimes even go looking for things like “quality metrics” they can try to boost (counting uses of the word “therefore”, or the equivalent idea implemented with semantic embedding spaces) as a salve for their consciences.
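(To make the “quality metric” idea concrete, here is a hypothetical toy version in Python. The marker list, function name, and scoring rule are all invented for illustration, and are not a description of any real ranking system; the embedding-space variant would swap the word list for similarity to exemplar “reasoned” posts:)

```python
import re

# Hypothetical "reasoning quality" signal: the fraction of tokens in a post
# that are explicit reasoning connectives. Purely illustrative; a real team
# might use semantic embeddings instead of a hard-coded word list.
REASONING_MARKERS = {"therefore", "because", "thus", "hence", "however"}

def reasoning_score(text: str) -> float:
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in REASONING_MARKERS for t in tokens) / len(tokens)

posts = [
    "lol wrong, blocked",
    "I think the sample is biased, therefore the headline conclusion doesn't follow.",
]
for p in posts:
    print(f"{reasoning_score(p):.3f}  {p!r}")
```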
More deeply, like on historical timescales, I think that repeated low level exposure to lying liars improves people’s bullshit detectors.
By modern standards, people who first started listening to radio were *insanely gullible* in response to the sound of authoritative voices, both in the US and in Germany. Similarly for TV a few decades later. The very first ads on the Internet (primitive though they were) had incredibly high conversion rates… For a given “efficacy” of any kind of propaganda, more of the same tends to have less effect over time.
I fully expect this current media milieu to be considered charmingly simple, with gullible audiences and hamhanded influence campaigns, relative to the manipulative tactics that will be invented in future decades, because this stuff will stop working :-)