Each news organization is simply trying to maximize its revenue. Institutions like the New York Times have an additional constraint: maintaining their reputation, which binds them both to Overton windows and to generally having a source for their claimed facts. [This has become a problem because internet competitors have decided to simply make up facts or rely on very weak signals.]
So you get the articles you see. Yes, the system has these meta feedback loops, but none of it is intentional. Everyone is just acting in their perceived best interests, and we get this craziness.
How can we make it better? The problem seems to have two components:
a. Out of all the news in the world, the selected portion most adults will see may not maximize social utility. [Arguable: because it's a market in which each reader is choosing and giving feedback through those choices, an economist might argue that clickbait is just the greatest good for the greatest number.]
b. While we share a ground-truth reality, and by weighting the quality of the evidence from various sources it is possible to deduce what that ground truth is, the headlines are flooded with lies and bad conclusions.
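The idea in (b), that you can deduce ground truth by weighting the quality of evidence, can be sketched minimally. This is a toy illustration, not anything from the original text: each source reports a number along with an assumed reliability score, and we take the reliability-weighted average. The reports and reliability values below are made up.

```python
def weighted_estimate(reports):
    """reports: list of (claimed_value, reliability in [0, 1]) pairs.

    Returns the reliability-weighted average of the claimed values.
    """
    total_weight = sum(r for _, r in reports)
    if total_weight == 0:
        raise ValueError("no credible evidence")
    return sum(v * r for v, r in reports) / total_weight

# Hypothetical, invented reports of the same quantity from three outlets:
reports = [
    (600_000, 0.9),    # high-reliability source
    (1_500_000, 0.2),  # low-reliability source
    (500_000, 0.9),    # high-reliability source
]
print(round(weighted_estimate(reports)))  # the outlier is mostly discounted
```

Even this crude scheme shows the point: a low-reliability outlier barely moves the estimate, while agreement among high-reliability sources dominates.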
I don’t know how to solve either. I will mention that the clickbait problem is, succinctly, this:
There are common human needs, such as the desire for attractive mates or to lose weight, but clickbait never provides the real information that might satisfy those needs.
And (b) seems to require a queryable AI oracle. That actually seems achievable: “Computer, how many individuals were at the Trump inauguration?” We have tools to translate that query into searchable terms [GPT-3], other tools that could search every possible source of information [crawling systems like Google’s], and then you would need an engine that constructs an answer by weighting each source by quality.
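The three-stage pipeline just described can be sketched as a skeleton. Everything here is a stand-in of my own invention: a real system would use a language model for the query rewriting, a crawler-backed index for retrieval, and a learned model for source quality; the hard-coded toy data exists only so the skeleton runs.

```python
from collections import defaultdict

def to_search_terms(question):
    # Stand-in for an LM-based query rewriter (the text suggests GPT-3).
    return question.lower().rstrip("?").split()

def retrieve(terms):
    # Stand-in for a crawl/index lookup (the text suggests Google-scale
    # crawling). Returns (source_quality, claimed_answer) pairs.
    # Hard-coded, invented data for illustration only.
    return [
        (0.9, "about 600,000"),
        (0.2, "1.5 million"),
        (0.8, "about 600,000"),
    ]

def answer(question):
    terms = to_search_terms(question)
    votes = defaultdict(float)
    for quality, claim in retrieve(terms):
        votes[claim] += quality  # weight each claimed answer by source quality
    return max(votes, key=votes.get)  # best-supported claim wins

print(answer("How many individuals were at the Trump inauguration?"))
```

The design choice worth noting is that the “engine” at the end is just weighted voting over claims; the two genuinely hard parts, estimating source quality and recognizing when two differently worded claims are the same claim, are exactly what the stubs hide.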