In some ways Google and Facebook already create echo chambers, since they personalize your search results or your feed to maximize “user engagement”.
True, and moreover the internet itself promotes the creation of echo chambers because it’s very easy to choose only those sources of information which confirm your existing views and beliefs.
I think it’s a pretty serious problem, and one that we should try not to make any worse.
Maybe we could somehow combine the creation of echo chambers with exploration.
I do not have a specific solution, but here is what I am trying to achieve: Suppose that I have a political view X, and I oppose a political view Y. Let’s say that I intolerantly believe that most people from Y are idiots. Then, at some moment, somehow (perhaps by clicking a button: “I feel adventurous today, show me a random article outside of my bubble”), I find one person with a view Y who seems smart. I still believe the others are idiots, but this one is an exception. So I decide to expand my bubble by including this specific person, and stuff recommended by them.
For this solution to work, we need a good scoring algorithm, because I respect only this one specific person from group Y and dislike most of the others. If the system gives me articles recommended by this specific person, I may enjoy them; but if it just gives me articles recommended by people from group Y in general, I will dislike them. So the ability to build a very specific bubble is a prerequisite for the ability to explore other people’s bubbles.
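A minimal sketch of what such person-level scoring might look like, assuming a toy model where every user publishes a set of recommended articles (all names and data structures here are hypothetical, not any real platform’s API):

```python
import random
from collections import defaultdict

def score_articles(my_follows, recommendations):
    """Score articles only by the specific people I chose to follow,
    ignoring what group Y as a whole recommends.

    my_follows:      set of user ids I explicitly added to my bubble
    recommendations: dict of user id -> set of recommended article ids
    """
    scores = defaultdict(int)
    for user, articles in recommendations.items():
        if user in my_follows:              # person-level, not group-level
            for article in articles:
                scores[article] += 1
    return sorted(scores, key=scores.get, reverse=True)

def feeling_adventurous(my_bubble_articles, all_articles):
    """The “I feel adventurous today” button: one random article
    from outside my bubble."""
    outside = list(set(all_articles) - set(my_bubble_articles))
    return random.choice(outside) if outside else None
```

The only point of the sketch is the shape of the data: trust attaches to individual people rather than to groups, and exploration is an explicit, opt-in action.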
My intuition is that a bubble made for one person is better than a bubble made for a group. And even better would be if the algorithm could recognize different “factors” within that person. For example, there would be a bubble of information optimized for me, but someone could choose that they want to see my favorite articles about rationality, but not my favorite articles about programming; and the system could classify them correctly. Analogously, there could be a person who is really good at some topic, but completely mindkilled about some other topic, so I would decide to follow them on that one topic only.
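Extending that toy model, a follow could carry a set of topics instead of being all-or-nothing. A hedged sketch, assuming some classifier already assigns each article a topic label (how that classifier works is exactly the hard part I am waving my hands about):

```python
from collections import defaultdict

ALL_TOPICS = object()   # sentinel: an unrestricted follow

def score_by_topic(my_follows, recommendations, article_topics):
    """my_follows:      user id -> set of topics I trust them on, or ALL_TOPICS
    recommendations: user id -> set of recommended article ids
    article_topics:  article id -> topic label (e.g. "rationality")
    """
    scores = defaultdict(int)
    for user, articles in recommendations.items():
        trusted = my_follows.get(user)
        if trusted is None:
            continue                        # not in my bubble at all
        for article in articles:
            if trusted is ALL_TOPICS or article_topics.get(article) in trusted:
                scores[article] += 1        # followed on this topic only
    return sorted(scores, key=scores.get, reverse=True)
```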
This is just a hypothesis without good support, but maybe the problem of echo chambers today is that they are too big. If you create a chamber for 100 people, there may be 20 loud idiots, and the whole chamber will be mostly full of idiocy. But if you split it into smaller chambers for 10 people each, some of those subchambers may actually be interesting.
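A back-of-the-envelope simulation of exactly those made-up numbers (100 members, 20 loud idiots, random chambers of 10) suggests the arithmetic at least works out:

```python
import random

def idiot_free_fraction(trials=10_000, members=100, idiots=20, size=10):
    """Fraction of size-10 chambers that contain zero idiots when
    100 members (20 of them idiots) are split at random."""
    hits = 0
    for _ in range(trials):
        pool = [True] * idiots + [False] * (members - idiots)
        random.shuffle(pool)
        chambers = [pool[i:i + size] for i in range(0, members, size)]
        hits += sum(1 for chamber in chambers if not any(chamber))
    return hits / (trials * (members // size))

print(idiot_free_fraction())   # ~0.095: roughly one chamber in ten
```

So although every small chamber contains two idiots on average, roughly one in ten ends up with none by luck alone, while the single big chamber never does.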
So I decide to expand my bubble by including this specific person, and stuff recommended by them.
Yeah, that’s the kind of thing I’m thinking of.
If you allow ideological self-tagging, you could also let the Wisdom of the Idiots pick their champion. One automatic method is upweighting the people someone responds to.
There are a lot of simple options that would go a long way, particularly since right now you’re lucky to even get a thumbs-up/thumbs-down. The Web 5 Zillion is really pitiful in this regard.
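A hedged sketch of that tagging-plus-champion idea: within each self-declared tag, crown whoever draws the most replies. The reply-counting rule is only my guess at the “upweighting” heuristic, not anything specified above:

```python
from collections import Counter

def pick_champions(tags, replies):
    """tags:    user id -> self-declared ideological tag
    replies: iterable of (replier, replied_to) pairs

    Returns one "champion" per tag: the member who drew the most replies.
    """
    reply_counts = Counter(target for _, target in replies)
    champions = {}
    for user, tag in tags.items():
        if tag not in champions or reply_counts[user] > reply_counts[champions[tag]]:
            champions[tag] = user
    return champions
```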
perhaps by clicking a button: “I feel adventurous today, show me a random article outside of my bubble”
Well, my first association is with the scene in Stephenson’s Snow Crash where Hiro meets a few people from the New South Africa Franchulate #153...X-)
For this solution to work
For this solution to work, we need at least two things to happen on a regular basis:
1. People will click the “show me something written by weirdos” button.
2. People’s reaction will be something other than “Freaks! Degenerates! Spawn of evil! KILL’EM ALL!!!”
maybe the problem of echo chambers today is that they are too big
I think the problem of echo chambers is that they exist. They are not an unmitigated disaster, of course—basically everyone curates their own experience and that’s normal—but the words “echo chamber” imply that the balance has tilted too far towards the “comfortably numb” side and away from the “new and could change something” side.
In defense of echo chambers: Imagine what would happen if we tried to minimize their existence.
The ultimate anti-bubble internet would be like this: It would show you a random page. Then you could click “Next” and it would show you another random page. (Or perhaps the pages would change automatically, to prevent you from spending too much time on pages you agree with and skipping pages you disagree with.) That’s all. There would be no way to send a page to someone, or even to bookmark it for your future self, because even that would give you a tool to increase the fraction of time spent reading pages you agree with.
I am sure there are people who could defend this kind of internet. (Especially if it actually existed, so they would be defending the status quo instead of a crazy thought experiment.) Yeah, you probably couldn’t even read half of the content you would look at, because it would be written in languages you don’t understand… but that’s awesome, because it lets you look out from your cultural bubble and motivates you to learn new languages. Etc.
But I think most of us would agree that such an internet would be a horrible thing. We want to have a choice. We want the opportunity to read a LessWrong debate instead of having to read only random articles (even ones in languages we speak). Okay, wanting does not necessarily mean that something is better, but… I think we would also agree that the ultimate anti-bubble internet would be worse than what we have now.
So it seems like there is a scale going from “no bubbles” to “perfect bubbles”, both extremes seem horrible, and… how do we find the optimal point? (I mean, using some other method than defending status quo.)
I think we would also agree that the ultimate anti-bubble internet would be worse than what we have now.
Well, duh.
So it seems like there is a scale going from “no bubbles” to “perfect bubbles”
Kinda. The “no bubbles” extreme means you are forced to absorb information regardless of your preferences, Clockwork Orange-style if you actually want to go to the extreme. A more common example would be school: you are forced to learn (well, to be exposed to) a set of subjects and no one cares whether you are interested in them or not.
The “perfect bubble” end actually looks like what some people at contemporary US colleges would like to construct (see e.g. this). You are actively protected from anything that might upset you—or make you change your mind.
If you want to find the optimal point on the spectrum between the two extremes, a good starting point would be to specify what you are optimizing for. Optimal according to which criteria?
I think most people would prefer to see only the stuff they already agree with, and maybe once in a while a heresy they can easily defeat. On the other hand, they want unbelievers to be more exposed to alternative opinions.
Even when people try to suggest the same rules for everyone, I suspect they prefer rules that would make their own opinion win. If they believe their opinion would win in a free marketplace of ideas (or at least that it would lose without such a marketplace), they will defend free speech for everyone. On the other hand, if too much freedom gives an advantage to competing memes, they will find an excuse why free speech has to be limited in this specific aspect. Etc.
So I guess that most people would prefer the status quo with slightly better filtering options for themselves, and slightly more exposure to alternative views for others.
On some level I find it hypocritical to complain about those colleges where students are protected against the microaggressions of alternative opinions, when my own desire is to be sheltered from the idiocy of the people around me. Technically speaking, the set of rationalists is so much smaller than the set of politically correct people that even if I don’t desire 100% filtering of the rest of the world and they do, my filter is still stronger than theirs in some mathematical sense. (I cannot realistically imagine a whole college full of rationalists. But if such a thing were possible, I would probably never want to leave that place.)
We’re talking solely about the desirable degree of filtering. No one, including me, argues that people should just not filter their information input—their news, their forums, their discussions, etc.
It’s like I say that we shouldn’t encourage paranoid tendencies, and you’re saying that if you imagine the inverse—everyone forced to trust complete strangers all the time—it would be horrible. Of course it would be horrible. However, this is not a valid counterargument to “we shouldn’t encourage paranoid tendencies”.
Filtering is normal. Filtering is desirable. Everyone filters. But.
Too much of pretty much any normal and desirable activity leads to problems. If the environment changes (due, say, to shifts in technology) so that it becomes very, very easy to overdo that normal and desirable activity, there will be issues. The so-called diseases of civilization are an example: the desire to stuff your face full of superstimulus food is entirely normal and desirable (we tend to diagnose people without this desire with eating disorders), but once superstimulus food becomes easily and cheaply available, well, there are issues.
It may or may not be a problem, depending on how people set their filters. People choose.
If you don’t want to hear from the other team, you don’t. If you do, you filter accordingly. If the people whose judgment you include in your filters want to listen to the other team, you get fed some of the other team, as they filter the other team.
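A sketch of that transitive effect, in the same hypothetical toy model as above: I never follow the other team directly, but whatever survives my friends’ filters reaches me.

```python
def passes_along(user, follows, own_picks, depth=1):
    """Everything `user` feeds to their followers: their own picks plus,
    up to `depth` hops away, the picks of the people *they* follow.
    If someone I trust listens to the other team, a filtered slice of
    the other team reaches me through them.

    follows:   user id -> set of user ids they follow
    own_picks: user id -> set of articles they picked themselves
    """
    feed = set(own_picks.get(user, ()))
    if depth > 0:
        for friend in follows.get(user, ()):
            feed |= passes_along(friend, follows, own_picks, depth - 1)
    return feed
```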
Plenty of people want to live in an echo chamber. Let them self-segregate and get out of the way of the grown-ups who want to talk.
That solves the Ministry of Truth problem, but that doesn’t solve the Set of Echo Chambers problem.
History tells me that they will show up with torches and pitchforks outside my door soon enough...