When I consider how easily human existence could collapse into sterile simplicity, if just a single major value were eliminated, I get very protective of the complexity of human existence.
If people gain increased control of their reality, they might start simplifying it past the point where any sufficiently complex situations remain to make their minds grow and to let them learn new things. People will interact more and more with things that are specifically tailored to their own brains; but if we are only exposed to things we want to be exposed to, the growth potential of our minds becomes very limited. It would basically be an extreme version of Google filtering your search results to show you only what it thinks you'll like, as opposed to what you should see.
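To make that filtering mechanism concrete, here is a minimal, purely illustrative sketch; the scoring rule, tags, and content pool are all invented for the example, not anything Google actually does:

```python
# Toy model of a preference filter: content is scored against what the user
# already likes, and anything below a threshold is simply never shown.

def preference_score(item_tags, liked_tags):
    """Fraction of an item's tags that match the user's existing interests."""
    if not item_tags:
        return 0.0
    return len(item_tags & liked_tags) / len(item_tags)

def filter_feed(items, liked_tags, threshold=0.5):
    """Return only the items the filter predicts the user will like."""
    return [title for title, tags in items
            if preference_score(tags, liked_tags) >= threshold]

# Hypothetical user and content pool.
liked = {"ai", "programming"}
pool = [
    ("new compiler released", {"programming"}),
    ("local election results", {"politics"}),
    ("intro to a field you've never heard of", {"biology", "history"}),
    ("ml paper roundup", {"ai", "programming"}),
]

print(filter_feed(pool, liked))
# Only the items matching existing interests survive; the unfamiliar ones are
# filtered out before the user can decide whether they were worth seeing.
```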
I can imagine some good ways to control reality perception. For example, if someone addicted to smoking wants to quit, it could be helpful to have a reality filter which removes all smoking-related advertising and all related products in shops.
Generally, reality-controlling spam filters could be great. Imagine a reality-AdBlock that removes all advertising from your view, anywhere. (It could replace each advertisement with a gray area, so you are aware that something was there and can consciously decide to look at it.) Of course, that would lead to an arms race with advertisers.
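Just as a sketch of the gray-area idea, and assuming some ad detector already supplies the regions to blank out (the detection itself is the hard part, and the target of the arms race), the replacement step might look something like this:

```python
import numpy as np

def gray_out_ads(frame, ad_boxes, gray_value=128):
    """Replace each detected ad region with a flat gray rectangle.

    frame    -- H x W x 3 uint8 image (one frame of the camera feed)
    ad_boxes -- list of (x, y, width, height) regions flagged as advertising
    """
    filtered = frame.copy()
    for x, y, w, h in ad_boxes:
        filtered[y:y + h, x:x + w, :] = gray_value
    return filtered

# Hypothetical frame and detections; a real system would get these from the
# headset camera and some ad-recognition model.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
detections = [(50, 40, 200, 100), (400, 300, 150, 80)]

clean = gray_out_ads(frame, detections)
# The gray patch tells the wearer "something was removed here", so looking at
# the original ad remains a conscious choice rather than the default.
```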
Now here is an evil thing Google could do: if they get you to wear Google Glass, they gain access to your physical body and can collect information, for example, how much you like what you see. Then they can experiment with small changes in what you see to increase your satisfaction. In other words, very slow wireheading, targeting not your brain but your eyes.
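As a toy model of how gradually that could happen, here is a sketch of such a feedback loop; the satisfaction signal and the single "tint" display parameter are invented stand-ins for whatever the device could actually measure and adjust:

```python
import random

def measured_satisfaction(tint):
    """Stand-in for whatever signal the device can read off the wearer
    (gaze time, pupil dilation, etc.). Invented for this toy model:
    satisfaction happens to peak at a tint the user never chose."""
    ideal = 0.7
    return -abs(tint - ideal)

def slow_wirehead(steps=1000, step_size=0.01):
    """Hill-climb the display parameter: keep any tiny change that makes the
    measured satisfaction go up. No single step is noticeable to the wearer."""
    tint = 0.0
    best = measured_satisfaction(tint)
    for _ in range(steps):
        candidate = tint + random.uniform(-step_size, step_size)
        score = measured_satisfaction(candidate)
        if score > best:
            tint, best = candidate, score
    return tint

print(slow_wirehead())  # drifts toward whatever maximizes the measured signal
```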
A real-world AdBlock would be great; you could also use this type of augmented reality to improve your driving, walk through your city and see it as it looked in a completely different era, or use it for something like the Oculus Rift... the possibilities are limitless.
Companies will act in their own self-interest by giving people what they want, as opposed to what they need. Some of it will be amazingly beneficial, and some of it will be... not in a person's best interest. And much of it will depend on how people use it.
Presumably with increased control of my reality, my ability to learn new things increases, since what I know is an aspect of my reality (and rather an important one).
The difficulty, if I’m understanding correctly, is not that I won’t learn new things, but that I won’t learn uncontrolled new things… that I’ll be able to choose what I will and won’t learn. The growth potential of my mind is limited, then, to what I choose for the growth potential of my mind to be.
Is this optimal? Probably not. But I suspect it’s an improvement over the situation most people are in right now.
This is a community of intellectuals who love learning, and who aren't afraid of controversy. So for us, it wouldn't be a disaster. But I think we're a minority, and a lot of people would only see what they specifically want to see and wouldn't learn very much on a regular basis.
Sure, I agree. But that’s true today, too. Some people choose to live in echo chambers, etc. Heck, some people are raised in echo chambers without ever choosing to live there.
If people not learning very much is a bad thing, then surely the question to be asking is whether more or fewer people will end up not learning very much if we introduce a new factor into the system, right? That is, if giving me more control over what I learn makes me more likely to learn new things, it’s good; if it makes me less likely, it’s bad. (All else being equal, etc.)
What I’m not convinced of is that increasing our control over what we can learn will result in less learning.
That worry seems to depend on underestimating the chilling effect that already exists when it is difficult to learn what we want to learn.
A post from the sequences that jumps to mind is Interpersonal Entanglement; this seems like a step in the wrong direction.
I can imagine some good ways to control reality perception. For example, if an addicted person wants to stop smoking, it could be helpful to have a reality filter which removes all smoking-related advertising, and all related products in shop.
Generally, reality-controlling spam filters could be great. Imagine a reality-AdBlock that removes all advertising from your view, anywhere. (It could replace the advertisement with a gray area, so you are aware that there was something, and you can consciously decide to look at it.) Of course that would lead to an arms race with advertisement sellers.
Now here is an evil thing Google could do: If they make you wear Google glasses, they gain access to your physical body, and can collect some information. For example, how much you like what you see. Then they can experiment with small changes in your vision to increase your satisfaction. In other words, very slow wireheading, not targeting your brain, but your eyes.
A real-world adblock would be great; you could also use this type of augmented reality to improve your driving, walk through your city and see it in a completely different era, use it for something like the Oculus Rift...the possibilities are limitless.
Companies will act in their own self-interest, by giving people what it is they want, as opposed to what they need. Some of it will be amazingly beneficial, and some of it will be...not in a person’s best interest. And it will depend on how people use it.
Presumably with increased control of my reality, my ability to learn new things increases, since what I know is an aspect of my reality (and rather an important one).
The difficulty, if I’m understanding correctly, is not that I won’t learn new things, but that I won’t learn uncontrolled new things… that I’ll be able to choose what I will and won’t learn. The growth potential of my mind is limited, then, to what I choose for the growth potential of my mind to be.
Is this optimal? Probably not. But I suspect it’s an improvement over the situation most people are in right now.
This is a community of intellectuals who love learning, and who aren’t afraid of controversy. So for us, it wouldn’t be a disaster. But I think we’re a minority, and a lot of people will only see what they specifically want to see and won’t learn very much on a regular basis.
Sure, I agree.
But that’s true today, too. Some people choose to live in echo chambers, etc.
Heck, some people are raised in echo chambers without ever choosing to live there.
If people not learning very much is a bad thing, then surely the question to be asking is whether more or fewer people will end up not learning very much if we introduce a new factor into the system, right? That is, if giving me more control over what I learn makes me more likely to learn new things, it’s good; if it makes me less likely, it’s bad. (All else being equal, etc.)
What I’m not convinced of is that increasing our control over what we can learn will result in less learning.
That seems to depend on underestimating the existing chilling effect of it being difficult to learn what we want to learn.