This kind of AI might not cause the same kinds of existential risk typically described on this website, but I certainly wouldn’t call it “safe”. These technologies have a huge potential to reshape our lives. In particular, they can have a huge influence on our perceptions.
All of our search results come filtered through Google’s algorithm, which, when tailored to the individual user, creates a filter bubble. This changes our perception of what’s on the web, and we’re scarcely even conscious that the filter bubble exists. If you don’t know about sampling bias, how can you correct for it?
With the advent of Google Glass, there is a potential for this kind of filter bubble to pervade our entire visual experience. Instead of physical advertisements painted on billboards, we’ll get customized advertisements superimposed on our surroundings. The thought of Google adding things to our visual perception scares me, but not nearly as much as the thought of Google removing things from our perception. I’m sure this will seem quite enticing. That stupid painting that your significant other insists on hanging on the wall? With advanced enough computer vision, Google+ could simply excise it from your perception. What about that ex-girlfriend with whom things ended badly? Now she walks down the streets of your town with her new boyfriend. What if you could change a setting in your Google glasses and have him removed from view? The temptations of such technology are endless. How many people in the world would rather simply block out the unpleasant stimulus than confront the cause of its unpleasantness—their own personal problems?
Google’s continuous user feedback is one of the things that scares me most about its services. Take the search engine, for example. When you’re typing something into the search bar, Google autocompletes—changing the way you construct your query. Its suggestions are often quite good, and they make the system run more smoothly—but they take away aspects of individuality and personal expression. The suggestions change the way you form queries, pushing them towards a common denominator, slowly sucking out the last drops of originality.
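To make the mechanism concrete, here is a toy sketch of popularity-ranked autocomplete; the query log and the ranking rule are invented for illustration, not anything Google actually runs:

```python
from collections import Counter

# Toy query log standing in for "what other users typed" (invented data).
query_log = [
    "weather today", "weather today", "weather today",
    "weather tomorrow",
    "weather patterns in my dreams last night",  # the rare, original query
]

def autocomplete(prefix, log, k=3):
    """Rank completions purely by how often other users have typed them."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(autocomplete("weather", query_log))
# The popular phrasing always ranks first, so users who accept suggestions
# get funneled toward the same most common query.
```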
And sure, this matters little in search engines, but can you see how readily it could be applied to things like automatic writing helpers? Imagine you’re a high school student writing an essay. An online tool provides you with suggestions for better wordings of your sentences, based on other users’ preferences. It will suggest similar wordings to everyone, and suddenly, all essays will become that much more canned. (Certainly, such a tool could add a bit of randomness to the rewording choice, but one has to be careful—introduce too much randomness and the quality decreases rapidly.)
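And a toy sketch of that trade-off, assuming a hypothetical rewording tool that samples from popularity-weighted suggestions with a temperature knob (the candidates and scores are made up):

```python
import random
from collections import Counter

# Hypothetical rewording suggestions with popularity scores from other users
# (the candidates and the numbers are invented for illustration).
candidates = {
    "This clearly shows that": 0.70,
    "This demonstrates that": 0.25,
    "This hints, rather slyly, that": 0.05,
}

def suggest(cands, temperature):
    """Sample one rewording; temperature controls how much randomness is injected."""
    weights = [p ** (1.0 / temperature) for p in cands.values()]
    return random.choices(list(cands), weights=weights)[0]

for t in (0.2, 1.0, 5.0):
    tally = Counter(suggest(candidates, t) for _ in range(1000))
    print(t, tally.most_common())
# Near t = 0 everyone gets the single most popular wording (canned essays);
# crank t up and the odd, low-quality wordings surface about as often as the
# good ones.
```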
I guess I’m just afraid that autocomplete systems will change the way people speak, encouraging everyone to speak in a very standardized way, the way which least confuses the autocomplete system or the natural language understanding system. As computers become more omnipresent, people might switch to this way of speaking all the time, to make it easier for everyone’s mobile devices to understand what they’re saying. Changing the way we speak changes the way we think; what will this do to our thought processes, if original wording is discouraged because it’s hard for the computer to understand?
I do realize that socializing with other humans already exerts this kind of pressure. You have to speak understandably, and this changes what words you’ll use. I find myself speaking differently with my NLP grad school colleagues than I do with non-CS friends, for instance. It’s automatic. In a CS crowd, I’ll use CS metaphors; in a non-CS crowd I won’t. So I’m not opposed to changing the way I speak based on the context. I’m just specifically worried about the sort of speaking patterns NLP systems will force us into. I’m afraid they’ll require us to (1) speak more simply (easier to process), (2) speak less creatively (because the algorithm has only been trained on a limited set of expressions), and (3) speak the way the average user speaks (because that’s what the system has gotten the most data on, and can respond best to).
Ok, I’m done ranting now. =) I realize this is probably not what you were asking about in the post. I just felt the need to bring this stuff up, because I don’t think LW is as concerned about these things as we should be. People obsess constantly about existential risk and threats to our way of life, but often seem quite gung-ho about new technological advances like Google Glass and self-driving cars.
Its suggestions are often quite good, and they make the system run more smoothly
…and occasionally, they instead filter my perception outright. Altering my query because you couldn’t match a term, and not putting that fact in glaring huge red print, leads me to think there are actual results here, rather than a selection of semi-irrelevant ones. Automatically changing my search terms has a similar effect—no, I don’t care about ‘pick’, I’m searching for ‘gpick’!
This is worse than mere suggestions ;)
I can notice these things, but I also wonder whether the Google Glass users would have their availability-heuristic become even more skewed by these kinds of misleading behaviours. I wonder whether mine is.
How much it’s true that changing the way we speak changes the way we think is up for quite a bit of debate. Sapir-Whorf hypothesis and whatnot.
A post from the sequences that jumps to mind is Interpersonal Entanglement:
When I consider how easily human existence could collapse into sterile simplicity, if just a single major value were eliminated, I get very protective of the complexity of human existence.
If people gain increased control of their reality, they might start simplifying it past the point where there are any sufficiently complex situations left to allow their minds to grow and learn new things. People will start interacting more and more with things that are specifically tailored to their own brains; but if we’re only exposed to things we want to be exposed to, the growth potential of our minds becomes very limited. Basically an extreme version of Google filtering your search results to only show you what it thinks you’ll like, as opposed to what you should see.
Seems like a step in the wrong direction.
I can imagine some good ways to control reality perception. For example, if an addicted person wants to stop smoking, it could be helpful to have a reality filter which removes all smoking-related advertising, and all related products in shops.
Generally, reality-controlling spam filters could be great. Imagine a reality-AdBlock that removes all advertising from your view, anywhere. (It could replace the advertisement with a gray area, so you are aware that there was something, and you can consciously decide to look at it.) Of course, that would lead to an arms race with advertisers.
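A rough sketch of the gray-area behaviour, assuming some hypothetical upstream detector already supplies bounding boxes for the ads in the current camera frame:

```python
import numpy as np

# Minimal sketch of the "gray out, don't erase" idea. It assumes a hypothetical
# detector has already found bounding boxes of ad regions in the current frame.
def gray_out_ads(frame, ad_boxes):
    """Replace each detected ad region with flat gray, so the wearer still sees
    that *something* is there and can choose to look at the real thing."""
    out = frame.copy()
    for (x1, y1, x2, y2) in ad_boxes:
        out[y1:y2, x1:x2] = 128  # mid-gray block instead of the ad
    return out

# Fake 480x640 RGB frame with one "billboard" detected at (100, 50)-(300, 200):
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cleaned = gray_out_ads(frame, [(100, 50, 300, 200)])
```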
Now here is an evil thing Google could do: If they make you wear Google glasses, they gain access to your physical body, and can collect some information. For example, how much you like what you see. Then they can experiment with small changes in your vision to increase your satisfaction. In other words, very slow wireheading, not targeting your brain, but your eyes.
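That scheme is essentially a bandit algorithm run on your eyes. A toy sketch, with a made-up satisfaction signal and made-up rendering tweaks (none of this is a real Google Glass API):

```python
import random

# Illustrative sketch only: a bandit loop that nudges rendering toward whatever
# a (hypothetical) satisfaction signal rewards.
TWEAKS = ["no_change", "warmer_colors", "slightly_brighter", "soften_edges"]

def satisfaction_signal(tweak):
    """Stand-in for some physiological measurement (pupil dilation, dwell time...)."""
    hidden_preference = {"no_change": 0.4, "warmer_colors": 0.6,
                         "slightly_brighter": 0.5, "soften_edges": 0.7}
    return hidden_preference[tweak] + random.gauss(0, 0.1)

def slow_wirehead(steps=1000, epsilon=0.1):
    """Epsilon-greedy: mostly show the tweak with the best average satisfaction
    so far, occasionally experiment with a random one."""
    totals = {t: 0.0 for t in TWEAKS}
    counts = {t: 0 for t in TWEAKS}
    for _ in range(steps):
        if random.random() < epsilon or not any(counts.values()):
            tweak = random.choice(TWEAKS)
        else:
            tweak = max(TWEAKS, key=lambda t: totals[t] / counts[t] if counts[t] else 0.0)
        counts[tweak] += 1
        totals[tweak] += satisfaction_signal(tweak)
    return counts

print(slow_wirehead())  # the most satisfying tweak ends up shown almost all the time
```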
A real-world adblock would be great; you could also use this type of augmented reality to improve your driving, walk through your city and see it in a completely different era, use it for something like the Oculus Rift...the possibilities are limitless.
Companies will act in their own self-interest, by giving people what it is they want, as opposed to what they need. Some of it will be amazingly beneficial, and some of it will be...not in a person’s best interest. And it will depend on how people use it.
Presumably with increased control of my reality, my ability to learn new things increases, since what I know is an aspect of my reality (and rather an important one).
The difficulty, if I’m understanding correctly, is not that I won’t learn new things, but that I won’t learn uncontrolled new things… that I’ll be able to choose what I will and won’t learn. The growth potential of my mind is limited, then, to what I choose for the growth potential of my mind to be.
Is this optimal? Probably not. But I suspect it’s an improvement over the situation most people are in right now.
This is a community of intellectuals who love learning, and who aren’t afraid of controversy. So for us, it wouldn’t be a disaster. But I think we’re a minority, and a lot of people will only see what they specifically want to see and won’t learn very much on a regular basis.
Sure, I agree. But that’s true today, too. Some people choose to live in echo chambers, etc. Heck, some people are raised in echo chambers without ever choosing to live there.
If people not learning very much is a bad thing, then surely the question to be asking is whether more or fewer people will end up not learning very much if we introduce a new factor into the system, right? That is, if giving me more control over what I learn makes me more likely to learn new things, it’s good; if it makes me less likely, it’s bad. (All else being equal, etc.)
What I’m not convinced of is that increasing our control over what we can learn will result in less learning.
That seems to depend on underestimating the existing chilling effect of it being difficult to learn what we want to learn.
What about that ex-girlfriend with whom things ended badly? Now she walks down the streets of your town with her new boyfriend. What if you could change a setting in your Google glasses and have him removed from view?
I think most people don’t like the idea of shutting down their own perception in this way. Having people go invisible to you feels like losing control over your reality.
I find myself speaking differently with my NLP grad school colleagues than I do with non-CS friends, for instance. It’s automatic.
This means that humans are quite adaptable and can speak differently to the computer than they speak to their fellow humans.
I mean, do parents speak with their 3-year-old toddler the same way they speak on the job? The computer is just an additional audience.