Is that really just it? Is there no special sanity to add, but only ordinary madness to take away?
I think this is the primary factor. I’ve got a pretty amusing story about this.
Last week I met a relatively distant relative, a 15-year-old guy who’s in a sports-oriented high school. He plays football, doesn’t have much of a scientific, literary, or intellectual background, and is quite average and normal in most conceivable ways. Some TV program on Discovery was about “robots”, and in the spontaneous 15-minute conversation that unfolded from it I managed to explain to him the core problems of FAI, without him getting stuck at any point in my arguments. I’m fairly sure that he had no previous knowledge of the subject.
First I made a remark in connection with the TV program’s poetic question about what happens if robots become able to do most human work; I said that if robots take the low-wage jobs, humans would eventually get paid more on average, and the problem only arises when robots can do everything humans can and somehow end up actually doing all those things.
Then he asked if I think they’ll get that smart, and I answered that it’s quite possible in this century. I explained recursive self-improvement in two sentences, to illustrate why they could potentially get very, very smart in a small amount of time. I talked about the technology that would probably allow AIs to act upon the world with great efficiency and power. Next, he said something like “that’s good, wouldn’t AIs be a big help, like, they’ll invent new medicines?” At this point I was pretty amused. I assured him that AIs indeed have great potential. I then talked very briefly about the most basic AI topics, providing the usual illustrations like Hollywood AIs, smiley-tiled solar systems, and foolish programmers overlooking the complexity of value. I delineated CEV in a simplified “redux” manner, focusing on the idea that we should optimally just extract all relevant information from human brains by scanning them, to make sure nothing we care about is left out. “That should be a huge technical problem, to scan that many brains,” he said.
And now:
“But if the AI gets so potent, wouldn’t it be a problem anyway, even if it’s perfectly friendly, that it can do everything much better than humans, and we’ll get bored?”
“Hahh, not at all. If you think that getting all bored and unneeded is bad, then that is a real preference inside your head. It’ll be taken into account by the AI, and it will make sure not to pamper you excessively.”
“Ah, that sounds pretty reasonable”.
Now, all of this happened in the course of roughly 15 minutes. No absurdity heuristic, no getting lost, no objections; he just took everything I said at face value, assuming that I was more knowledgeable on these matters, and overall I was convinced that nothing I explained was particularly hard to grasp. He asked relevant questions and was very interested in what I said.
Some thoughts on why this was possible:
The guy belongs to a certain social stratum in Hungary, namely those who newly entered the middle class through the free entrepreneurship that became possible after the country switched to capitalism. At first, the socialist regime repressed religion and just about every human right, then eased up, softened, and became what’s known as the “happiest barrack”. People became unconcerned with politics (which they could not influence) and religion (which was thought of as a highly personal matter that should not be taken into public); they just focused on their own wealth and well-being. I’m convinced that the guy’s parents care zero about any religion, the absence of religion, doctrine, ideology, or whatever. They just work to make a living and don’t think about lofty matters, leaving their son ideologically perfectly intact. Just like my own parents.
Actually, AI is not intrinsically abstract or hard to digest; my interlocutor knew what an AI is, even if from movies, and probably watched just enough Discovery to have a sketchy picture of future technologies. The mind design space argument is not that hard (he knew about evolution because it’s taught in school; he immediately agreed that AIs can be much smarter than humans because if we wait a million years, maybe humans can also become much smarter, so it’s technically possible), and the smiley-tiled solar system is an entertaining and effective illustration of the morality problem. I think that Eliezer has put extreme amounts of effort into maximizing the chance that his AI ideas will get transmitted even to people who are primed or biased against AI or at risk of motivated skepticism. So far, I’ve had great success using his parables, analogies, and ways of explanation.
My perceived status as an “intellectual” made him accept my explanations at face value. He’s a football player in a smallish countryside city and I’m a serious college student in the capital city (it’s good he doesn’t know how lousy a student I am). Still, I do not think this was a significant factor. He probably does not talk about AI among football players, but being male he has some basic interest in futuristic or gadgety subjects.
In the end, it probably all comes down to lacking some specific kinds of craziness. Cryonics seemed normal at that convention Eliezer attended, and I’m sure every idea that is epistemically and morally correct can in principle become a so-called normal thing. Besides this guy, I’ve also had full success lecturing a 17-year-old metal drummer on AI and SIAI—he was situated socioeconomically very similarly to the first guy, and he had no previous knowledge either.
Surprise level went down from gi-normous to merely moderate at this point.
This is a great post, and I’d be interested in seeing you write out a fuller version of what you said to your relative as a top level post, something like “Friendly AI and the Singularity explained for adolescents.”
Also, do you speak English as a second language? If so, I am especially impressed with your writing ability.
On a tangent, am I the only one who doesn’t like the usage of “boy”, “girl”, or “child” to describe adolescents? It seems demeaning, because adolescents are not biologically children; they’ve just been defined to be children by the state. I suppose I’m never going to overturn that usage, but I’d like to know if there is some reason why I shouldn’t be bothered by the common usage of these words for children.
Yes, English is a second language for me, and I mostly learned it by reading things on the Internet.
Excuse me for the boy/guy confusion; I did not have any particular intent behind the wording. It was an unconscious application of my native language’s tendency to refer to males under 18 with the equivalent of “boy”. As I’m mostly a lurker, I have much less writing than reading experience; I usually end up making dozens of spelling and phrasing corrections on longer posts, but some weirdly used words or mistakes are guaranteed to remain in the text.
The “boy” usage is correct in English as well; I just don’t like that usage, but I’m out of the mainstream.
You’re not. I find it demeaning and more than a little confusing.
“Child” is probably never OK for people older than 12-13, but “girl”, “guy”, and occasionally “boy” are usually used by teens, and often by 20-somethings to describe themselves or each other. (“Boy” usually by females, used with a sexual connotation.)
I’m aware of it, and am actually still getting into the habit of referring to women about my age or younger as women rather than girls. I still trip over it when other people use the words that way, though—I automatically think of 8-year-olds if it’s not very clear who’s being referred to.
Right. “Girl” really has at least two distinct senses, one for children and one for peers/juniors of many ages. “Guy” isn’t used in the first sense, and the second sense of “boy” is more restricted. The first sense of “boy”/”girl” is the most salient one, and thus the default absent further context. I don’t think the first sense needs to poison the second one. But its use in the parent comment of this discussion wasn’t all that innocent. (I’ve been attacked before, by a rather extreme feminist, for using it innocently.)
But wouldn’t the knowledge that the AI could potentially do your work be psychologically harmful?
When you play an engaging computer game, does it detract from your experience knowing that all the tasks you are performing are only there for your pleasure, and that the developers could have easily just made you click an “I Win” button without requiring you to do anything else?
I suspect that status effects might be important here. When we play a video game, we choose to do it voluntarily, and so the developers are providing us a service. But if the universe is controlled by an AI, and we have no choice but to play games that it provides us, then it would feel more like being a pet.
The AI could also try to take that into account, I suppose, but I’m not sure what it could do to alleviate the problem without lying to us.
If you think of FAI as Physical Laws 2.0, this particular worry goes away (for me, at least). Everything you do is real within FAI, and free will works the same way it does in any other deterministic physics: only you determine your decisions, within the system.
It’s not quite the same, because when the FAI decided what Physical Laws 2.0 ought to be, it must have made a prediction of what my decisions would be under the laws that it considered. So when I make my decisions, I’m really making decisions for two agents: the real me, and the one in FAI’s prediction process. For example, if Physical Laws 2.0 appears to allow me to murder someone, it must be that the FAI predicted that I wouldn’t murder anyone, and if I did decide to murder someone, the likely logical consequence of that decision is that the FAI would have picked a different set of Physical Laws 2.0.
It seems to me that free will works rather differently… sort of like you’re in a Newcomb’s Problem that never ends.
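To make that “deciding for two agents” structure concrete, here is a minimal toy sketch in Python. Everything in it—the rule-set dictionaries, `my_policy`, `fai_choose_rules`—is a hypothetical illustration of the point above, not a claim about how an actual FAI would work:

```python
# Toy illustration: the FAI runs your decision procedure inside its prediction
# step when choosing Physical Laws 2.0, and the same procedure later runs
# "for real" under whichever laws it chose. All names here are hypothetical.

def my_policy(rules):
    """Stand-in for a human decision procedure under a given rule-set."""
    # Swap "refrain" for "murder" here and fai_choose_rules below would pick
    # the restrictive laws instead -- the Newcomb-like point of the comment.
    return "refrain"

def fai_choose_rules(candidate_rule_sets, predict=my_policy):
    """Pick the first candidate rule-set whose *predicted* outcome is acceptable."""
    for rules in candidate_rule_sets:
        if predict(rules) == "refrain":  # prediction = simulating your policy
            return rules
    # Fall back to laws under which the bad outcome is simply impossible.
    return {"murder_appears_possible": False}

candidates = [
    {"murder_appears_possible": True},   # permissive Physical Laws 2.0
    {"murder_appears_possible": False},  # restrictive Physical Laws 2.0
]

chosen = fai_choose_rules(candidates)
# The same my_policy that ran inside the prediction now runs in the real world:
print(chosen, my_policy(chosen))
```

The Newcomb-like flavor is that the output of `my_policy` determines both what happens under the chosen laws and which laws got chosen in the first place.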
It just means that you were mistaken and PL2.0 doesn’t actually allow you to murder. It’s physically (rather, magically, since laws are no longer simple) impossible. This event has been prohibited.
I would expect that an FAI would not force us to play games, but would make games available for us to choose to play.
It’s not that an FAI would force us to play games, but rather there’s nothing else to do. All the real problems would have been solved already.
That’s not necessarily true. We might still have to build a sturdy bridge to cross a river, it’s just that nobody dies if we mess up.
Likewise, if one’s mind is too advanced for bridge building to not be boring, then there will be other more complex organizations we would want, which the FAI is under no obligation to hand us.
I think we can have a huge set of real problems to solve, even after FAI solves all the needed ones.
How is bridge-building not a game when the FAI could just flick a switch and transport you across the river in any number of ways that are much more efficient? When you’re building a bridge in that situation, you’re not solving the problem of crossing a river, you’re just using up resources in order to not be bored.
Because it refuses to do so?
If you’re 16 and your parents refuse to buy something for you (that they could afford without too much trouble) and instead make you go out and earn the money to buy it yourself, was solving the problem of how to get the money “just a game”?
Yes, if the parents will always be there to take care of you.
We can wirehead children now.
We want them to be more than that.
The only reason we want that is that civilization would collapse without anyone to bear it. If FAI bears it, there is no pressure on anyone.
What does it mean for FAI to bear civilization? It can give us bridges, but if I’m going to spend time with you, you’d better be socialized. A life of obedient catgirls would harm your ability to deal with real humans (or posthumans)
And ignoring that, I don’t think that we want to be more than we are just in order to get stuff done.
Both of these are things we do to achieve complex values. Some of the things we want are things which can’t be handed to us, and some of those are things which we can’t achieve if everything which can be handed to us is handed to us.
The companions FAI creates for you don’t have to be obedient, nor catgirls. Instead, they can be companions that far exceed the value you can get from socializing with fellow humans or posthumans.
Once there is FAI, the best companion for anyone is FAI.
The only reason you want “complex values” is because your environment has inculcated in you that you want them. The reason your environment has inculcated this in you is because such inculcation is necessary in order to have people who will uphold civilization. Once there is FAI, such inculcation is no longer necessary, and is in fact counter-productive.
How rude can I be to my FAI companion before it starts crying in the corner? How rude will I become if it doesn’t? Why didn’t it just build the bridge the first time I asked? Then I wouldn’t have to yell. Does she mind that I call her ‘it’?
Proper companions don’t always give you what you want.
Also, even though FAI could create perfectly balanced agents, and even if creating said agents wasn’t in itself morally reprehensible, I think there is value in interacting with other ‘real’ humans.
Edit: Ok, this is a big deal:
The fact that a value I have is something evolution gave me is not a reason to abandon that value. Pleasure is also something I want because evolution made me want it.
Right now, I want those complex values, and I’m not going to press a button to self modify to stop wanting them
I don’t see why creating perfectly balanced agents would be morally reprehensible—nor why, given such agents, there would be value in interacting with other humans—necessarily less suited to each other’s progress than the agents would be.
It may well be considered morally reprehensible to communicate with other humans, because it may undermine and slow down the personal development that each human would otherwise benefit from in the company of custom-tailored companions, designed perfectly for one’s individual progress.
It may well be morally better for the FAI to make you think that you’re communicating with a ‘real’ human, when in fact you are communicating with an agent specifically designed to provide you with that learning experience.
If these agents are people in a morally significant way, then their needs must be taken into account. FAI can’t just create slave beings. It’s very difficult for me at this point to say whether it’s possible for the FAI to create a being that perfectly meets some human needs, and in turn has all its own needs met just as perfectly. Every new person it creates just adds more complexity to the moral balance. It might be doable, but it might not, and it’s a lot more work-thought-energy to do it that way.
If they are not people, if they are some kind of puppet zombie robot, then we will have billions of humans falling in love with puppet zombie robots. Because that is their only option. And having puppet zombie robot children. Maybe that’s what FAI will conclude is best, but I doubt it.
I actually think that all our current ways of thinking, feeling, and going about life would be as antiquated, post-FAI, as a horse buggy on an interstate highway. Once an AI can reforge us into more exalted creatures than we currently are, I’m not sure why anyone would want to continue living (falling in love? having children?) the old-fashioned way. It would be as antiquated as the lifestyle of the Amish.
Some people want to be Amish. It seems like your statement could just as well be “I’m not sure why anyone would want to be Amish” and I’m not sure that communicates anything useful.
On the one hand, as long as there are sufficient resources for some people to engage in Amish-like living while not depriving everyone else, that could be okay.
On the other hand, if the AI determines that a different way of being is much preferable to insistence on human traditions, then it has its infinite intelligence at its disposal to convince people to go along for the ride.
If the AI is barred both from modifying people or from using its intelligence to convince them, then still, at one point, resources become scarce, and for the benefit of everyone, the resource consumption of the refuseniks has to be optimized. I can envision a (to them) seamless transition where they continue living an Amish-like lifestyle in a simulation.
What would we want to be exalted for? So we can more completely appreciate our boredom?
It doesn’t make sense to me that we’d get some arbitrary jump in mindpower, and then start an optimized advancement. (we might get some immediate patches, but there will be reasons for them.) Why not pump us all the way to multi-galaxy-brains? Then the growth issues are moot.
Either way, if we’re abandoning our complex evolved values, then we don’t need to be very complex beings at all. If we don’t, then I don’t expect that even our posthuman values will be satisfied by puppet zombie companions.
Is there some reason to believe our current degree of complexity is optimal?
Why would we want to be reforged as something that suffers boredom, when we can be reforged as something that never experiences a negative feeling at all? Or experiences them just for variety, if that is what one would prefer?
If complexity is such a plus, then why stop at what we are now? Why not make ourselves more complex? Right now we chase after air, water, food, shelter, love, social status, why not make things more fun by making us all desire paperclips, too? That would be more complex. Everything we already do now, but now with paperclips! Sounds fun? :)
Possibly relevant: I already desire paperclips.
I don’t, at all. Also you’re conflating our complexity with the complexity of our values.
I think that our growth will best start from a point relatively close to where we are now in terms of intelligence. We should grow into jupiter brains, but that should be by learning.
I’m not clear on what it is you want to be reforged as, or why. By what measure is post-FAI Dennis better than now-Dennis? By what measure is it still ‘Dennis’, and why were those features retained?
The complexity of human value is not good by virtue of being complex. Rather, these are the things we value; there happen to be a lot of them, and they are complexly interrelated. Chopping away huge chunks of them and focusing on pleasure is probably a bad thing, which we would not want.
It may be the case that the FAI will extrapolate much more complex values, or much simpler values, but our current values must be the starting point and our current values are complex.
This is an extreme statement about everyone’s preference, not even your own preference or your own belief about your own preference. One shouldn’t jump that far.
It can’t actually do that, because it’s not what its preference tells it to do. The same way you can’t jump out of the window given you are not suicidal.
By that reasoning, World of Warcraft is not a game because the admins can’t make me level 80 on day 1, because that’s not what their preferences tell them to do… Or am I missing your point?
I’m attacking a specific argument that “FAI could just flick a switch”. Whether it moves your conclusion about the described situation being a game depends on how genuine your argument for it being a game was and on how much you accept my counter-argument.
Could one of you précis the disagreement in a little more detail and with background? When you and Wei Dai disagree, I’d really like to understand the discussion better, but the discussion it sprang out of doesn’t seem all that enlightening—thanks!
I originally said that post-FAI, we’d have no real problems to solve, so everything we do would be like playing games, and we’d take a status hit because of that. Nesov allegedly found a way to recast the situation so that we can avoid taking the status hit, but I remain unconvinced. I admit this is one of our more trivial discussions. :)
I originally didn’t bother to do so explicitly, and only wrote this reply, which seems not to have been understood, but in light of Eliezer’s post about the flow of the argument, I’ll recast the structure I see in the last few comments:
Wei: Bridge-building is a game, because FAI could just flick a switch. (Y leads to X having property S; Y=”could flick a switch”, X=”FAI’s world”, S=”is a game”)
Vlad: No it couldn’t, its preference (for us having to make an effort) makes it impossible for that to happen. (Y doesn’t hold for X)
Wei: But there are games where players don’t get free charity either. (Other games Z have property S without needing Y)
Vlad: I’m merely saying that Y doesn’t hold, so if Y held any weight in the argument that “Y leads to X having property S”, then having established not-Y, I’ve weakened the support for X having property S, and at least refuted the particular argument for X having property S, even if I haven’t convincingly argued that X doesn’t have property S overall.
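In bare propositional terms (this is just standard logic, not anything specific to the FAI discussion), the last move is that establishing $\lnot Y$ undercuts the support the premise $Y$ gave to $S(X)$, without thereby establishing $\lnot S(X)$:

$$\big(Y \rightarrow S(X)\big),\ \lnot Y \;\not\vdash\; \lnot S(X).$$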
When I wrote “Bridge-building is a game, because FAI could just flick a switch” the intended meaning of “could” was “could if it wanted to”. When I cited WoW later, I was trying to point out that your interpretation of “could” as “could given its actual preferences” can’t be what I intended because it would rule out WoW as a game. I guess I failed to get my point across, and then thought the argument was too inconsequential to continue. But now that you’re using it as an example, I want to clear up what happened.
Is this a disagreement that is more about the meaning of words than anything else? I think you and Nesov are disagreeing about the meanings of “game” and “real problems”, or maybe “problems”. It would help if both of you defined those terms.
In the short term, I think you are correct. However, in the long term, I’m hoping that the FAI will find a non-disastrous way for us to become superintelligent ourselves, and therefore again be able to participate in solving real problems.
When I build a bridge in a game, I get an in-game reward. I don’t get easier transport to anywhere. If I neglect to build the bridge or play the game at all, I still get to use all the bridges otherwise available to me. ‘Real’ bridges are at the top level of reality available to me. Even the simulation hypothesis does not make these bridges a game.
Why do I want to cross the bridge? To not be bored, to find my love, or to meet some other human value. The AI could do that for me too, and cut out the need for transport. If we follow that logic even a short way, it becomes obvious that we don’t want the AI doing certain things for us. If there is danger of us being harmed because the FAI could help but won’t, it need merely help a little more, getting closer to those things we want to do ourselves. If we’re in danger of being harmed by our own laziness, it need only back off. (It might do this at the level of the entire species, for all time, so individuals might be bored or angry or not cross rivers as soon as they would like, but it might also optimize for everybody moment to moment.)
If there are things we couldn’t stand to have a machine do, and couldn’t stand for it to not help us with, I think those would be incoherent volitions.
One way I imagine that would work for me is if the AI explained with sufficient persuasion that there simply isn’t anything more meaningful for me to do than to play games. If there actually is something more meaningful for people to do, then the AI should probably let people do that.
An AI could persuade you to become a kangaroo—this is a broken criterion for decision-making.
I am skeptical that rationality and exponentially greater-than-human intelligence actually confer this power.
It doesn’t matter if it does or not; the fact that you can conceive of situations where persuadability would fail as a criterion immediately means it fails.
Well, that was the big controversy over the AI Box experiments, so no need to rehash all that here.
This is a category error. Meaningfulness is in your mind and in intersubjective constructions, not in the objective world. There is no fact of the matter for the AI to explain to you.
o shit
I had essentially this conversation with my sister-in-law’s boyfriend (Canadian art student in his early twenties) just about four weeks ago. Didn’t get to the boredom question, but did talk a bit about cryonics. Took about 25 minutes.
There seem to be two ways for the AI thing to click. Some people click and go “Oh yeah, that makes sense,” and then if you ask them about it they’ll tell you they believe it’s a problem, but they won’t change their behavior very much otherwise. The other people click and go, “0_0 Wtf am I doing with my life???” and then they move to the Bay Area or New York and join the other people devoting their every resource to preventing paperclip maximizers and the like. Which type were your people, and what do you think causes the difference?