It’s not that an FAI would force us to play games, but rather there’s nothing else to do. All the real problems would have been solved already.
That’s not necessarily true. We might still have to build a sturdy bridge to cross a river, it’s just that nobody dies if we mess up.
Likewise, if one’s mind is too advanced for bridge-building not to be boring, then there will be other, more complex organizations we would want, which the FAI is under no obligation to hand us.
I think we can have a huge set of real problems to solve, even after FAI solves all the needed ones.
How is bridge-building not a game when the FAI could just flick a switch and transport you across the river in any number of ways that are much more efficient? When you’re building a bridge in that situation, you’re not solving the problem of crossing a river, you’re just using up resources in order to not be bored.
Because it refuses to do so?
If you’re 16 and your parents refuse to buy something for you (that they could afford without too much trouble) and instead make you go out and earn the money to buy it yourself, was solving the problem of how to get the money “just a game”?
Yes, if the parents will always be there to take care of you.
We can wirehead children now.
We want them to be more than that.
The only reason we want that is that civilization would collapse without anyone to bear it. If FAI bears it, there is no pressure on anyone.
What does it mean for FAI to bear civilization? It can give us bridges, but if I’m going to spend time with you, you’d better be socialized. A life of obedient catgirls would harm your ability to deal with real humans (or posthumans).
And ignoring that, I don’t think that we want to be more than we are just in order to get stuff done.
Both of these are things we do to achieve complex values. Some of the things we want are things which can’t be handed to us, and some of those are things which we can’t achieve if everything which can be handed to us is handed to us.
The companions FAI creates for you don’t have to be obedient, nor catgirls. Instead, they can be companions that far exceed the value you can get from socializing with fellow humans or posthumans.
Once there is FAI, the best companion for anyone is FAI.
The only reason you want “complex values” is because your environment has inculcated in you that you want them. The reason your environment has inculcated this in you is because such inculcation is necessary in order to have people who will uphold civilization. Once there is FAI, such inculcation is no longer necessary, and is in fact counter-productive.
How rude can I be to my FAI companion before it starts crying in the corner? How rude will I become if it doesn’t? Why didn’t it just build the bridge the first time I asked? Then I wouldn’t have to yell. Does she mind that I call her ‘it’?
Proper companions don’t always give you what you want.
Also, even though FAI could create perfectly balanced agents, and even if creating said agents wasn’t in itself morally reprehensible, I think there is value in interacting with other ‘real’ humans.
Edit: Ok, this is a big deal:
The fact that a value I have is something evolution gave me is not a reason to abandon that value. Pleasure is also something I want because evolution made me want it.
Right now, I want those complex values, and I’m not going to press a button to self-modify to stop wanting them.
I don’t see why creating perfectly balanced agents would be morally reprehensible, nor why, given such agents, there would be value in interacting with other humans, who would necessarily be less suited to each other’s progress than the agents would be.
It may well be considered morally reprehensible to communicate with other humans, because it may undermine and slow down the personal development that each human would otherwise benefit from in the company of custom-tailored companions, designed perfectly for one’s individual progress.
It may well be morally better for the FAI to make you think that you’re communicating with a ‘real’ human, when in fact you are communicating with an agent specifically designed to provide you with that learning experience.
If these agents are people in a morally significant way, then their needs must be taken into account. FAI can’t just create slave beings. It’s very difficult for me at this point to say whether it’s possible for the FAI to create a being that perfectly meets some human needs, and in turn has all its own needs met just as perfectly. Every new person it creates just adds more complexity to the moral balance. It might be doable, but it might not, and it’s a lot more work-thought-energy to do it that way.
If they are not people, if they are some kind of puppet zombie robot, then we will have billions of humans falling in love with puppet zombie robots. Because that is their only option. And having puppet zombie robot children. Maybe that’s what FAI will conclude is best, but I doubt it.
I actually think that all our current ways of thinking, feeling and going about life would be as antiquated, post-FAI, as a horse buggy on an interstate highway. Once an AI can reforge us into more exalted creatures than we currently are, I’m not sure why anyone would want to continue living (falling in love? having children?) the old-fashioned way. It would be as antiquated as the lifestyle of the Amish.
Some people want to be Amish. It seems like your statement could just as well be “I’m not sure why anyone would want to be Amish” and I’m not sure that communicates anything useful.
On the one hand, as long as there are sufficient resources for some people to engage in Amish-like living while not depriving everyone else, that could be okay.
On the other hand, if the AI determines that a different way of being is much preferable to insistence on human traditions, then it has its infinite intelligence at its disposal to convince people to go along for the ride.
If the AI is barred both from modifying people and from using its intelligence to convince them, then still, at some point, resources become scarce, and for the benefit of everyone, the resource consumption of the refuseniks has to be optimized. I can envision a (to them) seamless transition where they continue living an Amish-like lifestyle in a simulation.
What would we want to be exalted for? So we can more completely appreciate our boredom?
It doesn’t make sense to me that we’d get some arbitrary jump in mindpower, and then start an optimized advancement. (We might get some immediate patches, but there will be reasons for them.) Why not pump us all the way to multi-galaxy-brains? Then the growth issues are moot.
Either way, if we’re abandoning our complex evolved values, then we don’t need to be very complex beings at all. If we don’t, then I don’t expect that even our posthuman values will be satisfied by puppet zombie companions.
Is there some reason to believe our current degree of complexity is optimal?
Why would we want to be reforged as something that suffers boredom, when we can be reforged as something that never experiences a negative feeling at all? Or experiences them just for variety, if that is what one would prefer?
If complexity is such a plus, then why stop at what we are now? Why not make ourselves more complex? Right now we chase after air, water, food, shelter, love, social status, why not make things more fun by making us all desire paperclips, too? That would be more complex. Everything we already do now, but now with paperclips! Sounds fun? :)
Possibly relevant: I already desire paperclips.
I don’t, at all. Also you’re conflating our complexity with the complexity of our values.
I think that our growth will best start from a point relatively close to where we are now in terms of intelligence. We should grow into Jupiter brains, but that should be by learning.
I’m not clear on what it is you want to be reforged as, or why. By what measure is post-FAI-Dennis better than now-Dennis? By what measure is it still ‘Dennis’, and why were those features retained?
The complexity of human value is not good because it is complex. Rather, these are the things we value; there happen to be a lot of them, and they are complexly interrelated. Chopping away huge chunks of them and focusing on pleasure is probably a bad thing, which we would not want.
It may be the case that the FAI will extrapolate much more complex values, or much simpler values, but our current values must be the starting point and our current values are complex.
This is an extreme statement about everyone’s preference, not even your own preference or your own belief about your own preference. One shouldn’t jump that far.
It can’t actually do that, because it’s not what its preference tells it to do. The same way you can’t jump out of the window given you are not suicidal.
By that reasoning, World of Warcraft is not a game because the admins can’t make me level 80 on day 1, because that’s not what their preferences tell them to do… Or am I missing your point?
I’m attacking a specific argument that “FAI could just flick a switch”. Whether it moves your conclusion about the described situation being a game depends on how genuine your argument for it being a game was and on how much you accept my counter-argument.
Could one of you précis the disagreement in a little more detail and with background? When you and Wei Dai disagree, I’d really like to understand the discussion better, but the discussion it sprang out of doesn’t seem all that enlightening—thanks!
I originally said that post-FAI, we’d have no real problems to solve, so everything we do would be like playing games, and we’d take a status hit because of that. Nesov allegedly found a way to recast the situation so that we can avoid taking the status hit, but I remain unconvinced. I admit this is one of our more trivial discussions. :)
I originally didn’t bother to do so explicitly, only wrote this reply that seems to have not been understood, but in light of Eliezer’s post about flow of the argument, I’ll recast the structure I see in the last few comments:
Wei: Bridge-building is a game, because FAI could just flick a switch. (Y leads to X having property S; Y = “could flick a switch”, X = “FAI’s world”, S = “is a game”)
Vlad: No it couldn’t, its preference (for us having to make an effort) makes it impossible for that to happen. (Y doesn’t hold for X)
Wei: But there are also games where players don’t get free charity. (Some Z have property S without needing Y)
Vlad: I’m merely saying that Y doesn’t hold, so if Y held any weight in the argument that “Y leads to X having property S”, then having established not-Y, I’ve weakened the support for X having property S, and at least refuted the particular argument for X having property S, even if I haven’t convincingly argued that X doesn’t have property S overall.
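Spelled out a bit more formally, this is only my own sketch of the same flow, reusing the letters above (it is not how either of us actually phrased things):

% Schematic of the exchange above, in my own notation; Y and S are treated as properties.
\begin{align*}
\text{Wei:}  \quad & Y(X) \rightarrow S(X) && \text{$Y$ = ``could flick a switch'', $X$ = FAI's world, $S$ = ``is a game''}\\
\text{Vlad:} \quad & \neg Y(X) && \text{its preference for us making an effort rules the switch-flick out}\\
\text{Wei:}  \quad & \exists Z:\; S(Z) \wedge \neg Y(Z) && \text{some games involve no free charity either}\\
\text{Vlad:} \quad & \neg Y(X) \text{ undercuts } Y(X) \rightarrow S(X) && \text{without establishing $\neg S(X)$ on other grounds}
\end{align*}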
When I wrote “Bridge-building is a game, because FAI could just flick a switch” the intended meaning of “could” was “could if it wanted to”. When I cited WoW later, I was trying to point out that your interpretation of “could” as “could given its actual preferences” can’t be what I intended because it would rule out WoW as a game. I guess I failed to get my point across, and then thought the argument was too inconsequential to continue. But now that you’re using it as an example, I want to clear up what happened.
Is this a disagreement that is more about the meaning of words than anything else? I think you and Nesov are disagreeing about the meanings of ‘game’ and ‘real problems’ (or maybe ‘problems’). Both of you defining those terms would help.
In the short term, I think you are correct. However, in the long term, I’m hoping that the FAI will find a non-disastrous way for us to become superintelligent ourselves, and therefore again be able to participate in solving real problems.
When I build a bridge in a game, I get an in-game reward. I don’t get easier transport to anywhere. If I neglect to build the bridge or play the game at all, I still get to use all the bridges otherwise available to me. ‘Real’ bridges are at the top level of reality available to me. Even the simulation hypothesis does not make these bridges a game.
Why do I want to cross the bridge? To not be bored, to find my love, or to meet some other human value. The AI could do that for me too, and cut out the need for transport. If we follow that logic even a short way, it would be obvious that we don’t want the AI doing certain things for us. If there is danger of us being harmed because the FAI could help but won’t, it need merely help a little more, getting closer to those things we want to do ourselves. If we’re in danger of being harmed by our own laziness, it need only back off. (It might do this at the level of the entire species, for all time, so individuals might be bored or angry or not cross rivers as soon as they would like, but it might optimize for everybody moment to moment.)
If there are things we couldn’t stand to have a machine do, and couldn’t stand for it to not help us with, I think those would be incoherent volitions.