Seems like the Fun Theory sequence answers this question. In summary: if a particular state of affairs turns out to be so boring that we regret going in that direction, then CEV would prefer another way that builds a more exciting world, e.g. by not giving us all the answers or solutions we might momentarily want.
In summary: if a particular state of affairs turns out to be so boring that we regret going in that direction, then CEV would prefer another way...
It would have to turn itself off to fix the problem I am worried about. The problem is the existence of an oracle. The problem is that the first ultraintelligent machine is the last invention that man need ever make.
To fix that problem we would have to turn ourselves into superintelligences rather than creating a singleton. As long as there is a singleton that does everything that we (humanity) want, as long as we are inferior to it, all possible problems are artificially created problems that we have chosen to solve the slow way.
As long as there is a singleton that does everything that we (humanity) want, as long as we are inferior to it, all possible problems are artificially created problems that we have chosen to solve the slow way.
I am basically all right with that, considering that “artificial problems” would still include social challenges. Much of art and sport and games and competition and the other enjoyable aspects of multi-ape systems would probably still go on in some form; certainly, as I understand the choices available to me, I would prefer that they do.
Could you imagine reading science fiction in a world where a process like CEV makes sure that the whole universe suits the needs of humanity?
Yes, absolutely I can.
Right now, we write and read lots and lots of fiction about times from the past that we would not like to live in, or about variations on those periods with some extra fun stuff (e.g. magic spells, fire-breathing dragons, benevolent non-figurehead monarchies) that are nonetheless not safe or comfortable places to live. It can be very entertaining and engaging to read about worlds that we ourselves would not want to be stuck in, such as a historical-fantasy novel about the year 2000, when people could still die of natural causes despite having flying broomsticks.
To fix that problem we would have to turn ourselves into superintelligences rather than creating a singleton.
I’ve told you before this seems like a false dichotomy. Did you give a counterargument somewhere that I missed?
Seems to me the situation has an obvious parallel in the world today: the relationship between children and adults. And since children would presumably still exist when we start the process, they can serve as more than a metaphor. Now children sometimes want to avoid growing up, but I don’t know of any such case we can’t explain as simple fear of death. That certainly suffices to explain my own past behavior. And you assume we’ve fixed death through CEV.
It therefore seems like you’re assuming that we’d desire to stifle our children’s curiosity and their desire to grow, rather than letting them become as smart as the FAI and perhaps drag us along with them. Either that, or you have some unstated objection to superintelligence as a concrete ideal for our future selves to aspire to.
I believe Eliezer’s discussed this issue before. If I remember correctly, he suggested that a CEV-built FAI might rearrange things slightly and then disappear rather than automatically solving all of our problems for us. Of course, we might be able to create an FAI that doesn’t do that by following a different building plan, but I don’t think that says anything about CEV. I’ll try to find that link...
Edit: I may be looking for a comment, not a top-level post. Here’s some related material, in case I can’t find what I’m looking for.
It doesn’t have to turn itself off; it just has to stop taking requests.
Come to think of it, it doesn’t even have to do that. If I were such an oracle and the world were as you describe it, I might well establish the policy that before I solve problem A on humanity’s behalf, I require that humanity solve problem B on its own behalf.
Sure, B is an artificially created problem, but it’s artificially created by me, and humanity has no choice in the matter.
Or it could even focus on the most pressing problems, and leave stuff around the margins for us to work on. Just because it has vast resources doesn’t mean it has infinite resources.
As long as there is a singleton that does everything that we (humanity) want, as long as we are inferior to it, all possible problems are artificially created problems that we have chosen to solve the slow way.
Is having non-artificial problems in your life a terminal value for you? Or is it merely an instrumental value on the path to something else (like “purpose”, or “emotional fulfilment”, or “fun”, or “excitement”, etc.)?
all possible problems are artificially created problems that we have chosen to solve the slow way.
I agree that if this were to happen, it would be a bad thing (I’ll call this the “keeping it real” preference). But it seems like the point at which this happens is the point where humanity has the opportunity to create a value-optimizing singleton, not the point where it actually creates one.
In other words, if we could have built an FAI to solve all of our problems for us but didn’t, then any remaining problems are in a sense “artificial”.
But they seem less artificial in that case. And if there is a continuum of possibilities between “no FAI” and “FAI that immediately solves all our problems for us”, then the FAI may be able to strike a happy balance between “solving problems” and “keeping it real”.
That said, I’m not sure how well CEV addresses this. I guess it would treat “keeping it real” as a human preference and try to satisfy it along with everything else. But it may be that, by the time it even gets to that stage, the ability to “keep it real” has already been permanently lost.
Now children sometimes want to avoid growing up, but I don’t know of any such case we can’t explain as simple fear of death.
They can be afraid of having to deal with adult responsibilities, or the physical symptoms of aging after they’ve reached their prime.
As long as there is a singleton that does everything that we (humanity) want, as long as we are inferior to it, all possible problems are artificially created problems that we have chosen to solve the slow way.
Try formalizing the argument and defining variables. I don’t think it will hold together.