This example, and a few others in your post that follow the general pattern of “If you already knew X, then you would have no volition to go and learn X”, don’t apply to CEV as I understand it.
Human values are complex, and small perturbations can influence them considerably. Just think about the knowledge that a process like CEV exists in the first place and how it would influence our curiosity, our science, and our status-seeking behavior.
If there were a process that could figure out everything conditional on the volition of humanity, then I at least would perceive the value of discovery to be considerably diminished. After all, the only reason I wouldn’t already know something would be that humanity had decided to figure it out on its own. But that would be an artificially created barrier. It would never be the same as before.
Take for example an artificial intelligence researcher. It would be absurd to try to figure out how general intelligence works, given the existence of CEV, and then receive praise from their fellow humans. Or how would you feel about a Nobel Prize in economics if humanity had already learned all there is to learn about economics by creating the CEV process? Would the prize have the same value as before?
Another example is science fiction. Could you imagine reading science fiction in a world where a process like CEV makes sure that the whole universe suits the needs of humanity? Reading about aliens or AI would become ridiculous.
Those are just a few quick examples. That there won’t be any problems left to fix is another. My point is: are we sure that CEV won’t cripple a lot of activities, at least for nerds like me?
This line of reasoning is hardly limited to CEV. I’m reminded of Bishop Wright’s apocryphal sermon about how we’ve pretty much discovered everything there is to discover.
Sure, if we progress far enough, fast enough, that all the meaningful problems I can conceive of are solved and all the meaningful goals I can imagine are reached, then there will only be two kinds of people: people solving problems I can’t conceive of in order to achieve goals I can’t imagine, and people living without meaningful goals and problems. We have the latter group today; I suspect we’ll have them tomorrow as well.
The possibility of their continued existence—even the possibility that everyone will be in that category—doesn’t strike me as a good enough reason to avoid such progress.
I’m also tempted to point out that there’s something inherently inconsistent about a future where the absence of meaningful problems to solve is a meaningful problem, although I suspect that’s just playing with words.
I’m also tempted to point out that there’s something inherently inconsistent about a future where the absence of meaningful problems to solve is a meaningful problem, although I suspect that’s just playing with words.
I don’t think that’s just playing with words. If we’ve solved all the problems, then we’ve solved that problem. We shouldn’t assume a priori that solving that problem is impossible.
I agree with you that we shouldn’t assume that finding meaningful activities for people to engage in as we progress is impossible, not least because I think it is possible.
Actually, I’d say something stronger: I think right now we suck as a species at understanding what sorts of activities are meaningful and how to build a social infrastructure that creates such activities, and that we are suffering for the lack of it (and have done for millennia), and that we are just starting to develop tools with which to engage with this problem efficiently. In a few generations we might really see some progress in this area.
Nevertheless, I suspect that an argument of the form “lack of meaningful activity due to the solving of all problems is a logical contradiction, because such a lack of meaningful activity would then be an unsolved problem” is just playing with words, because the supposed contradiction is due entirely to the fact that the word “problem” means subtly different things in its two uses in that sentence.
I don’t mean anything deep by it, just that for example a system might be able to optimize our environment to .99 human-optimal (which is pretty well approximated by the phrase “solving all problems”) and thereby create, say, a pervasive and crippling sense of ennui that it can’t yet resolve (which would constitute a “problem”). There’s no contradiction in that scenario; the illusion of contradiction is created entirely by the sloppiness of language.
I don’t think I follow; if the environment is .99 human-optimal, then that remaining .01 gap implies that some problems remain to be solved, however few or minor, right?
It might simply be impossible to solve all problems, because of conflicting dependencies.
Yes, I agree that the remaining .01 gap represents problems that remain to be solved, which implies that “solving all problems” doesn’t literally apply to that scenario. If you’re suggesting that therefore such a scenario isn’t well-enough approximated by the phrase “solving all problems” to justify the phrase’s use, we have different understandings of the level of justification required.
The problem is not that all problems might be solved at some point, but that as long as we don’t turn ourselves into something as capable as the CEV process, there exists an oracle that we could ask if we wanted to. The existence of such an oracle is what diminishes the value of research and discovery.
I agree that the existence of such an oracle closes off certain avenues of research and discovery.
I’m not sure that keeping those avenues of research and discovery open is particularly important, but I agree that it might be.
If it turns out that the availability of such an oracle closing off certain avenues is an important human problem, it seems to follow that any system capable of and motivated to solve human problems will ensure that no such oracle is available.
The existence of such an oracle is what diminishes the value of research and discovery
You seem to be saying that research and discovery have some intrinsic value, in addition to the benefits of actually discovering things and understanding them. If so, what is this value?
The only answer I can think of is something like, “learning about new avenues of research that the oracle had not yet explored”, but I’m not sure whether that makes sense or not—since the perfect oracle would explore every avenue of research, and an imperfect oracle would strive toward perfection (as long as the oracle is rational).
Well, the process of research and discovery can itself be enjoyable. That said, I don’t feel that there is a need to hold onto our current enjoyable activities if a CEV can create novel superior ways for us to have fun.
I would posit that divergent behaviors and departures from the norm will still occur despite the existence of such an oracle, just for the sake of imagination, exploration, and the enjoyment of the process itself. Such an oracle would also be aware of unknown future factors, and of the benefits of diverse approaches to problems whose long-term benefits and viability remain unknown until certain processes have been executed. As you said, such an oracle would try to explore every avenue of research, while still focusing on the ones deemed most likely to be fruitful. Such an oracle should also be good at self-reflection, able to question its own approaches and the various perspectives it is able to subsume. After all, isn’t introspection and self-reflection part of how one improves?
Then there’s the Fun Theory sequence that DSimon has posted about.
Seems like the Fun Theory sequence answers this question. In summary: if a particular state of affairs turns out to be so boring that we regret going in that direction, then CEV would prefer another way that builds a more exciting world, e.g. by not giving us all the answers or solutions we might momentarily want.
In summary: if a particular state of affairs turns out to be so boring that we regret going in that direction, then CEV would prefer another way...
It would have to turn itself off to fix the problem I am worried about. The problem is the existence of an oracle. The problem is that the first ultraintelligent machine is the last invention that man need ever make.
To fix that problem we would have to turn ourselves into superintelligences rather than creating a singleton. As long as there is a singleton that does everything that we (humanity) want, as long as we are inferior to it, all possible problems are artificially created problems that we have chosen to solve the slow way.
As long as there is a singleton that does everything that we (humanity) want, as long as we are inferior to it, all possible problems are artificially created problems that we have chosen to solve the slow way.
I am basically alright with that, considering that “artificial problems” would still include social challenge. Much of art and sport and games and competition and the other enjoyable aspects of multi-ape-systems would probably still go on in some form; certainly as I understand the choices available to me I would definitely prefer that they do.
Could you imagine reading science fiction in a world where a process like CEV makes sure that the whole universe suits the needs of humanity?
Yes, absolutely I can.
Right now, we write and read lots and lots of fiction about times from the past that we would not like to live in. Or, about variations on those periods with some extra fun stuff (e.g. magic spells, fire-breathing dragons, benevolent non-figurehead monarchies) that are nonetheless not safe or comfortable places to live. It can be very entertaining and engaging to read about worlds that we ourselves would not want to be stuck in, such as a historical-fantasy novel about the year 2000 when people could still die of natural causes, despite having flying broomsticks.
To fix that problem we would have to turn ourselves into superintelligences rather than creating a singleton.
I’ve told you before this seems like a false dichotomy. Did you give a counterargument somewhere that I missed?
Seems to me the situation has an obvious parallel in the world today. And since children would presumably still exist when we start the process, they can serve as more than a metaphor. Now children sometimes want to avoid growing up, but I don’t know of any such case we can’t explain as simple fear of death. That certainly suffices for my own past behavior. And you assume we’ve fixed death through CEV.
It therefore seems like you’re assuming that we’d desire to stifle our children’s curiosity and their desire to grow, rather than letting them become as smart as the FAI and perhaps dragging us along with them. Either that or you have some unstated objection to super-intelligence as a concrete ideal for our future selves to aspire to.
I believe Eliezer’s discussed this issue before. If I remember correctly, he suggested that a CEV-built FAI might rearrange things slightly and then disappear rather than automatically solving all of our problems for us. Of course, we might be able to create an FAI that doesn’t do that by following a different building plan, but I don’t think that says anything about CEV. I’ll try to find that link...
Edit: I may be looking for a comment, not a top-level post. Here’s some related material, in case I can’t find what I’m looking for.
It doesn’t have to turn itself off, it just has to stop taking requests.
Come to think of it, it doesn’t even have to do that. If I were such an oracle and the world were as you describe it, I might well establish the policy that before I solve problem A on humanity’s behalf, I require that humanity solve problem B on their own behalf.
Sure, B is an artificially created problem, but it’s artificially created by me, and humanity has no choice in the matter.
Or it could even focus on the most pressing problems, and leave stuff around the margins for us to work on. Just because it has vast resources doesn’t mean it has infinite resources.
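The gating policy described in the last few comments can be made concrete with a toy sketch. Everything here is illustrative and hypothetical (the class name, the "homework" scheme, the problem labels); it corresponds to nothing in the actual CEV proposal, and just shows one way an oracle could ration its help so that humanity keeps some problems to work on:

```python
# Toy model of an oracle that refuses a request until humanity has first
# solved a "homework" problem the oracle assigns. Purely illustrative.
class GatedOracle:
    def __init__(self):
        # Maps each requested problem to the homework assigned for it.
        self.homework = {}

    def request_solution(self, problem, homework_done=None):
        # First request for a problem: assign homework instead of answering.
        if problem not in self.homework:
            self.homework[problem] = f"prerequisite-for-{problem}"
            return ("denied", self.homework[problem])
        # Later requests: answer only if the assigned homework was completed.
        if homework_done == self.homework[problem]:
            return ("solved", problem)
        return ("denied", self.homework[problem])


oracle = GatedOracle()
status, hw = oracle.request_solution("aging")      # denied; homework assigned
status, _ = oracle.request_solution("aging", hw)   # solved once homework is done
```

The point of the sketch is only that "stop taking requests" and "solve everything instantly" are not the only two policies available; a gate like this leaves artificial-but-mandatory problems on humanity's plate, exactly as described above.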
As long as there is a singleton that does everything that we (humanity) want, as long as we are inferior to it, all possible problems are artificially created problems that we have chosen to solve the slow way.
Is it a terminal value for you that you want to have non-artificial problems in your life? Or is it merely an instrumental value on the path to something else (like “purpose”, or “emotional fulfilment”, or “fun”, or “excitement”, etc)?
all possible problems are artificially created problems that we have chosen to solve the slow way.
I agree that if this were to happen, it seems like a bad thing (I’ll call this the “keeping it real” preference). But it seems like the point when this happens is the point where humanity has the opportunity to create a value-optimizing singleton, not the point where it actually creates one.
In other words, if we could have built an FAI to solve all of our problems for us but didn’t, then any remaining problems are in a sense “artificial”.
But they seem less artificial in that case. And if there is a continuum of possibilities between “no FAI” and “FAI that immediately solves all our problems for us”, then the FAI may be able to strike a happy balance between “solving problems” and “keeping it real”.
That said, I’m not sure how well CEV addresses this. I guess it would treat “keeping it real” as a human preference and try to satisfy it along with everything else. But it may be that, if it even gets to that stage, the ability to “keep it real” has been permanently lost.
If we’ve solved all the problems, then we’ve solved that problem.

See also: fun theory.
the word “problem” means subtly different things in its two uses in that sentence

Can you explain what those two meanings are?
Now children sometimes want to avoid growing up, but I don’t know of any such case we can’t explain as simple fear of death.

They can be afraid of having to deal with adult responsibilities, or of the physical symptoms of aging after they’ve reached their prime.
all possible problems are artificially created problems that we have chosen to solve the slow way.

Try formalizing the argument and defining variables. I don’t think it will hold together.