A question for Eliezer: If you were superintelligent, would you destroy the world? If not, why not?
If your answer is “yes” and the same would be true for me and everyone else for some reason I don’t understand, then we’re probably doomed. If it is “no” (or even just “maybe”), then there must be something about the way we humans think that would prevent world destruction even if one of us were ultra-powerful. If we can understand that and transfer it to an AGI, we should be able to prevent destruction, right?
I would “destroy the world” from the perspective of natural selection in the sense that I would transform it in many ways, none of which were making lots of copies of my DNA, or the information in it, or even having tons of kids half resembling my old biological self.
From the perspective of my highly similar fellow humans with whom I evolved in context, they’d get nice stuff, because “my fellow humans get nice stuff” happens to be the weird unpredictable desire that I ended up with at the equilibrium of reflection on the weird unpredictable godshatter that ended up inside me, as the result of my being strictly outer-optimized over millions of generations for inclusive genetic fitness, which I now don’t care about at all.
Paperclip-numbers do well out of paperclip-number maximization. The hapless outer creators of the thing that weirdly ends up a paperclip maximizer, not so much.
This may not be what evolution had “in mind” when it created us. But couldn’t we copy something like this into a machine so that it “thinks” of us (and our descendants) as its “fellow humans” who should “get nice stuff”? I understand that we don’t know how to do that yet. But the fact that Eliezer has some kind of “don’t destroy the world from a fellow human perspective” goal function inside his brain seems to mean a) that such a function exists and b) that it can be encoded in a neural network, right?
I was also thinking about the specific way we humans weigh competing goals and values against each other. So while, for instance, we do destroy much of the biosphere by blindly pursuing our misaligned goals, some of us still care about nature, animal welfare, and rainforests, and we may even manage to prevent their total destruction.
I think we (mostly) all agree that we want to somehow encode human values into AGIs. That’s not a new idea. The devil is in the details.
I see how my above question seems naive. Maybe it is. But if one potential answer to the alignment problem lies in the way our brains work, maybe we should try to understand that better, instead of (or in addition to) letting a machine figure it out for us through some kind of “value learning”. (Copied from my answer to AprilSR:) I stumbled across two papers from a few years ago by a psychologist, Mark Muraven, who thinks that the way humans deal with conflicting goals could be important for AI alignment (https://arxiv.org/abs/1701.01487 and https://arxiv.org/abs/1703.06354). They appear a bit shallow to me and don’t contain any specific ideas on how to implement this. But maybe Muraven has a point here.
I think your question is excellent. “How does the single existing kind of generally intelligent agent form its values?” is one of the most important and neglected questions in all of alignment, I think.
Ah, I see. You might be interested in this sequence then!
Yes, thank you!
Yes. But my impression so far is that anything we can even imagine in terms of a goal function will go badly wrong somehow. So I find it a bit reassuring that at least one such function that will not necessarily lead to doom seems to exist, even if we don’t know how to encode it yet.
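For what it’s worth, the standard way an imagined goal function “goes badly wrong” can be shown in a few lines. This is a toy Goodhart sketch, with both functions and all numbers invented purely for illustration: a proxy objective that agrees with the true goal under weak optimization diverges from it under strong optimization.

```python
# Toy Goodhart sketch (functions and numbers invented for illustration):
# a proxy that tracks the true goal at low effort diverges from it
# when optimized hard.

def true_value(x):
    # What we actually want: rises at first, then collapses
    # when x is pushed too far.
    return x - (x * x) / 10

def proxy_value(x):
    # The measurable stand-in the optimizer actually maximizes.
    return x

candidates = [i / 10 for i in range(201)]     # x in [0, 20]
x_proxy = max(candidates, key=proxy_value)    # optimizer's pick: 20.0
x_true = max(candidates, key=true_value)      # what we'd want: 5.0

print(true_value(x_true))   # 2.5
print(true_value(x_proxy))  # -20.0
```

The point of the sketch is only that the divergence is structural: any finitely specified proxy leaves the optimizer free to exploit the region where proxy and goal come apart.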
I guess there’s some meta-level question here that I’m interested in, as a sort of elaboration, which is something like: how do you go about balancing which meta-levels of the world to satisfy and which to destroy? [I kind of have a sense that Eliezer’s answer can be guessed as an extension of the meta-ethics sequence, and so am interested both in his actual answer and other people’s answers.]
For example, one might imagine a mostly-upload situation like The Metamorphosis of Prime Intellect / Friendship is Optimal / Second Life / etc., wherein everyone gets a materially abundant digital life in their shard of the metaverse, with communication heavily constrained (if nothing else, by requiring mutual consent). This, of course, discards as no-longer-relevant entities that exist on higher meta-levels; nations will be mostly irrelevant in such a world, companies will mostly stop existing, and so on.
But one could also apply the same logic a level lower. If you take Internal Family Systems / mental modules seriously, humans don’t look like atomic objects, they look like a collection of simpler subagents balanced together in a sort of precarious way. (One part of you wants to accumulate lots of fat to survive the winter, another part of you wants to not accumulate lots of fat to look attractive to mates, the thing the ‘human’ is doing is balancing between those parts.) And so you can imagine a superintelligent system out to do right by the mental modules ‘splitting them apart’ in order to satisfy them separately, with one part swimming in a vat of glucose and the other inhabiting a beautiful statue, and discarding the ‘balancing between the parts’ system as no-longer-relevant.
Of course, applying this logic a level higher (the things to preserve are communities/nations/corporations/etc.) seems like it can quite easily be terrible for the people involved, and feels like it’s preserving problems in order to maintain the relevance of traditional solutions.
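The subagent-splitting point can be made concrete with a toy model. Everything here is invented for illustration (the utility functions, the single fat-level variable, the weights): a “human” as a weighted balance between two subagents with conflicting, diminishing-returns preferences, versus what happens when the balancer is discarded and each subagent is satisfied separately.

```python
import math

# Toy model (utilities invented for illustration): a "human" as a
# balance between two subagents with conflicting preferences over
# body-fat level f in [0, 1].

def survival_utility(f):
    # This subagent prefers more fat reserves (concave: diminishing returns).
    return math.sqrt(f)

def attractiveness_utility(f):
    # This subagent prefers less fat.
    return math.sqrt(1.0 - f)

def balanced_choice(weights=(0.5, 0.5)):
    # The 'human' picks the f maximizing a weighted sum of subagent
    # utilities: a compromise neither subagent fully endorses.
    candidates = [i / 100 for i in range(101)]
    return max(candidates,
               key=lambda f: weights[0] * survival_utility(f)
                           + weights[1] * attractiveness_utility(f))

f = balanced_choice()
compromise = (survival_utility(f), attractiveness_utility(f))

# "Splitting the subagents apart" satisfies each fully and separately,
# discarding the balancing system as no-longer-relevant...
split = (survival_utility(1.0), attractiveness_utility(0.0))

# ...and strictly dominates the compromise on both subagents' terms.
print(compromise, split)
```

On each subagent’s own scale the split outcome is strictly better, which is exactly why a system optimizing “do right by the mental modules” might see no reason to keep the balancer around.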
My nonexpert guess would be that the “superintelligent brain emulation” class of solutions has potential to work. But we’d still need to figure out how to prevent an AGI from being made until we’re ready to actually implement that solution.
My hope was that maybe we can recreate the way we humans make beneficial decisions for fellow beings without simulating a complete brain. But I agree that AGI might be built before we have solved this.
I think doing that via, say, reinforcement learning is well established as a possible strategy, and one that has been discarded because it probably won’t generate the properties we want.
Maybe we could solve legibility well enough to extract the value-assessing part out of a human brain and then put it on a computer? This doesn’t strike me as a full solution, but it might be a useful idea.
I was thinking more about the way psychologists try to understand how we make decisions. I stumbled across two papers from a few years ago by one such psychologist, Mark Muraven, who thinks that the way humans deal with conflicting goals could be important for AI alignment (https://arxiv.org/abs/1701.01487 and https://arxiv.org/abs/1703.06354). They appear a bit shallow to me and don’t contain any specific ideas on how to implement this. But maybe Muraven has a point. Maybe we should put more effort into understanding how we humans deal with goals, instead of letting an AI figure it out for itself through RL or IRL.
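Muraven’s papers don’t propose an implementation, but here is one purely hypothetical sketch (all actions and numbers invented) of how “dealing with conflicting goals” could differ from maximizing a single number: a scalar maximizer will trade any amount of a secondary goal for enough of the primary one, while a conflict-aware chooser keeps the goals separate and treats one as a constraint it won’t violate, the way a human pursuing profit might still refuse to raze the last rainforest.

```python
# Purely hypothetical sketch: scalar maximization vs. conflict-aware
# choice over invented actions.

actions = {
    # action: (gain to primary goal A, harm to protected goal B)
    "aggressive": (10.0, 5.0),
    "moderate":   (6.0,  1.0),
    "cautious":   (3.0,  0.0),
}

def scalar_maximizer(actions, tradeoff=0.1):
    # Collapses both goals into one number; with a small enough
    # weight on B, large harm to B becomes acceptable.
    return max(actions, key=lambda a: actions[a][0] - tradeoff * actions[a][1])

def conflict_aware(actions, harm_budget=2.0):
    # Keeps the goals separate: optimize A only among actions whose
    # harm to B stays within a hard budget.
    allowed = {a: v for a, v in actions.items() if v[1] <= harm_budget}
    return max(allowed, key=lambda a: allowed[a][0])

print(scalar_maximizer(actions))  # "aggressive"
print(conflict_aware(actions))    # "moderate"
```

The constraint version is not itself safe (budgets can be mis-set, and a capable optimizer will route around them), but it illustrates the kind of structural difference the psychology literature gestures at.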