I tried to solve the problem and came to think it was very hard to make the sort of substantial progress that would meaningfully bridge the gap from our current epistemic/philosophical state to one where the problem is largely solved. I did make incremental progress, but not the sort of incremental progress I saw as attacking the really hard problems. Towards the later parts of my work at MIRI, I was doing research that seemed to largely overlap with complex systems theory (in order to reason about how to align autopoietic systems similar to evolution), in a way that made it hard to imagine I’d come up with useful crisp formal definitions/proofs/etc.
This seems a bit low, given that there are a number of disjunctive ways that it could happen.
I feel like saying 2% now. Not sure what caused the update.
I’m pretty worried that such technology will accelerate value drift within the current autopoietic system.
I’m also worried about something like this, though I would state the risk as “mass insanity” rather than “value drift”. (“Value drift” brings to mind an individual or group trying to preserve their current object-level values, rather than trying to preserve somewhat-universal human values and sane reflection processes.)
I hope you stay engaged with the AI risk discussions and maintain your credibility. I’m really worried about the self-selection effect where people who think AI alignment is really hard end up quitting or not working in the field in the first place, and then it appears to outsiders that all of the AI safety experts don’t think the problem is that hard.
I’m also worried about something like this, though I would state the risk as “mass insanity” rather than “value drift”. (“Value drift” brings to mind an individual or group trying to preserve their current object-level values, rather than trying to preserve somewhat-universal human values and sane reflection processes.)
I’m envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won’t be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.
You didn’t respond to my point that defending against this type of technology does seem to require solving hard philosophical problems. What are your thoughts on this?
I agree that selection bias is a problem. I plan on discussing and writing about AI alignment somewhat in the future. Also note that Eliezer and Nate think the problem is pretty hard and unlikely to be solved.
You didn’t respond to my point that defending against this type of technology does seem to require solving hard philosophical problems. What are your thoughts on this?
Automation technology (in an adversarial context) is kind of like a very big gun. It projects a lot of force. It can destroy lots of things if you point it wrong. It might be hard to point at the right target. And you might kill or incapacitate yourself if you do something wrong. But it’s inherently stupid, and has no agency by itself. You don’t have to solve philosophy to deal with large guns, you just have to do some combination of (a) figure out how to wield them to do good with them, (b) get people to stop using them, (c) find strategies for fighting against them, or (d) defend against them. (Certainly, some of these things involve philosophy, but they don’t necessarily require fully formalizing anything).
The threat is different in kind from that of a fully-automated autopoietic cognitive system, which is more like a big gun possessed by an alien soul.
You don’t have to solve philosophy to deal with large guns, you just have to do some combination of (a) figure out how to wield them to do good with them, (b) get people to stop using them, (c) find strategies for fighting against them, or (d) defend against them.
Do you have ideas for how to do these things, for the specific “big gun” that I described earlier?
The threat is different in kind from that of a fully-automated autopoietic cognitive system, which is more like a big gun possessed by an alien soul.
If the big gun is being wielded by humans whose values and thought processes have been corrupted (by others using that big gun, or through some other way like being indoctrinated in bad ideas from birth), that doesn’t seem very different from a big gun possessed by an alien soul.
Do you have ideas for how to do these things, for the specific “big gun” that I described earlier?
Roughly, minimize direct contact with things that cause insanity, be the sanest people around, and as a result be generally more competent than the rest of the world at doing real things. At some point use this capacity to oppose things that cause insanity. I haven’t totally worked this out.
If the big gun is being wielded by humans whose values and thought processes have been corrupted (by others using that big gun, or through some other way like being indoctrinated in bad ideas from birth), that doesn’t seem very different from a big gun possessed by an alien soul.
It’s hard to corrupt human values without corrupting other forms of human sanity, such as epistemics and general ability to do things.
defending against this type of technology does seem to require solving hard philosophical problems
Why is this?
The case you describe seems clearly contrary to my preferences about how I should reflect. So a system which helped me implement my preferences would help me avoid this situation (in the same way that it would help me avoid being shot, or giving malware access to valuable computing resources).
It seems quite plausible that we’ll live to see a world where it’s considered dicey for your browser to uncritically display sentences written by an untrusted party.
It seems quite plausible that we’ll live to see a world where it’s considered dicey for your browser to uncritically display sentences written by an untrusted party.
How would your browser know who can be trusted, if any of your friends and advisers could be corrupted at any given moment (or just their accounts taken over by malware and used to spread optimized disinformation)?
The case you describe seems clearly contrary to my preferences about how I should reflect.
How would an automated system help you avoid it, aside from blocking off all outside contact? (I doubt I’d be able to ever figure out what my values actually are / should be, if I had to do it without talking to other humans.) If you’re thinking of some sort of meta-execution-style system to help you analyze arguments and distinguish between correct arguments and merely convincing ones, I think that involves solving hard philosophical problems. My understanding is that Jessica agrees with me on that, so I was asking why she doesn’t think the same problem applies in the non-autopoietic automation scenario.
figure out what my values actually are / should be
I think human ideas are like low resolution pictures. Sometimes they show simple things, like circles, so we can make a high resolution picture of the same circle. That’s known as formalizing an idea. But if the thing in the picture looks complicated, figuring out a high resolution picture of it is an underspecified problem. I fear that figuring out my values might be that kind of problem.
So apart from hoping to define a “full resolution picture” of human values, either by ourselves or with the help of some AI or AI-human hybrid, it might be useful to come up with approaches that simply don’t require it at any stage. That was my motivation for this post, which relies on using our “low resolution picture” to describe some particular nice future without considering all possible ones. It’s certainly flawed, but there might be other similar ideas. Does that make sense?
I think I understand what you’re saying, but my state of uncertainty is such that I put a lot of probability mass on possibilities that wouldn’t be well served by what you’re suggesting. For example, the possibility that we can achieve most value not through the consequences of our actions in this universe, but through their consequences in much larger (computationally richer) universes simulating this one. Or that spreading hedonium is actually the right thing to do and produces orders of magnitude more value than spreading anything that resembles human civilization. Or that value scales non-linearly with brain size so we should go for either very large or very small brains.
While discussing the VR utopia post, you wrote “I know you want to use philosophy to extend the domain, but I don’t trust our philosophical abilities to do that, because whatever mechanism created them could only test them on normal situations.” I have some hope that there is a minimal set of philosophical abilities that would allow us to eventually solve arbitrary philosophical problems, and we already have this. Otherwise it seems hard to explain the kinds of philosophical progress we’ve made, like realizing that other universes probably exist, and figuring out some ideas about how to make decisions when there are multiple copies of us in this universe and others.
Of course it’s also possible that’s not the case, and we can’t do better than to optimize the future using our current “low resolution” values, but until we’re a lot more certain of this, any attempt to do this seems to constitute a strong existential risk.