An aligned AI doesn’t need to share human preferences-on-reflection. It just needs to (a) be competent, and (b) help humans remain in control of the AI while carrying out whatever reflective process they prefer (including exploration of different approaches, reconciliation of different perspectives, etc.).
So all I’m hoping is to show something about how deliberation can (a) be smarter, while (b) avoiding introducing “bad” (incorrigible?) optimization. I don’t think this requires either your #1 or #2.
So all I’m hoping is to show something about how deliberation can (a) be smarter, while (b) avoiding introducing “bad” (incorrigible?) optimization. I don’t think this requires either your #1 or #2.
If you try to formalize (a) and (b), what does that look like, and how would you reach the conclusion that an AI actually has (a) and (b)? My #1 and #2 are things I can come up with when I try to think how we might come to have justified confidence that an AI has good reasoning, and I’m not seeing what other solutions pop up if we say an aligned AI doesn’t need to share human preferences-on-reflection but only needs to be competent and help humans remain in control.
Competence can be an empirical claim, so (a) seems much more straightforward.
Is there some sense in which “argue that a process is normatively correct” is more of a solution than “argue that a process doesn’t optimize for something ‘bad’”? I agree that both of the properties are hard to formalize or achieve, but the second one currently looks easier to me (and may even be a subproblem of the first one—e.g. my current best guess is that good reasoners need a cognitive immune system).
Competence can be an empirical claim, so (a) seems much more straightforward.
Once again I’m having trouble seeing something that you think is straightforward. If an AI can’t determine my values-upon-reflection, that seems like a kind of incompetence. If it can’t do that, it seems likely there are other things it can’t do. Perhaps you can define “competence” in a way that excludes that class of things and argue that’s good enough, but I’m not sure how you’d do that.
Is there some sense in which “argue that a process is normatively correct” is more of a solution than “argue that a process doesn’t optimize for something ‘bad’”?
I think we might eventually be able to argue that a process is normatively correct by understanding it at each of the philosophical/mathematical/algorithmic levels, but that kind of white-box understanding does not seem possible if your process incorporates an opaque object (e.g., a machine-learned imitation of human behavior), so I think the best you can hope to achieve in that case is a black-box understanding where you show that your process induces the same (or close enough) distribution of outcomes as a group of humans in an ideal environment.
If your “argue that a process doesn’t optimize for something ‘bad’” is meant to be analogous to my white-box understanding, it seems similarly inapplicable due to the presence of the opaque object in your process. If it’s meant to be analogous to my black-box understanding, I don’t see what the analogy is. In other words, what are you hoping to show instead of “induces the same (or close enough) distribution of outcomes as a group of humans in an ideal environment”?
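For concreteness, here is a toy sketch of what the black-box criterion could look like if one tried to operationalize it: sample outcomes from the process and from a group of humans in (an approximation of) the ideal environment, and check that the empirical outcome distributions are close. Everything below is a placeholder assumption, the discrete outcome labels, the samples, and the closeness threshold included; it is only meant to make the shape of the claim concrete.

```python
from collections import Counter

def empirical_distribution(samples):
    """Turn a list of discrete outcome labels into an empirical distribution."""
    counts = Counter(samples)
    total = len(samples)
    return {outcome: n / total for outcome, n in counts.items()}

def total_variation_distance(p, q):
    """Total variation distance between two discrete distributions given as dicts."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in support)

def close_enough(process_outcomes, ideal_human_outcomes, epsilon=0.05):
    """Black-box check: do the two processes induce approximately the same
    distribution over outcomes? The outcome labels, the samples, and epsilon
    are placeholders for whatever a real operationalization would use."""
    p = empirical_distribution(process_outcomes)
    q = empirical_distribution(ideal_human_outcomes)
    return total_variation_distance(p, q) <= epsilon

# Hypothetical usage with made-up outcome labels:
print(close_enough(["A", "A", "B", "C"], ["A", "B", "B", "C"]))
```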
but that kind of white-box understanding does not seem possible if your process incorporates an opaque object (e.g., a machine-learned imitation of human behavior), so I think the best you can hope to achieve in that case is a black-box understanding where you show that your process induces the same (or close enough) distribution of outcomes as a group of humans in an ideal environment
Suppose I use normatively correct reasoning, but I also use a toaster designed by a normal engineer. The engineer is less capable than I am in every respect, and I watched them design and build the toaster to verify that they didn’t do anything tricky. Then I verified that the toaster does seem to toast toast. But I have no philosophical or mathematical understanding of the toaster-design-process. Your claim seems to be that there are no rational grounds for accepting use of the toaster, other than to argue that accepting it doesn’t change the distribution of outcomes (which probably isn’t true, since it e.g. slightly changes the relative influence of different internal drives by changing what food I eat). Is that right?
What if they designed a SAT solver for me? Or wrote a relativity textbook? Do I need to be sure that nothing like that happens in my deliberative process, in order to have confidence in it?
If those cases don’t seem analogous, can you be more clear about what you mean by “opaque,” or what quantitative factors make an opaque object problematic? So far your argument doesn’t seem to rely on any properties of the opaque object at all.
(This case isn’t especially analogous to the deliberative process I’m interested in. I’m bringing it up because I don’t think I yet understand your intuitive dichotomy.)
When you wrote “suppose I use normatively correct reasoning” did you mean suppose you, Paul, use normatively correct reasoning, or suppose you are an AI who uses normatively correct reasoning? I’ll assume the latter for now.
Generally, the AI would use its current reasoning process to decide whether or not to incorporate new objects into itself. I’m not sure what that reasoning process will do exactly, but presumably it would involve something like looking for and considering proofs/arguments/evidence to the effect that incorporating the new object in some specified way will allow it to retain its normatively correct status, or that the distribution of outcomes will be sufficiently unchanged.
If an AI uses normatively correct reasoning, the toaster shouldn’t change the distribution of outcomes, since letting food influence its reasoning process is obviously not a normative thing to do, and it should be easy to show that there is no influence. For the SAT solver, the AI should be able to argue that it’s safe to use it for certain purposes, because it can verify the answer that the solver gives, and for the relativity textbook, it may be able to directly verify that the textbook doesn’t contain anything that can manipulate or bias its outputs.
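To make the SAT-solver point concrete: checking a claimed satisfying assignment against a CNF formula takes one pass over the clauses and doesn’t require trusting the solver at all. A minimal sketch, using the usual signed-integer (DIMACS-style) encoding of literals; this is purely illustrative:

```python
def check_sat_assignment(cnf_clauses, assignment):
    """Verify a claimed satisfying assignment without trusting its source.

    cnf_clauses: list of clauses, each a list of nonzero ints (DIMACS-style:
                 3 means variable 3 is true, -3 means variable 3 is false).
    assignment:  dict mapping variable number -> bool.
    """
    for clause in cnf_clauses:
        # A clause is satisfied if at least one of its literals is true.
        if not any(assignment.get(abs(lit), False) == (lit > 0) for lit in clause):
            return False
    return True

# (x1 OR NOT x2) AND (x2 OR x3), with a candidate assignment from an untrusted solver.
clauses = [[1, -2], [2, 3]]
claimed = {1: True, 2: True, 3: False}
print(check_sat_assignment(clauses, claimed))  # True: the answer checks out
```

Note that this only checks the “satisfiable” direction, and it says nothing about which satisfying assignment an untrusted solver chooses to return.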
I guess by “opaque” I meant a complex object that wasn’t designed to be easily reasoned about, so it’s very hard to determine whether it has a given property that would be relevant to showing that it can be used safely as part of an AI’s reasoning process. For example, a typical microprocessor is an opaque object because it may come with hard-to-detect flaws and backdoors, whereas a microprocessor designed to be provably correct would be a transparent object.
(Does that help?)
Suppose that I, Paul, use a toaster or SAT solver or math textbook.
I’m happy to drop the normatively correct reasoning assumption if the counterfactual begs the question. The important points are:
I’m happy trusting future Paul’s reasoning (in particular I do not consider it a top altruistic priority to find a way to avoid trusting future Paul’s reasoning)
That remains true even though future Paul would happily use an opaque toaster or textbook (under the conditions described).
I’m not convinced that any of your arguments would be sufficient to trust a toaster / textbook / SAT solver:
and it should be easy to show that there is no influence
Having new memories will by default change the output of deliberation, won’t it?
For the SAT solver, the AI should be able to argue that it’s safe to use it for certain purposes, because it can verify the answer that the solver gives
Satisfying instances produced by an arbitrarily powerful adversary don’t seem safe for anything.
and for the relativity textbook, it may be able to directly verify that the textbook doesn’t contain anything that can manipulate or bias its outputs
I don’t see how this would fit into your framework without expanding it far enough that it could contain the kind of argument I’m gesturing at (by taking bad === “manipulating or biasing its outputs”).
If we’re talking about you, Paul, then what’s different is that since you don’t have a good understanding of what normatively correct reasoning is, you can only use black-box-type reasoning to conclude that certain things are safe to do. We’d happily use the opaque toaster or textbook because we have fairly strong empirical evidence that doing so doesn’t change the distribution of outcomes much. Using a toaster might change a particular outcome vs. not using it, but there seems to be enough stochasticity in a human deliberation process that it wouldn’t make a significant difference to the overall distribution of outcomes. With a textbook, you reason that with enough time you’d reproduce its contents yourself, and whatever actual differences there are between reading the textbook and figuring out relativity by yourself are again lost in the overall noise of the deliberative process. (We have fairly strong empirical evidence that reading such a textbook written by another human is unlikely to derail our deliberative process in a way that’s not eventually recoverable.)
One reply to this might be that we can hope to gather an amount of empirical evidence about meta-execution that would be comparable to the evidence we have about toasters and textbooks. I guess my concern there is that we’ll need much stronger assurances if we’re going to face other superintelligent AIs in our environment. For example, that textbook might contain subtle mistakes that cause you to reason incorrectly about certain questions (analogous to edge-case questions where your meta-execution would give significantly different answers than your reflective equilibrium would), but there is no one in your current environment who can exploit such errors.
ETA: Another reason to be worried is that, compared to humans using things produced by other humans, it seems reasonable to suspect (have a high prior) that meta-execution’s long-run safety can’t be extrapolated well from what it does in the short term, since meta-execution is explicitly built out of a component that emphasizes imitation of short-term human behavior while throwing away internal changes that might be very relevant to long-run outcomes. (Again, this may be missing your point about not needing to reproduce values-upon-reflection, but I just don’t understand how your alternative approach to understanding deliberation would work if you tried to formalize it.)
Satisfying instances produced by an arbitrarily powerful adversary don’t seem safe for anything.
Not sure if this is still relevant to the current interpretation of your question, but couldn’t you use it to safely break encryption schemes, at least?
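As a toy illustration of that narrower claim: however a candidate key is produced, even by an adversarial solver, you only accept it if it independently explains a known plaintext/ciphertext pair. The XOR “cipher” below is a stand-in chosen to keep the sketch self-contained, not a real attack target:

```python
def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy XOR 'cipher', used only to illustrate verifying a recovered key."""
    return bytes(k ^ p for k, p in zip(key, plaintext))

def verify_candidate_key(candidate_key: bytes, plaintext: bytes, ciphertext: bytes) -> bool:
    """Accept a key from an untrusted source only if it reproduces the observed
    ciphertext from the known plaintext; the check ignores how the key was found."""
    return xor_encrypt(candidate_key, plaintext) == ciphertext

known_plaintext = b"attack at dawn"
true_key = b"secretsecretse"          # hidden from the searcher in a real setting
observed_ciphertext = xor_encrypt(true_key, known_plaintext)

claimed_key = true_key                # stand-in for whatever an untrusted oracle returns
print(verify_candidate_key(claimed_key, known_plaintext, observed_ciphertext))  # True
```

For a real cipher the verification step has the same shape: re-encrypt a known plaintext with the candidate key and compare, so only the search is delegated to the untrusted solver.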