Suppose that I, Paul, use a toaster or SAT solver or math textbook.
I’m happy to drop the normatively correct reasoning assumption if the counterfactual begs the question. The important points are:
I’m happy trusting future Paul’s reasoning (in particular I do not consider it a top altruistic priority to find a way to avoid trusting future Paul’s reasoning)
That remains true even though future Paul would happily use an opaque toaster or textbook (under the conditions described).
I’m not convinced that any of your arguments would be sufficient to trust a toaster / textbook / SAT solver:
and it should be easy to show that there is no influence
Having new memories will by default change the output of deliberation, won’t it?
For the SAT solver, the AI should be able to argue that it’s safe to use it for certain purposes, because it can verify the answer that the solver gives
Satisfying instances produced by an arbitrarily powerful adversary don’t seem safe for anything.
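To make the verification point concrete, here is a minimal sketch (my illustration, not something from the thread) of what checking a SAT solver's answer buys you: a claimed satisfying assignment can be verified against the formula in linear time, so an untrusted solver can't make you accept a wrong answer; what the check does not constrain is *which* satisfying assignment the solver chose, which is the sense in which an adversarially produced instance is only safe as a witness of satisfiability.

```python
# Toy sketch (my illustration): verifying the output of an untrusted SAT solver.
# A CNF formula is a list of clauses; each clause is a list of nonzero ints,
# where k means "variable k is true" and -k means "variable k is false".

def check_assignment(cnf, assignment):
    """Return True iff `assignment` (dict: var -> bool) satisfies every clause.

    The check is linear in the size of the formula, so an adversarial solver
    cannot make us accept a non-satisfying assignment. It says nothing about
    *which* of the possibly many satisfying assignments the solver returned.
    """
    for clause in cnf:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is left unsatisfied
    return True

# (x1 OR NOT x2) AND (x2 OR x3)
cnf = [[1, -2], [2, 3]]

# Assignment claimed by an untrusted solver:
claimed = {1: True, 2: False, 3: True}
print(check_assignment(cnf, claimed))  # True: the answer itself is verified

# Any downstream use of the *particular* assignment (beyond "this formula is
# satisfiable") still trusts the solver's choice among satisfying assignments.
```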
and for the relativity textbook, it may be able to directly verify that the textbook doesn’t contain anything that can manipulate or bias its outputs
I don’t see how this would fit into your framework without expanding it far enough that it could contain the kind of argument I’m gesturing at (by taking bad === “manipulating or biasing its outputs”).
If we’re talking about you, Paul, then what’s different is that since you don’t have a good understanding of what normatively correct reasoning is, you can only use black-box-style reasoning to conclude that certain things are safe to do. We’d happily use the opaque toaster or textbook because we have fairly strong empirical evidence that doing so doesn’t change the distribution of outcomes much. Using a toaster might change a particular outcome versus not using it, but there seems to be enough stochasticity in a human deliberation process that it wouldn’t make a significant difference to the overall distribution of outcomes. With a textbook, you reason that with enough time you’d reproduce its contents yourself, and whatever actual differences there are between reading the textbook and figuring out relativity by yourself are again lost in the overall noise of the deliberative process. (We have fairly strong empirical evidence that reading such a textbook written by another human is unlikely to derail our deliberative process in a way that’s not eventually recoverable.)
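A toy way to picture the “lost in the noise” claim (my own illustration, with made-up dynamics, not a model anyone in the thread proposed): treat deliberation as a long noisy process and the toaster/textbook as a small one-off perturbation. Individual runs change, but the distribution of final outcomes barely moves.

```python
# Toy illustration (assumed dynamics, purely to picture the argument):
# deliberation as a long random walk, the toaster/textbook as a one-off nudge.
import random
import statistics

def deliberate(nudge=0.0, steps=1_000, seed=None):
    rng = random.Random(seed)
    state = nudge  # one-off effect of using the toaster / reading the textbook
    for _ in range(steps):
        state += rng.gauss(0, 1)  # inherent stochasticity of deliberation
    return state

# With the same random seed, the nudge visibly shifts a *particular* outcome...
print(deliberate(nudge=0.1, seed=0) - deliberate(nudge=0.0, seed=0))  # ~0.1

# ...but the *distributions* of outcomes are essentially indistinguishable,
# since the per-run noise (stdev ~ sqrt(1000) ~ 32) swamps the 0.1 nudge.
with_nudge = [deliberate(nudge=0.1, seed=i) for i in range(2_000)]
without = [deliberate(nudge=0.0, seed=10_000 + i) for i in range(2_000)]
print(statistics.mean(with_nudge), statistics.stdev(with_nudge))
print(statistics.mean(without), statistics.stdev(without))
```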
One reply to this might be that we can hope to gather an amount of empirical evidence about meta-execution comparable to the evidence we have about toasters and textbooks. My concern there is that we’ll need much stronger assurances if we’re going to face other superintelligent AIs in our environment. For example, that textbook might contain subtle mistakes that cause you to reason incorrectly about certain questions (analogous to edge-case questions where your meta-execution would give significantly different answers than your reflective equilibrium), but there is no one in your current environment who can exploit such errors.
ETA: Another reason to be worried is that, compared to humans using things produced by other humans, it seems reasonable to suspect (have a high prior) that meta-execution’s long-run safety can’t be extrapolated well from what it does in the short term, since meta-execution is explicitly built out of a component that emphasizes imitation of short-term human behavior while throwing away internal changes that might be very relevant to long-run outcomes. (Again, this may be missing your point about not needing to reproduce values-upon-reflection, but I just don’t understand how your alternative approach to understanding deliberation would work if you tried to formalize it.)
Satisfying instances produced by an arbitrarily powerful adversary don’t seem safe for anything.
Not sure if this is still relevant to the current interpretation of your question, but couldn’t you use it to safely break encryption schemes, at least?
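For what it’s worth, the encryption case does fit the verification pattern above; here is a minimal sketch under toy assumptions (a hypothetical repeating-XOR cipher standing in for a real scheme, with key recovery delegated to an untrusted solver): whatever key the solver claims, you accept it only if it reproduces a known plaintext/ciphertext pair, so an adversarial solver can at worst fail the check.

```python
# Minimal sketch with a toy repeating-XOR cipher (my assumption; a real attack
# would encode the actual cipher as SAT). The point: a key claimed by an
# untrusted solver is accepted only if it explains data we already have.

def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

def verify_claimed_key(claimed_key: bytes, known_plaintext: bytes,
                       known_ciphertext: bytes) -> bool:
    """Accept the solver's answer only if it reproduces the observed ciphertext."""
    return xor_encrypt(claimed_key, known_plaintext) == known_ciphertext

secret_key = b"k3"
known_plaintext = b"attack at dawn"
known_ciphertext = xor_encrypt(secret_key, known_plaintext)

# A wrong key from an adversarial solver is simply rejected...
print(verify_claimed_key(b"xx", known_plaintext, known_ciphertext))  # False
# ...and a correct key is accepted, so *this use* of the untrusted answer
# is safe: the worst the solver can do is fail to give us a key at all.
print(verify_claimed_key(secret_key, known_plaintext, known_ciphertext))  # True
```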