I’m inclined to think that in most scenarios the first AGI wins anyway.
I was thinking of meeting alien AIs, post-Singularity.
And leaving decision theory for the AGI to solve could mean you get to build it earlier.
Huh? I thought we were supposed to be the good guys here? ;-)
But seriously, “sacrifice safety for speed” is the “defect” option in the game of “let’s build AGI”. I’m not sure how to get the C/C outcome (or rather C/C/C/...), but it seems too early to start talking about defecting already.
Besides, CDT is not well defined enough that you can implement it even if you wanted to. I think if you were forced to implement a “good enough” decision theory and hope for the best, you’d pick UDT at this point. (UDT is also missing a big chunk from its specifications, namely the “math intuition module”, but I think that problem has to be solved anyway. It’s hard to see how an AGI can get very far without being able to deal with logical/mathematical uncertainty.)
I was thinking of meeting alien AIs, post-Singularity.
What pre-singularity actions are you worried about them taking?
Huh? I thought we were supposed to be the good guys here? ;-)
What I was thinking was that a CDT-seeded AI might actually be safer precisely because it won’t try to change pre-Singularity events, and if it’s first the new decision theory will be in place in time for any post-Singularity events.
Besides, CDT is not well defined enough that you can implement it even if you wanted to.
That’s surprising to me—what should I read in order to understand this point better? EDIT: strike that, you answer that above.
What pre-singularity actions are you worried about them taking?
They could modify themselves so that if they ever encounter a CDT-descended AI they’ll start a war (even if it means mutual destruction) unless the CDT-descended AI gives them 99% of its resources.
They could modify themselves so that if they ever encounter a CDT-descended AI they’ll start a war (even if it means mutual destruction) unless the CDT-descended AI gives them 99% of its resources.
They could also modify themselves to make the analogous threat if they encounter a UDT-descended AI, or a descendant of an AI designed by Tim Freeman, or a descendant of an AI designed by Wei Dai, or a descendant of an AI designed using ideas mentioned on LessWrong. I would hope that any of those AIs would hand over 99% of their resources if the extortionist could prove its source code and prove that war would be worse. I assume you’re saying that CDT is special in this regard. How is it special?
(Thanks for the pointer to the James Joyce book, I’ll have a look at it.)
I assume you’re saying that CDT is special in this regard. How is it special?
If the alien AI computes the expected utility of “provably modify myself to start a war against CDT-AI unless it gives me 99% of its resources”, it’s certain to get a high value, whereas if it computes the expected utility of “provably modify myself to start a war against UDT-AI unless it gives me 99% of its resources” it might possibly get a low value (not sure because UDT isn’t fully specified), because the UDT-AI, when choosing what to do when faced with this kind of threat, would take into account the logical correlation between its decision and the alien AI’s prediction of its decision.
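A minimal sketch of that asymmetry in Python, assuming made-up payoffs and assuming the extortionist commits only when it predicts the victim will comply; none of these numbers or the prediction step are part of either decision theory’s formal statement:

```python
# Toy model of the extortion asymmetry. Payoffs are the victim's share of
# its own resources: keep everything = 1.0, comply with the threat = 0.01,
# mutual destruction = 0.0. All numbers are illustrative.

KEEP, COMPLY, WAR = 1.0, 0.01, 0.0

def cdt_victim(threat_committed):
    # CDT reasons causally *after* the commitment is a fixed fact:
    # 0.01 beats 0.0, so it hands over 99% of its resources.
    return COMPLY if threat_committed else KEEP

def udt_victim(threat_committed):
    # A UDT-style agent evaluates the whole policy, including how the
    # extortionist's prediction of that policy shapes whether the threat
    # gets made at all, so it refuses even at the cost of war.
    return WAR if threat_committed else KEEP

def extortionist_gain(victim):
    # The alien AI commits to the threat only if it predicts compliance.
    return (1.0 - COMPLY) if victim(True) == COMPLY else 0.0

print(extortionist_gain(cdt_victim))  # 0.99 -> threatening a CDT-AI pays
print(extortionist_gain(udt_victim))  # 0.0  -> threatening a UDT-AI does not
```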
...if it computes the expected utility of “provably modify myself to start a war against UDT-AI unless it gives me 99% of its resources” it might possibly get a low value (not sure because UDT isn’t fully specified), because the UDT-AI, when choosing what to do when faced with this kind of threat, would take into account the logical correlation between its decision and the alien AI’s prediction of its decision.
Well, that’s plausible. I’ll have to work through some UDT examples to understand fully.
What model do you have of how entity X can prove to entity Y that X is running specific source code?
The proof that I can imagine is this: entity Y gives some secure hardware Z to X, X allows Z to observe the process of X self-modifying to run the specified source code, and then X gives the secure hardware back to Y. Both X and Y can observe the creation of Z, so Y can know that it’s secure and X can know that it’s a passive observer rather than a bomb or something.
This model breaks the scenario, since a CDT agent playing the role of Y could self-modify at any time before it hands over Z and play the game competently.
Now, if there’s some way for X to create proofs of X’s source code that will be convincing to Y without giving advance notice to Y, I can imagine a problem for Y here. Does anyone know how to do that?
(I acknowledge that if nobody knows how to do that, that means we don’t know how to do that, not that it can’t be done.)
Hmm, this explains my aversion to knowing the details of what other people are thinking. It can put me at a disadvantage in negotiations unless I am able to lie convincingly and say I do not know.
I think I’ll stop here for now, because you already seem intrigued enough to want to learn about UDT in detail. I’m guessing that once you do, you won’t be so motivated to think up reasons why CDT isn’t really so bad. :) Let me know if that turns out not to be the case though.
What model do you have of how entity X can prove to entity Y that X is running specific source code?
On second thought, I should answer this question because it’s of independent interest. If Y is sufficiently powerful, it may be able to deduce the laws of physics and the initial conditions of the universe, and then obtain X’s source code by simulating the universe up to when X is created. Note that Y may do this not because it wants to know X’s source code in some anthropomorphic sense, but simply due to how its decision-making algorithm works.
If Y is sufficiently powerful, it may be able to deduce the laws of physics and the initial conditions of the universe, and then obtain X’s source code by simulating the universe up to when X is created.
Unless some specific assumptions have been made about the universe, that will not work. Simulating the entire universe does not tell Y which part of the universe it inhabits; it gives Y a set of possible parts of the universe that match Y’s observations. While the simulation strategy allows the best possible prediction of X’s source code given what Y already knows, it does not give Y evidence that it didn’t already have.
You’re right, the model assumes that we live in a universe such that superintelligent AIs would “naturally” have enough evidence to infer the source code of other AIs. (That seems quite plausible, although by no means certain, to me.) Also, since this is a thread about the relative merits of CDT, I should point out that there are some games in which CDT seems to win relative to TDT or UDT, which is a puzzle that is still open.
Also, since this is a thread about the relative merits of CDT, I should point out that there are some games in which CDT seems to win relative to TDT or UDT, which is a puzzle that is still open.
It’s an interesting problem, but my impression when reading was somewhat similar to that of Eliezer in the replies. At the core it is the question of “How do you deal with constructs made by other agents?” I don’t think TDT has any particular weakness there.
If Y is sufficiently powerful, it may be able to deduce the laws of physics and the initial conditions of the universe, and then obtain X’s source code by simulating the universe up to when X is created.
Quantum mechanics seems to be pretty clear that true random number generators are available, and probably happen naturally. I don’t understand why you consider that scenario probable enough to be worth talking about.
It’s hard to see how an AGI can get very far without being able to deal with logical/mathematical uncertainty.
Do you have an intuition as to how it would do this without contradicting itself? I tried to ask a similar question but got it wrong in the first draft and afaict did not receive an answer to the relevant part.
I just want to know if my own intuition fails in the obvious way.
Besides, CDT is not well defined enough that you can implement it even if you wanted to. I think if you were forced to implement a “good enough” decision theory and hope for the best, you’d pick UDT at this point.
Really? That’s surprising. My assumption had been that CDT would be much simpler to implement—but just give undesirable outcomes in whole classes of circumstance.
CDT uses a “causal probability function” to evaluate the expected utilities of various choices, where this causal probability function is different from the epistemic probability function you use to update beliefs. (In EDT they are one and the same.) There is no agreement amongst CDT theorists how to formulate this function, and I’m not aware of any specific proposal that can be straightforwardly implemented. For more details see James Joyce’s The Foundations of Causal Decision Theory.
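A minimal sketch of the gap being pointed at, using Newcomb’s problem with invented numbers; here the “causal probability” is just a fixed parameter the current action cannot move, which is exactly the part that lacks an agreed formalization:

```python
# EDT conditions one probability function on the action; CDT needs a separate
# "causal" probability for the already-fixed box contents. Numbers invented.

ACC = 0.99           # assumed predictor accuracy
BIG, SMALL = 1_000_000, 1_000

def edt_value(action):
    # Evidential: choosing one-box is evidence the opaque box is full.
    p_full = ACC if action == "one-box" else 1 - ACC
    return (SMALL if action == "two-box" else 0) + p_full * BIG

def cdt_value(action, p_full):
    # Causal: p_full is whatever the agent's causal model says, and the
    # current action cannot change it; the action only adds SMALL or not.
    return (SMALL if action == "two-box" else 0) + p_full * BIG

print(max(["one-box", "two-box"], key=edt_value))                    # one-box
print(max(["one-box", "two-box"], key=lambda a: cdt_value(a, 0.5)))  # two-box
```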
There is no agreement amongst CDT theorists how to formulate this function, and I’m not aware of any specific proposal that can be straightforwardly implemented.
I understand AIXI reasonably well and had assumed it was a specific implementation of CDT, perhaps with some tweaks so the reward values are generated internally instead of being observed in the environment. Perhaps AIXI isn’t close to an implementation of CDT, perhaps it’s perceived as not specific or straightforward enough, or perhaps it’s not counted as an implementation. Why isn’t AIXI a counterexample?
You may be right that AIXI can be thought of as an instance of CDT. Hutter himself cites “sequential decision theory” from a 1957 paper which certainly predates CDT, but CDT is general enough that SDT could probably fit into its formalism. (Like EDT can be considered an instance of CDT with the causal probability function set to be the same as the epistemic probability function.) I guess I hadn’t considered AIXI as a serious candidate due to its other major problems.
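For reference, AIXI’s decision rule is (roughly, in Hutter’s notation) an expectimax over a universal mixture of environment programs q, which is why it can be read as a member of the expected-utility-maximizing family that SDT/CDT belong to:

```latex
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \big(r_k + \cdots + r_m\big)
  \sum_{q \,:\, U(q,\,a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```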
Four problems are listed there. The first one is the claim that AIXI wouldn’t have a proper understanding of its body because its thoughts are defined mathematically. This is just wrong, IMO; my refutation, for a machine that’s similar enough to AIXI for this issue to work the same, is here. Nobody has engaged me in serious conversation about that, so I don’t know how well it will stand up. (If I’m right on this, then I’ve seen Eliezer, Tim Tyler, and you make the same error. What other false consensuses do we have?)
The second one is fixed if we do the tweak I mentioned in the grandparent of this comment.
If you take the fix described above for the second one, what’s left of the third one is the claim that instantaneous human (or AI) experience is too nuanced to fit in a single cell of a Turing machine. According to the original paper, page 8, the symbols on the reward tape are drawn from an alphabet R of arbitrary but fixed size. All you need is a very large alphabet and this one goes away.
I agree with the facts asserted in Tyler’s fourth problem, but I do not agree that it is a problem. He’s saying that Kolmogorov complexity is ill-defined because the programming language used is undefined. I agree that rational agents might disagree on priors because they’re using different programming languages to represent their explanations. In general, a problem may have multiple solutions. Practical solutions to the problems we’re faced with will require making indefensible arbitrary choices of one potential solution over another. Picking the programming language for priors is going to be one of those choices.
The first one is the claim that AIXI wouldn’t have a proper understanding of its body because its thoughts are defined mathematically. This is just wrong, IMO; my refutation, for a machine that’s similar enough to AIXI for this issue to work the same, is here.
I don’t see how your refutation applies to AIXI. Let me just try to explain in detail why I think AIXI will not properly protect its body. Consider an AIXI that arises in a simple universe, i.e., one computed by a short program P. AIXI has a probability distribution not over universes, but instead over environments, where an environment is a TM whose output tape is AIXI’s input tape and whose input tape is AIXI’s output tape. What’s the simplest environment that fits AIXI’s past inputs/outputs? Presumably it’s E = P plus some additional code that injects E’s inputs into where AIXI’s physical output ports are located in the universe (that is, overrides the universe’s natural evolution using E’s inputs), and extracts E’s outputs from where AIXI’s physical input ports are located.
What happens when AIXI considers an action that destroys its physical body in the universe computed by P? As long as the input/output ports are not also destroyed, AIXI would expect that the environment E (with its “supernatural” injection/extraction code) will continue to receive its outputs and provide it with inputs.
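A toy sketch of the environment structure being described; everything here (the state dictionary, the port names, step_universe standing in for P) is hypothetical and only meant to show where the injection/extraction code sits:

```python
# An AIXI "environment" E is not the universe program P itself but P plus
# glue code: it overwrites the cells where AIXI's physical output ports sit
# with whatever is on AIXI's output tape, and reads AIXI's next percept from
# the cells where its input ports sit. Destroying the body elsewhere in the
# state leaves this splice intact, which is the point made above.

OUTPUT_PORT, INPUT_PORT = "agent_output_cells", "agent_input_cells"

def step_universe(state):
    """Stand-in for the short program P computing one step of the universe."""
    return dict(state)  # placeholder physics: nothing changes

def environment_E(state, aixi_output):
    state = dict(state, **{OUTPUT_PORT: aixi_output})  # inject E's input
    state = step_universe(state)
    percept = state.get(INPUT_PORT)                    # extract E's output
    return state, percept
```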
Consider an AIXI that arises in a simple universe, i.e., one computed by a short program P.
An implementation of AIXI would be fairly complex. If P is too simple, then AIXI could not really have a body in the universe, so it would be correct in guessing that some irregularity in the laws of physics was causing its behaviors to be spliced into the behavior of the world.
However, if AIXI has observed enough of the inner workings of other similar machines, or enough of the laws of physics in general, or enough of its own inner workings, the simplest model will be that AIXI’s outputs really do emerge from the laws of physics in the real universe, since we are assuming that that is indeed the case and that Kolmogorov induction eventually works. At that point, imagining that AIXI’s behaviors are a consequence of a bunch of exceptions to the laws of physics is just extra complexity and won’t be part of the simplest hypothesis. It will be part of some less likely hypotheses, and the AI would have to take that risk into account when deciding whether to self-improve.
Tim, I think you’re probably not getting my point about the distinction between our concept of a computable universe, and AIXI’s formal concept of a computable environment. AIXI requires that the environment be a TM whose inputs match AIXI’s past outputs and whose outputs match AIXI’s past inputs. A candidate environment must have the additional code to inject/extract those inputs/outputs and place them on the input/output tapes, or AIXI will exclude it from its expected utility calculations.
The candidate environment must have the additional code to inject/extract those inputs/outputs and place them on the input/output tapes, or AIXI will exclude it from its expected utility calculations.
I agree that the candidate environment will need to have code to handle the inputs. However, if the candidate environment can compute the outputs on its own, without needing to be given the AI’s outputs, the candidate environment does not need code to inject the AI’s outputs into it.
Even if the AI can only partially predict its own behavior based on the behavior of the hardware it observes in the world, it can use that information to more efficiently encode its outputs in the candidate environment, so it can have some understanding of its position in the world even without being able to perfectly predict its own behavior from first principles.
If the AI manages to destroy itself, it will expect its outputs to be disconnected from the world and have no consequences, since anything else would violate its expectations about the laws of physics.
This back-and-forth appears to be useless. I should probably do some Python experiments and then we can change this from a debate to a programming problem, which would be much more pleasant.
However, if the candidate environment can compute the outputs on its own, without needing to be given the AI’s outputs, the candidate environment does not need code to inject the AI’s outputs into it.
If a candidate environment has no special code to inject AIXI’s outputs, then when AIXI computes expected utilities, it will find that all actions have equal utility in that environment, so that environment will play no role in its decisions.
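A sketch of why such an environment drops out of the comparison; the uniform reward stream and the weights below are invented for illustration:

```python
# An environment that never reads AIXI's output tape returns the same reward
# stream for every action sequence, so it adds the same constant to each
# action's expected utility and cannot affect the argmax.

def env_ignoring_actions(actions):
    return [1.0, 0.5, 2.0]          # identical rewards whatever AIXI does

def expected_utility(actions, weighted_envs):
    return sum(w * sum(env(actions)) for env, w in weighted_envs)

weighted_envs = [(env_ignoring_actions, 0.7)]
print(expected_utility(["a", "a"], weighted_envs)
      == expected_utility(["b", "b"], weighted_envs))  # True
```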
I should probably do some Python experiments and then we can change this from a debate to a programming problem, which would be much more pleasant.
Ok, but try not to destroy the world while you’re at it. :) Also, please take a closer look at UDT first. Again, I think there’s a strong possibility that you’ll end up thinking “why did I waste my time defending CDT/AIXI?”
FYI, generating reward values internally—instead of them being observed in the environment—makes no difference whatsoever to the wirehead problem.
AIXI digging into its brains with its own mining claws is quite plausible. It won’t reason as you suggest—since it has no idea that it is instantiated in the real world. So, its exploratory mining claws may plunge in. Hopefully it will get suitably negatively reinforced for that—though much will depend on which part of its brain it causes damage to. It could find that ripping out its own inhibition circuits is very rewarding.
A larger set of symbols for rewards makes no difference—since the reward signal is a scalar. Compare with an animal, which has millions of pain sensors that operate in parallel. The animal is onto something there—something to do with a priori knowledge about the common causes of pain. Having lots of pain sensors has positive aspects—e.g. it saves you experimenting to figure out what hurts.
As for the reference machine issue, I do say: “This problem is also not very serious.”
Not very serious unless you are making claims about your agent being “the most intelligent unbiased agent possible”. Then this kind of thing starts to make a difference...
A larger set of symbols for rewards makes no difference—since the reward signal is a scalar. Compare with an animal, which has millions of pain sensors that operate in parallel. The animal is onto something there—something to do with a priori knowledge about the common causes of pain. Having lots of pain sensors has positive aspects—e.g. it saves you experimenting to figure out what hurts.
You can encode 16 64 bit integers in a 1024 bit integer. The scalar/parallel distinction is bogus.
(Edit: I originally wrote “5 32 bit integers” when I meant “2**5 32 bit integers”. Changed to “16 64 bit integers” because “32 32 bit integers” looked too much like a typo.)
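The packing arithmetic itself is straightforward (whether a packed scalar actually helps AIXI is the point disputed below); a quick sketch:

```python
# Pack 16 64-bit non-negative integers into one 1024-bit integer and back.

def pack(values, width=64):
    n = 0
    for v in values:
        n = (n << width) | (v & ((1 << width) - 1))
    return n

def unpack(n, count=16, width=64):
    mask = (1 << width) - 1
    return [(n >> (width * (count - 1 - i))) & mask for i in range(count)]

vals = list(range(16))
assert unpack(pack(vals)) == vals
assert pack(vals).bit_length() <= 16 * 64  # fits in 1024 bits
```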
Not very serious unless you are making claims about your agent being “the most intelligent unbiased agent possible”. Then this kind of thing starts to make a difference...
Strawman argument. The only claim made is that it’s the most intelligent up to a constant factor, and a bunch of other conditions are thrown in. When Hutter’s involved, you can bet that some of the constant factors are large compared to the size of the universe.
You can encode 5 32 bit integers in a 1024 bit integer. The scalar/parallel distinction is bogus.
Er, not if you are adding the rewards together and maximising the results, you can’t! That is exactly what happens to the rewards used by AIXI.
Not very serious unless you are making claims about your agent being “the most intelligent unbiased agent possible”. Then this kind of thing starts to make a difference...
Strawman argument. The only claim made is that it’s the most intelligent up to a constant factor, and a bunch of other conditions are thrown in.
Actually Hutter says this sort of thing all over the place (I was quoting him above), and it seems pretty irritating and misleading to me. I’m not saying the claims he makes in the fine print are wrong, but rather that the marketing headlines are misleading.
You can encode 5 32 bit integers in a 1024 bit integer. The scalar/parallel distinction is bogus.
Er, not if you are adding the rewards together and maximising the results, you can’t! That is exactly what happens to the rewards used by AIXI.
You’re right there; I’m confusing AIXI with another design I’ve been working with in a similar idiom. For AIXI to work, you have to combine all the environmental stuff and compute a utility, make the code for doing the combining part of the environment (not the AI), and then use that resulting utility as the input to AIXI.
For more details see James Joyce’s The Foundations of Causal Decision Theory.
Thank you for the reference, and the explanation.
I am prompted to ask myself a question analogous to the one Eliezer recently asked:
Doesn’t have a name as far as I know. But I’m not sure it deserves one; would CDT really be a probable output anywhere besides a verbal theory advocated by human philosophers in our own Everett branch? Maybe, now that I think about it, but even so, does it matter?
Is it worth my while exploring the details of CDT formalization beyond just the page you linked to? There seems to be some advantage to understanding the details and conventions of how such concepts are described. At the same time, revising CDT thinking in too much detail may eliminate some entirely justifiable confusion as to why anyone would think it is a good idea! “Causal Expected Utility”? “Causal Tendencies”? What the? I only care about what will get me the best outcome!
Is it worth my while exploring the details of CDT formalization beyond just the page you linked to?
Probably not. I only learned it by accident myself. I had come up with a proto-UDT that was motivated purely by anthropic reasoning paradoxes (as opposed to Newcomb-type problems like CDT and TDT), and wanted to learn how existing decision theories were formalized so I could do something similar. James Joyce’s book was the most prominent such book available at the time.
ETA: Sorry, I think the above is probably not entirely clear or helpful. It’s a bit hard for me to put myself in your position and try to figure out what may or may not be worthwhile for you. The fact is that Joyce’s book is the decision theory book I read, and quite possibly it influenced me more than I realize, or is more useful for understanding the motivation for or the formulation of UDT than I think. It couldn’t hurt to grab a copy of it and read a few chapters to see how useful it is to you.
Thanks for the edit/update. For reference it may be worthwhile to make such additions as a new comment, either as a reply to yourself or the parent. It was only by chance that I spotted the new part!