So… I’m sorry if this reply seems a little unhelpful, and I wish there was some way to engage more strongly, but...
Point (1) is the main problem. AIXI updates freely over a gigantic range of sensory predictors with no specified ontology—it’s a sum over a huge set of programs, and we, the users, have no idea what the representations are talking about, except that at the end of their computations they predict, “You will see a sensory 1 (or a sensory 0).” (In my preferred formalism, the program puts a probability on a 0 instead.) Inside, the program could’ve been modeling the universe in terms of atoms, quarks, quantum fields, cellular automata, giant moving paperclips, slave agents scurrying around… we, the programmers, have no idea how AIXI is modeling the world and producing its predictions, and indeed, the final prediction could be a sum over many different representations.
This means that equation (20) in Hutter is written as a utility function over sense data, where the reward channel is just a special case of sense data. We can easily adapt this equation to talk about any function computed directly over sense data—we can get AIXI to optimize any aspect of its sense data that we please. We can’t get it to optimize a quality of the external universe. One of the challenges I listed in my FAI Open Problems talk, and one of the problems I intend to talk about in my FAI Open Problems sequence, is to take the first nontrivial steps toward adapting this formalism—to e.g. take an equivalent of AIXI in a really simple universe, with a really simple goal, something along the lines of a Life universe and a goal of making gliders, and specify something given unlimited computing power which would behave like it had that goal, without pre-fixing the ontology of the causal representation to that of the real universe, i.e., you want something that can range freely over ontologies in its predictive algorithms, but which still behaves like it’s maximizing an outside thing like gliders instead of a sensory channel like the reward channel. This is an unsolved problem!
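(For readers without Hutter's paper at hand: the following is a rough paraphrase of the AIXI-style expectimax in my own notation, not a transcription of equation (20), but it shows where the limitation lives. The agent picks its action by

    a_k = \arg\max_{a_k} \sum_{x_k} \cdots \max_{a_m} \sum_{x_m} \bigl[ r(x_k) + \cdots + r(x_m) \bigr] \, \xi(x_{k:m} \mid x_{<k}, a_{1:m})

where \xi is the Solomonoff-style mixture over environment programs and each reward r(x_j) is, by construction, a function of the percept x_j alone. Swapping the bracketed sum for some other computable function of x_{k:m} changes which aspect of the sense data gets optimized, but nothing in the formula ever refers to the environment's internal state, which is exactly the limitation described above.)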
We haven’t even got to the part where it’s difficult to say in formal terms how to interpret what a human says s/he wants the AI to plan, and where failures of phrasing of that utility function can also cause a superhuman intelligence to kill you. We haven’t even got to the huge buried FAI problem inside the word “optimal” in point (1), which is the really difficult part in the whole thing. Because so far we’re dealing with a formalism that can’t even represent a purpose of the type you’re looking for—it can only optimize over sense data, and this is not a coincidental fact, but rather a deep problem which the AIXI formalism deliberately avoided.
(2) sounds like you think an AI with an alien, superhuman planning algorithm can tell humans what to do without ever thinking consequentialistically about which different statements will result in human understanding or misunderstanding. Anna says that I need to work harder on not assuming other people are thinking silly things, but even so, when I look at this, it’s hard not to imagine that you’re modeling AIXI as a sort of spirit containing thoughts, whose thoughts could be exposed to the outside with a simple exposure-function. It’s not unthinkable that a non-self-modifying superhuman planning Oracle could be developed with the further constraint that its thoughts are human-interpretable, or can be translated for human use without any algorithms that reason internally about what humans understand, but this would at the least be hard. And with AIXI it would be impossible, because AIXI’s model of the world ranges over literally all possible ontologies and representations, and its plans are naked motor outputs.
Similar remarks apply to interpreting and answering “What will be its effect on _?” It turns out that getting an AI to understand human language is a very hard problem, and it may very well be that even though talking doesn’t feel like having a utility function, our brains are using consequential reasoning to do it. Certainly, when I write language, that feels like I’m being deliberate. It’s also worth noting that “What is the effect on X?” really means “What are the effects I care about on X?” and that there’s a large understanding-the-human’s-utility-function problem here. In particular, you don’t want your language for describing “effects” to partition, as the same state of described affairs, any two states which humans assign widely different utilities. Let’s say there are two plans for getting my grandmother out of a burning house, one of which destroys her music collection, one of which leaves it intact. Does the AI know that music is valuable? If not, will it not describe music-destruction as an “effect” of a plan which offers to free up large amounts of computer storage by, as it turns out, overwriting everyone’s music collection? If you then say that the AI should describe changes to files in general, well, should it also talk about changes to its own internal files? Every action comes with a huge number of consequences—if we hear about all of them (reality described on a level so granular that it automatically captures all utility shifts, as well as a huge number of other unimportant things) then we’ll be there forever.
I wish I had something more cooperative to say in reply—it feels like I’m committing some variant of logical rudeness by this reply—but the truth is, it seems to me that AIXI isn’t a good basis for the agent you want to describe; and I don’t know how to describe it formally myself, either.
Thanks for the response. To clarify, I’m not trying to point to the AIXI framework as a promising path; I’m trying to take advantage of the unusually high degree of formalization here in order to gain clarity on the feasibility and potential danger points of the “tool AI” approach.
It sounds to me like your two major issues with the framework I presented are (to summarize):
(1) There is a sense in which AIXI predictions must be reducible to predictions about the limited set of inputs it can “observe directly” (what you call its “sense data”).
(2) Computers model the world in ways that can be unrecognizable to humans; it may be difficult to create interfaces that allow humans to understand the implicit assumptions and predictions in their models.
I don’t claim that these problems are trivial to deal with. And stated as you state them, they sound abstractly very difficult to deal with. However, it seems true—and worth noting—that “normal” software development has repeatedly dealt with them successfully. For example: Google Maps works with a limited set of inputs; Google Maps does not “think” like I do and I would not be able to look at a dump of its calculations and have any real sense for what it is doing; yet Google Maps does make intelligent predictions about the external universe (e.g., “following direction set X will get you from point A to point B in reasonable time”), and it also provides an interface (the “route map”) that helps me understand its predictions and the implicit reasoning (e.g. “how, why, and with what other consequences direction set X will get me from point A to point B”).
Difficult though it may be to overcome these challenges, my impression is that software developers have consistently—and successfully—chosen to take them on, building algorithms that can be “understood” via interfaces and iterated over—rather than trying to prove the safety and usefulness of their algorithms with pure theory before ever running them. Not only does the former method seem “safer” (in the sense that it is less likely to lead to putting software in production before its safety and usefulness have been established) but it seems a faster path to development as well.
It seems that you see a fundamental disconnect between how software development has traditionally worked and how it will have to work in order to result in AGI. But I don’t understand your view of this disconnect well enough to see why it would lead to a discontinuation of the phenomenon I describe above. In short, traditional software development seems to have an easier (and faster and safer) time overcoming the challenges of the “tool” framework than overcoming the challenges of up-front theoretical proofs of safety/usefulness; why should we expect this to reverse in the case of AGI?
So first a quick note: I wasn’t trying to say that the difficulties of AIXI are universal and everything goes analogously to AIXI, I was just stating why AIXI couldn’t represent the suggestion you were trying to make. The general lesson to be learned is not that everything else works like AIXI, but that you need to look a lot harder at an equation before thinking that it does what you want.
On a procedural level, I worry a bit that the discussion is trying to proceed by analogy to Google Maps. Let it first be noted that Google Maps simply is not playing in the same league as, say, the human brain, in terms of complexity; and that if we were to look at the winning “algorithm” of the million-dollar Netflix Prize competition, which was in fact a blend of 107 different algorithms, you would have a considerably harder time figuring out why it claimed anything it claimed.
But to return to the meta-point, I worry about conversations that go into “But X is like Y, which does Z, so X should do reinterpreted-Z”. Usually, in my experience, that goes into what I call “reference class tennis” or “I’m taking my reference class and going home”. The trouble is that there’s an unlimited number of possible analogies and reference classes, and everyone has a different one. I was just browsing old LW posts today (to find a URL of a quick summary of why group-selection arguments don’t work in mammals) and ran across a quotation from Perry Metzger to the effect that so long as the laws of physics apply, there will always be evolution, hence nature red in tooth and claw will continue into the future—to him, the obvious analogy for the advent of AI was “nature red in tooth and claw”, and people who see things this way tend to want to cling to that analogy even if you delve into some basic evolutionary biology with math to show how much it isn’t like intelligent design. For Robin Hanson, the one true analogy is to the industrial revolution and farming revolutions, meaning that there will be lots of AIs in a highly competitive economic situation with standards of living tending toward the bare minimum, and this is so absolutely inevitable and consonant with The Way Things Should Be as to not be worth fighting at all. That’s his one true analogy and I’ve never been able to persuade him otherwise. For Kurzweil, the fact that many different things proceed at a Moore’s Law rate to the benefit of humanity means that all these things are destined to continue and converge into the future, also to the benefit of humanity. For him, “things that go by Moore’s Law” is his favorite reference class.
I can have a back-and-forth conversation with Nick Bostrom, who looks much more favorably on Oracle AI in general than I do, because we’re not playing reference class tennis with “But surely that will be just like all the previous X-in-my-favorite-reference-class”, nor saying, “But surely this is the inevitable trend of technology”; instead we lay out particular, “Suppose we do this?” and try to discuss how it will work, not with any added language about how surely anyone will do it that way, or how it’s got to be like Z because all previous Y were like Z, etcetera.
My own FAI development plans call for trying to maintain programmer-understandability of some parts of the AI during development. I expect this to be a huge headache, possibly 30% of total headache, possibly the critical point on which my plans fail, because it doesn’t happen naturally. Go look at the source code of the human brain and try to figure out what a gene does. Go ask the Netflix Prize winner for a movie recommendation and try to figure out “why” it thinks you’ll like watching it. Go train a neural network and then ask why it classified something as positive or negative. Try to keep track of all the memory allocations inside your operating system—that part is humanly understandable, but it flies past so fast you can only monitor a tiny fraction of what goes on, and if you want to look at just the most “significant” parts, you would need an automated algorithm to tell you what’s significant. Most AI algorithms are not humanly understandable. Part of Bayesianism’s appeal in AI is that Bayesian programs tend to be more understandable than non-Bayesian AI algorithms. I have hopeful plans to try and constrain early FAI content to humanly comprehensible ontologies, prefer algorithms with humanly comprehensible reasons-for-outputs, carefully weigh up which parts of the AI can safely be less comprehensible, monitor significant events, slow down the AI so that this monitoring can occur, and so on. That’s all Friendly AI stuff, and I’m talking about it because I’m an FAI guy. I don’t think I’ve ever heard any other AGI project express such plans; and in mainstream AI, human-comprehensibility is considered a nice feature, but rarely a necessary one.
It should finally be noted that AI famously does not result from generalizing normal software development. If you start with a map-route program and then try to program it to plan more and more things until it becomes an AI… you’re doomed, and all the experienced people know you’re doomed. I think there’s an entry or two in the old Jargon File aka Hacker’s Dictionary to this effect. There’s a qualitative jump to writing a different sort of software—from normal programming where you create a program conjugate to the problem you’re trying to solve, to AI where you try to solve cognitive-science problems so the AI can solve the object-level problem. I’ve personally met a programmer or two who’ve generalized their code in interesting ways, and who feel like they ought to be able to generalize it even further until it becomes intelligent. This is a famous illusion among aspiring young brilliant hackers who haven’t studied AI. Machine learning is a separate discipline and involves algorithms and problems that look quite different from “normal” programming.
Thanks for the response. My thoughts at this point are that
We seem to have differing views of how to best do what you call “reference class tennis” and how useful it can be. I’ll probably be writing about my views more in the future.
I find it plausible that AGI will have to follow a substantially different approach from “normal” software. But I’m not clear on the specifics of what SI believes those differences will be and why they point to the “proving safety/usefulness before running” approach over the “tool” approach.
We seem to have differing views of how frequently today’s software can be made comprehensible via interfaces. For example, my intuition is that the people who worked on the Netflix Prize algorithm had good interfaces for understanding “why” it recommends what it does, and used these to refine it. I may further investigate this matter (casually, not as a high priority); on SI’s end, it might be helpful (from my perspective) to provide detailed examples of existing algorithms for which the “tool” approach to development didn’t work and something closer to “proving safety/usefulness up front” was necessary.
Canonical software development examples emphasizing “proving safety/usefulness before running” over the “tool” software development approach are cryptographic libraries and NASA space shuttle navigation.
At the time of writing this comment, there was recent furor over software called CryptoCat that didn’t provide enough warnings that it was not properly vetted by cryptographers and thus should have been assumed to be inherently insecure. Conventional wisdom and repeated warnings from the security community state that cryptography is extremely difficult to do properly and that attempting to create your own may have catastrophic results. A similar thought and development process goes into space shuttle code.
It seems that the FAI approach to “proving safety/usefulness” is more similar to the way cryptographic algorithms are developed than the (seemingly) much faster “tool” approach, which is more akin to web development where the stakes aren’t quite as high.
EDIT: I believe the “prove” approach still allows one to run snippets of code in isolation, but tends to shy away from running everything end-to-end until significant effort has gone into individual component testing.
The analogy with cryptography is an interesting one, because...
In cryptography, even after you’ve proven that a given encryption scheme is secure, and that proof has been centuply (100 times) checked by different researchers at different institutions, it might still end up being insecure, for many reasons.
Examples of reasons include:
The proof assumed mathematical integers/reals, of which computer integers/floating point numbers are just an approximation.
The proof assumed that the hardware the algorithm would be running on was reliable (e.g. a reliable source of randomness).
The proof assumed operations were mathematical abstractions existing outside of time, and thus neglected side-channel attacks, which measure how long a physical, real-world CPU took to execute the algorithm in order to make inferences about what the algorithm did (and thus recover the private keys).
The proof assumed the machine executing the algorithm was idealized in various ways, when in fact a CPU emits heat and other electromagnetic waves, which can be detected and from which inferences can be drawn, etc.
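To make the third and fourth items concrete, here is a minimal, purely illustrative sketch (not an attack on any real library) of how a secret can leak through timing even when the comparison logic is mathematically correct:

    import hmac
    import time

    SECRET = b"hunter2hunter2hu"

    def naive_equal(a, b):
        # Returns as soon as a byte differs, so the running time depends on how
        # many leading bytes of the guess are correct, even though the function
        # is mathematically a correct equality test.
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False
        return True

    def time_guess(guess, trials=200_000):
        start = time.perf_counter()
        for _ in range(trials):
            naive_equal(SECRET, guess)
        return time.perf_counter() - start

    # A guess sharing a longer correct prefix takes measurably longer to reject.
    print(time_guess(b"XXXXXXXXXXXXXXXX"))   # 0 correct leading bytes
    print(time_guess(b"hunter2XXXXXXXXX"))   # 7 correct leading bytes

    # The usual fix: a comparison whose running time does not depend on where
    # the first mismatch occurs.
    print(hmac.compare_digest(SECRET, b"hunter2XXXXXXXXX"))

The mathematics of naive_equal is impeccable; what a proof about the abstract algorithm never mentions is that the physical machine takes longer to reject a guess with a longer correct prefix, and that difference can be measured.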
I can have a back-and-forth conversation with Nick Bostrom, who looks much more favorably on Oracle AI in general than I do, because we’re not playing reference class tennis with “But surely that will be just like all the previous X-in-my-favorite-reference-class”, nor saying, “But surely this is the inevitable trend of technology”; instead we lay out particular, “Suppose we do this?” and try to discuss how it will work, not with any added language about how surely anyone will do it that way, or how it’s got to be like Z because all previous Y were like Z, etcetera.
That’s one way to “win” a game of reference class tennis. Declare unilaterally that what you are discussing falls into the reference class “things that are most effectively reasoned about by discussing low level details and abandoning or ignoring all observed evidence about how things with various kinds of similarity have worked in the past”. Sure, it may lead to terrible predictions sometimes but by golly, it means you can score an ‘ace’ in the reference class tennis while pretending you are not even playing!
And atheism is a religion, and bald is a hair color.
The three distinguishing characteristics of “reference class tennis” are (1) that there are many possible reference classes you could pick and everyone engaging in the tennis game has their own favorite which is different from everyone else’s; (2) that the actual thing is obviously more dissimilar to all the cited previous elements of the so-called reference class than all those elements are similar to each other (if they even form a natural category at all rather than having been picked out retrospectively based on similarity of outcome to the preferred conclusion); and (3) that the citer of the reference class says it with a cognitive-traffic-signal quality which attempts to shut down any attempt to counterargue the analogy because “it always happens like that” or because we have so many alleged “examples” of the “same outcome” occurring (for Hansonian rationalists this is accompanied by a claim that what you are doing is the “outside view” (see points 2 and 1 for why it’s not) and that it would be bad rationality to think about the “individual details”).
I have also termed this Argument by Greek Analogy after Socrates’s attempt to argue that, since the Sun appears the next day after setting, souls must be immortal.
I have also termed this Argument by Greek Analogy after Socrates’s attempt to argue that, since the Sun appears the next day after setting, souls must be immortal.
For the curious, this is from the Phaedo, pages 70-72. The argument runs basically thus:
P1 Natural changes are changes from and to opposites, like hot from relatively cold, etc.
P2 Since every change is between opposites A and B, there are two logically possible processes of change, namely A to B and B to A.
P3 If only one of the two processes were physically possible, then we should expect to see only one of the two opposites in nature, since the other will have passed away irretrievably.
P4 Life and death are opposites.
P5 We have experience of the process of death.
P6 We have experience of things which are alive.
C From P3, 4, 5, and 6 there is a physically possible, and actual, process of going from death to life.
The argument doesn’t itself prove (haha) the immortality of the soul, only that living things come from dead things. The argument is made in support of the claim, made prior to this argument, that if living people come from dead people, then dead people must exist somewhere. The argument is particularly interesting for premises 1 and 2, which are hard to deny, and 3, which seems fallacious but for non-obvious reasons.
This sounds like it might be a bit of a reverent-Western-scholar steelman such as might be taught in modern philosophy classes; Plato’s original argument for the immortality of the soul sounded more like this, which is why I use it as an early exemplar of reference class tennis:
-
Then let us consider the whole question, not in relation to man only, but in relation to animals generally, and to plants, and to everything of which there is generation, and the proof will be easier. Are not all things which have opposites generated out of their opposites? I mean such things as good and evil, just and unjust—and there are innumerable other opposites which are generated out of opposites. And I want to show that in all opposites there is of necessity a similar alternation; I mean to say, for example, that anything which becomes greater must become greater after being less.
True.
And that which becomes less must have been once greater and then have become less.
Yes.
And the weaker is generated from the stronger, and the swifter from the slower.
Very true.
And the worse is from the better, and the more just is from the more unjust.
Of course.
And is this true of all opposites? and are we convinced that all of them are generated out of opposites?
Yes.
And in this universal opposition of all things, are there not also two intermediate processes which are ever going on, from one to the other opposite, and back again; where there is a greater and a less there is also an intermediate process of increase and diminution, and that which grows is said to wax, and that which decays to wane?
Yes, he said.
And there are many other processes, such as division and composition, cooling and heating, which equally involve a passage into and out of one another. And this necessarily holds of all opposites, even though not always expressed in words—they are really generated out of one another, and there is a passing or process from one to the other of them?
Very true, he replied.
Well, and is there not an opposite of life, as sleep is the opposite of waking?
True, he said.
And what is it?
Death, he answered.
And these, if they are opposites, are generated the one from the other, and have there their two intermediate processes also?
Of course.
Now, said Socrates, I will analyze one of the two pairs of opposites which I have mentioned to you, and also its intermediate processes, and you shall analyze the other to me. One of them I term sleep, the other waking. The state of sleep is opposed to the state of waking, and out of sleeping waking is generated, and out of waking, sleeping; and the process of generation is in the one case falling asleep, and in the other waking up. Do you agree?
I entirely agree.
Then, suppose that you analyze life and death to me in the same manner. Is not death opposed to life?
Yes.
And they are generated one from the other?
Yes.
What is generated from the living?
The dead.
And what from the dead?
I can only say in answer—the living.
Then the living, whether things or persons, Cebes, are generated from the dead?
That is clear, he replied.
Then the inference is that our souls exist in the world below?
This sounds like it might be a bit of a reverent-Western-scholar steelman such as might be taught in modern philosophy classes
That was roughly my aim, but I don’t think I inserted any premises that weren’t there. Did you have a complaint about the accuracy of my paraphrase? The really implausible premise there, namely that death is the opposite of life, is preserved I think.
As for reverence, why not? He was, after all, the very first person in our historical record to suggest that thinking better might make you happier. He was also an intellectualist about morality, at least sometimes a hedonic utilitarian, and held no great respect for logic. And he was a skilled myth-maker. He sounds like a man after your own heart, actually.
Esar’s summary doesn’t seem to be different from this, other than that 1) it adds the useful bit about “passed away irretrievably” and 2) yours makes it clear that the logical jump happens right at the end.
I’m actually not sure now why you consider this to be “reference class tennis”. The argument looks fine, except for the part where “souls exist in the world below” jumps in as a conclusion, not having been mentioned earlier in the argument.
The ‘souls exist in the world below’ bit is directly before what Eliezer quoted:
Suppose we consider the question whether the souls of men after death are or are not in the world below. There comes into my mind an ancient doctrine which affirms that they go from hence into the other world, and returning hither, are born again from the dead. Now if it be true that the living come from the dead, then our souls must exist in the other world, for if not, how could they have been born again? And this would be conclusive, if there were any real evidence that the living are only born from the dead; but if this is not so, then other arguments will have to be adduced.
Very true, replied Cebes.
Then let us consider the whole question...
But you’re right that nothing in the argument defends the idea of a world below, just that souls must exist in some way between bodies.
just that souls must exist in some way between bodies.
Not even that, at least in the part of the argument I’ve seen (paraphrased?) above.
He just mentions an ancient doctrine, and then claims that souls must exist somewhere while they’re not embodied, because he can’t imagine where they would come from otherwise. I’m not even sure if the ancient doctrine is meant as an argument from authority or is just some sort of Chewbacca defense.
(He doesn’t seem to explicitly claim the “ancient doctrine” to be true or plausible, just that it came to his mind. It feels like I’ve lost something in the translation.)
(2) that the actual thing is obviously more dissimilar to all the cited previous elements of the so-called reference class than all those elements are similar to each other (if they even form a natural category at all rather than having been picked out retrospectively based on similarity of outcome to the preferred conclusion);
Ok, it seems like under this definition of “reference class tennis” (particularly parts (2) and (3)) the participants must be wrong and behaving irrationally about it in order to be playing reference class tennis. So when they are either right, or at least applying “outside view” considerations correctly given all the information available to them, they aren’t actually playing “reference class tennis” but instead doing whatever it is that reasoning (boundedly) correctly, with reference to actual relevant evidence about related occurrences, is called when it isn’t packaged with irrational wrongness.
With this definition in mind it is necessary to translate replies such as those here by Holden:
We seem to have differing views of how to best do what you call “reference class tennis” and how useful it can be. I’ll probably be writing about my views more in the future.
Holden’s meaning is, of course, not that the thing so defined is actually a good thing, but rather a declaration that the label doesn’t apply to what he is doing. He is instead doing that other thing that is actually sound thinking, and he thinks people are correct to do so.
Come to think of it, if most people in Holden’s shoes heard Eliezer accuse them of “reference class tennis” and actually knew that he intended it with the meaning he explicitly defines here, rather than the one they infer from context, they would probably just consider him arrogant, rude and mindkilled, then write him and his organisation off as not worth engaging with.
In the vast majority of cases where I have previously seen Eliezer argue against people using “outside view” I have agreed with Eliezer, and have grown rather fond of using the phrase “reference class tennis” as a reply myself where appropriate. But seeing how far Eliezer has taken the anti-outside-view position here, and the extent to which “reference class tennis” is defined as purely an anti-outside-view semantic stop sign, I’ll be far more hesitant to make use of it myself.
It is tempting to observe “Eliezer is almost always right when he argues against ‘outside view’ applications, and the other people are all confused. He is currently arguing against ‘outside view’ applications. Therefore, the other people are probably confused.” To that I reply either “Reference class tennis!” or “F*$% you, I’m right and you’re wrong!” (I’m honestly not sure which is the least offensive.)
Which of 1, 2 and 3 do you disagree with in this case?
Edit: I mean, I’m sorry to parody but I don’t really want to carefully rehash the entire thing, so, from my perspective, Holden just said, “But surely strong AI will fall into the reference class of technology used to give users advice, just like Google Maps doesn’t drive your car; this is where all technology tends to go, so I’m really skeptical about discussing any other possibility.” Only Holden has argued to SI that strong AI falls into this particular reference class so far as I can recall, with many other people having their own favored reference classes, e.g. Hanson et al. as cited above; a strong AI is far more internally dissimilar from Google Maps and Yelp than Google Maps and Yelp are internally similar to each other, plus there are many, many other software programs that don’t provide advice at all, so arguably the whole class may be chosen post facto; and I’d have to look up Holden’s exact words and replies to e.g. Jaan Tallinn to decide to what degree, if any, he used the analogy to foreclose other possibilities conversationally without further debate, but I do think it happened a little, though less so and less explicitly than in my Robin Hanson debate. If you don’t think I should at this point diverge into explaining the concept of “reference class tennis”, how should the conversation proceed further?
Also, further opinions desired on whether I was being rude, whether logically rude or otherwise.
Viewed charitably, you were not being rude, although you did veer away from your main point in ways likely to be unproductive. (For example, being unnecessarily dismissive towards Hanson, who you’d previously stated had given arguments roughly as good as Holden’s; or spending so much of your final paragraph emphasizing Holden’s lack of knowledge regarding AI.)
On the most likely viewing, it looks like you thought Holden was probably playing reference class tennis. This would have been rude, because it would imply that you thought the following inaccurate things about him:
He was “taking his reference class and going home”
That you can’t “have a back-and-forth conversation” with him
I don’t think that you intended those implications. All the same, your final comment came across as noticeably less well-written than your post.
I’m confused how you thought “reference class tennis” was anything but a slur on the other side’s argument. Likewise “mindkilled.” Sometimes, slurs about arguments are justified (agnostic in the instant case) - but that’s a separate issue.
Empirically, 1 is obviously true; I would argue strongly for 2, though it’s a legitimate point of dispute; and I would say that there were relatively small but still noticeable, and quite forgivable, traces of 3.
Then it does seem like your AI arguments are playing reference class tennis with a reference class of “conscious beings”. For me, the force of the Tool AI argument is that there’s no reason to assume that AGI is going to behave like a sci-fi character. For example, if something like On Intelligence turns out to be true, I think the algorithms it describes will be quite generally intelligent but hardly capable of rampaging through the countryside. It would be much more like Holden’s Tool AI: you’d feed it data, it’d make predictions, you could choose to use the predictions.
(This is, naturally, the view of that school of AI implementers. Scott Brown: “People often seem to conflate having intelligence with having volition. Intelligence without volition is just information.”)
The best story I’ve read about a not-so-failed utopia involves this kind of accountability over the FAI. While I hate to generalize from fictional evidence, it definitely seems like a necessary step toward not becoming a galaxy that tiles over the aliens with happy faces instead of just freezing them in place to prevent human harm.
For example: Google Maps works with a limited set of inputs; Google Maps does not “think” like I do and I would not be able to look at a dump of its calculations and have any real sense for what it is doing; yet Google Maps does make intelligent predictions about the external universe (e.g., “following direction set X will get you from point A to point B in reasonable time”), and it also provides an interface (the “route map”) that helps me understand its predictions and the implicit reasoning (e.g. “how, why, and with what other consequences direction set X will get me from point A to point B”).
Explaining routes is domain specific and quite simple. When you are using domain specific techniques to find solutions to domain specific problems, you can use domain specific interfaces where human programmers and designers do all the heavy lifting to figure out the general strategy of how to communicate to the user.
But if you want a tool AGI that finds solutions in arbitrary domains, you need a cross-domain solution for communicating the tool AGI’s plans to the user. This is as much harder a problem than showing a route on a map as cross-domain AGI is harder than computing the routes. Instead of the programmer figuring out how to plot road-tracing curves on a map, the programmer has to figure out how to get the computer to figure out that displaying a map with the route traced over it is a useful thing to do, in a way that generalizes to figuring out other useful things to do to communicate answers to other types of questions. And among the hard subproblems of programming computers to find useful things to do in general problems is specifying the meaning of “useful”. If that is done poorly, the tool AGI tries to trick the user into accepting plans that achieve some value-negating distortion of what we actually want, instead of giving information that helps provide a good evaluation. Doing this right requires solving the same problems required to do FAI right.
To note something on making an AIXI-based tool: instead of calculating the reward sum over the whole future (something that is simultaneously impractical, computationally expensive, and would only serve to impair performance on the task at hand), one could use the single-step reward, with 1 for the button being pressed at any time and 0 for the button not being pressed ever. It is still not entirely a tool, but it has a very bounded range of unintended behaviour (it is much harder to speculate about a terminator scenario). In Hutter’s paper he outlines several not-quite-intelligences before arriving at AIXI.
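(In symbols, and simplifying the “pressed at any time” condition to a per-step reward: where the standard expectimax scores an action by the bracketed sum r(x_k) + ... + r(x_m) over the whole horizon, the single-step variant suggested above stops at the very next percept,

    a_k = \arg\max_{a_k} \sum_{x_k} r(x_k) \, \xi(x_k \mid x_{<k}, a_{1:k})

with r(x_k) = 1 if the button is pressed at step k and 0 otherwise. This is my paraphrase of the suggestion, not a formula taken from Hutter’s paper.)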
[edit2: also I do not believe that even with the large sum a really powerful AIXI-tl would be intelligently dangerous, rather than simply clever at breaking the hardware that’s computing it. All the valid models in AIXI-tl that affect the choice of actions have to magically insert the actions being probed into some kind of internal world model. The hardware that actually makes those actions, complete with sensory apparatus, is incidental; a useless power drain; a needless fire hazard endangering the precious reward pathway]
With regards to utility functions, the utility functions in the AI sense are real-valued functions taken over the world model, not functions like the number of paperclips in the world. The latter kind of function, unsafe or safe, would be incredibly difficult or impossible to define using conventional methods. It would suffice for accelerating progress to have an algorithm that can take in an arbitrary function and find its maximum; while it would indeed seem to be “very difficult” to use that to cure cancer, it could be plugged into existing models and very quickly be used to e.g. design cellular machinery that would keep repairing DNA alterations.
Likewise, the speculative tool that can understand the phrase ‘how to cure cancer’ and the phrase ‘what is the curing time of epoxy’ would have to pick the most narrow, least objectionable interpretation of the ‘cure cancer’ phrase merely to answer something more useful than ‘cancer is not a type of epoxy or glue, it does not cure’; it seems that not seeing killing everyone as a valid interpretation comes as a necessary consequence of the ability to process language at all.
All the valid models in AIXI-tl that affect the choice of actions have to magically insert actions being probed into some kind of internal world model. The hardware that actually makes those actions, complete with sensory apparatus, is incidental; a useless power drain; a needless fire hazard endangering the precious reward pathway
If the past sensory data include information about the internal workings, then there will be a striking correlation between the outputs that the workings would produce on their own (for physical reasons) and the AI’s outputs. That rules out (or drives down expected utility of acting upon) all but very crazy hypotheses about how the Cartesian interaction works. Wrecking the hardware would break that correlation, and it’s not clear what the crazy hypotheses would say about that, e.g. hypotheses that some simply specified intelligence is stage-managing the inputs, or that sometimes the AIXI-tl’s outputs matter, and other times only the physical hardware matters.
Well, you can’t include the entire internal workings in the sensory data, and it can’t model a significant portion of itself, as it has to try a big number of hypotheses on the model at each step, so I would not expect the very crazy hypotheses to be very elaborate or to have high coverage of the internals.
If I closed my eyes and did not catch a ball, the explanation is that I did not see it coming and could not catch it, but this sentence is rife with self-references of the sort that is problematic for AIXI. The correlation between closed eyes and lack of reward can be coded into some sort of magical craziness, but if I close my eyes and not my ears, and hear where the ball lands after I missed catching it, there’s a vastly simpler explanation for why I did not catch it—my hand was not in the right spot (and that works with total absence of the sensorium as well). I don’t see how AIXI-tl (with very huge constants) can value its eyesight (it might have some value if there is some asymmetry in the long models, but it seems clear it would not assign the adequate, rational value to its eyesight). In my opinion there is no single unifying principle to intelligence (or none was ever found), and AIXI-tl (with very huge constants) falls way short of even a cat in many important ways.
edit: Some other thought: I am not sure that Solomonoff induction’s prior is compatible with expected utility maximization. If the expected utility imbalance between crazy models grows faster than 2^length (and I would expect it to grow faster than any computable function, if the utility is unbounded), then the actions will be determined by imbalances between crazy, ultra-long models. I would not privilege the belief that it just works without some sort of formal proof or some other very good reason to think it works.
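(A sketch of that worry in symbols, my formulation rather than the commenter’s: a Solomonoff-style prior gives a program p weight roughly 2^{-\ell(p)}, so an expected utility looks like

    \mathbb{E}[U] = \sum_{p} 2^{-\ell(p)} \, U(p)

and the total contribution of programs of length n is on the order of 2^{-n} \cdot \max_{\ell(p)=n} |U(p)|. If the utility is unbounded and that maximum grows faster than 2^n, the sum is dominated by ever-longer “crazy” models and need not converge at all.)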
take an equivalent of AIXI in a really simple universe, with a really simple goal, something along the lines of a Life universe and a goal of making gliders, and specify something given unlimited computing power which would behave like it had that goal, without pre-fixing the ontology of the causal representation to that of the real universe, i.e., you want something that can range freely over ontologies in its predictive algorithms, but which still behaves like it’s maximizing an outside thing like gliders instead of a sensory channel like the reward channel.
Your question seems to be about how sentient beings in a Game of Life universe are supposed to define “gliders” to the AI.
1) If they know the true laws of their cellular automaton, they can make a UDT-ish AI that examines statements like “if this logical algorithm has such-and-such output, then my prior over starting configurations of the universe logically implies such-and-such total number of gliders”.
2) If they only know that their universe is some cellular automaton and have a prior over all possible automata, they can similarly say “maximize the number of smallest possible spaceships under the automaton rules” and give the AI some sensory channel wide enough to pin down the specific automaton with high probability.
3) If they only know what sensory experiences correspond to the existence of gliders, but don’t know what gliders are… I guess we have a problem because sensory experiences can be influenced by the AI :-(
Regarding #3: what happens given a directive like “Over there are a bunch of people who report sensory experiences of the kind I’m interested in. Figure out what differentially caused those experiences, and maximize the incidence of that.”?
(I’m not concerned with the specifics of my wording, which undoubtedly contains infinite loopholes; I’m asking about the general strategy of, when all I know is sensory experiences, referring to the differential causes of those experiences, whatever they may be. Which, yes, I would expect to include, in the case where there actually are no gliders and the recurring perception of gliders is the result of a glitch in my perceptual system, modifying my perceptual system to make such glitches more likely… but which I would not expect to include, in the case where my perceptual system is operating essentially the same way when it perceives gliders as when it perceives everything else, modifying my perceptual system to include such glitches (since such a glitch is not the differential cause of experiences of gliders in the first place.))
Let’s say you want the AI to maximize the amount of hydrogen, and you formulate the goal as “maximize the amount of the substance most likely referred to by such-and-such state of mind”, where “referred to” is cashed out however you like. Now imagine that some other substance is 10x cheaper to make than hydrogen. Then the AI could create a bunch of minds in the same state, just enough to re-point the “most likely” pointer to the new substance instead of hydrogen, leading to huge savings overall. Or it could do something even more subversive, my imagination is weak.
That’s what I was getting at, when I said a general problem with using sensory experiences as pointers is that the AI can influence sensory experiences.
Well, right, but my point is that “the thing which differentially caused the sensory experiences to which I refer” does not refer to the same thing as “the thing which would differentially cause similar sensory experiences in the future, after you’ve made your changes,” and it’s possible to specify the former rather than the latter.
The AI can influence sensory experiences, but it can’t retroactively influence sensory experiences. (Or, well, perhaps it can, but that’s a whole new dimension of subversive. Similarly, I suppose a sufficiently powerful optimizer could rewrite the automaton rules in case #2, so perhaps we have a similar problem there as well.)
You need to describe the sensory experience as part of the AI’s utility computation somehow. I thought it would be something like a bitstring representing a brain scan, which can refer to future experiences just as easily as past ones. Do you propose to include a timestamp? But the universe doesn’t seem to have a global clock. Or do you propose to say something like “the values of such-and such terms in the utility computation must be unaffected by the AI’s actions”? But we don’t know how to define “unaffected” mathematically...
I was thinking in terms of referring to a brain. Or, rather, a set of them. But a sufficiently detailed brainscan would work just as well, I suppose.
And, sure, the universe doesn’t have a clock, but a clock isn’t needed, simply an ordering: the AI attends to evidence about sensory experiences that occurred before the AI received the instruction.
Of course, maybe it is incapable of figuring out whether a given sensory experience occurred before it received the instruction… it’s just not smart enough. Or maybe the universe is weirder than I imagine, such that the order in which two events occur is not something the AI and I can actually agree on… which is the same case as “perhaps it can in fact retroactively influence sensory experiences” above.
LearnFun watches a human play an arbitrary NES game. It is hardcoded to assume that as time progresses, the game is moving towards a “better and better” state (i.e. it assumes the player is trying to win and is at least somewhat effective at achieving their goals). The key point here is that LearnFun does not know ahead of time what the objective of the game is. It infers what the objective of the game is from watching humans play. (More technically, it observes the entire universe, where the entire universe is defined to be the entire RAM content of the NES.)
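A toy illustration of that inference step follows. This is not the actual learnfun/playfun algorithm, whose lexicographic-ordering machinery is considerably more involved; it is just a minimal sketch of the idea of reading an objective off recorded RAM, with invented addresses:

    import random

    RAM_SIZE = 2048    # size of NES work RAM, used here only to shape the toy data

    def record_human_play(num_frames=500):
        """Stand-in for a recorded playthrough: a list of RAM snapshots, one per frame."""
        ram = [0] * RAM_SIZE
        frames = []
        for _ in range(num_frames):
            ram = list(ram)
            ram[0x07E0] = min(255, ram[0x07E0] + random.randint(0, 2))  # pretend "score" byte
            ram[0x0040] = random.randint(0, 255)                        # unrelated noisy byte
            frames.append(ram)
        return frames

    def infer_objective(frames, tolerance=0.95):
        """Addresses whose value was non-decreasing in at least `tolerance` of the
        observed transitions are treated as the inferred 'objective' bytes."""
        objective = []
        for addr in range(RAM_SIZE):
            ok = sum(1 for a, b in zip(frames, frames[1:]) if b[addr] >= a[addr])
            if ok / (len(frames) - 1) >= tolerance:
                gain = frames[-1][addr] - frames[0][addr]
                if gain > 0:                 # constant bytes carry no information
                    objective.append((addr, gain))
        return objective

    def score_state(ram, objective):
        """The inferred utility: how 'good' a RAM state looks under the learned objective."""
        return sum(weight * ram[addr] for addr, weight in objective)

    frames = record_human_play()
    objective = infer_objective(frames)
    print(objective)                          # should single out the pretend "score" byte
    print(score_state(frames[-1], objective))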
I think there’s some parallels here with your scenario where we don’t want to explicitly tell the AI what our utility function is. Instead, we’re pointing to a state, and we’re saying “This is a good state” (and I guess either we’d explicitly tell the AI “and this other state, it’s a bad state” or we assume the AI can somehow infer bad states to contrast the good states from), and then we ask the AI to come up with a plan (and possibly execute the plan) that would lead to “more good” states.
So what happens? Bit of a spoiler, but sometimes the AI seems to make a pretty good inference about what utility function a human would probably have had for a given NES game, and sometimes it makes a terrible inference. It never seems to make a “perfect” inference: even in its best performance, it seems to be optimizing very strange things.
The other part of it is that even if it does have a decent inference for the utility function, it’s not always good at coming up with a plan that will optimize that utility function.
I believe AIXI is much more inspectable than you make it out to be. I think it is important to challenge your claim here, because Holden appears to have trusted your expertise and thereby conceded an important part of the argument.
AIXI’s utility judgements are based on a Solomonoff prior, which is based on the computer programs that return the input data. Computer programs are not black boxes. A system implementing AIXI can easily also return a sample of typical expected future histories and the programs compressing these histories. By examining these programs, we can figure out what implicit model the AIXI system has of its world. These programs are optimized for shortness, so they are likely to be very obfuscated, but I don’t expect them to be incomprehensible (after all, they’re not optimized for incomprehensibility). Even just sampling expected histories without their compressions is likely to be very informative. In the case of AIXItl the situation is better in the sense that its output at any given time is guaranteed to be generated by just one length-<l subprogram, and this subprogram comes with a proof justifying its utility judgement. It’s also worse in that there is no way to sample its expected future histories. However, I expect the proof provided would implicitly contain such information. If either the programs or the proofs cannot be understood by humans, the programmers can just reject them and look at the next best candidates.
As for “What will be its effect on _?”, this can be answered as well. I already stated that with AIXI you can sample future histories. This is because AIXI has a specific known prior it implements for its future histories, namely Solomonoff induction. This ability may seem limited because it only shows the future sensory data, but sensory data can be whatever you feed AIXI as input. If you want it to have a realistic model of the world, this includes a lot of relevant information. For example, if you feed it the entire database of Wikipedia, it can give likely future versions of Wikipedia, which already provides a lot of detail on the effects of its actions.
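A toy version of the sampling-and-inspection loop described above, with a hand-picked handful of “programs” standing in for the uncomputable space of all programs:

    import random

    # Each toy "environment program" predicts the next bit from the history so far.
    # Real Solomonoff induction ranges over all programs; here a hand-written list
    # stands in, each entry carrying a nominal description length.
    PROGRAMS = [
        ("always_one",  2, lambda hist: 1),
        ("always_zero", 2, lambda hist: 0),
        ("alternate",   3, lambda hist: 1 - hist[-1] if hist else 0),
        ("copy_last",   3, lambda hist: hist[-1] if hist else 0),
    ]

    def consistent(program, history):
        """Does this program reproduce every bit of the observed history?"""
        _, _, step = program
        return all(step(history[:i]) == bit for i, bit in enumerate(history))

    def posterior(history):
        """Unnormalized 2^-length weights over the programs that fit the history."""
        return [(p, 2.0 ** -p[1]) for p in PROGRAMS if consistent(p, history)]

    def sample_future(history, horizon=8):
        """Sample one program from the posterior, roll it forward, and return both
        the predicted continuation and the program that produced it, so the model
        behind the prediction can be inspected."""
        post = posterior(history)
        total = sum(w for _, w in post)
        r, acc = random.uniform(0, total), 0.0
        for program, w in post:
            acc += w
            if r <= acc:
                break
        name, _, step = program
        future = list(history)
        for _ in range(horizon):
            future.append(step(future))
        return name, future[len(history):]

    print(sample_future([0, 1, 0, 1]))   # e.g. ('alternate', [0, 1, 0, 1, 0, 1, 0, 1])

The point of handing back the program alongside the prediction is the parent comment’s point: the mixture is made of programs, and programs, however obfuscated, can be read.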
Can you be a bit more specific in your interpretation of AIXI here?
Here are my assumptions, let me know where you have different assumptions:
Traditional-AIXI is assumed to exists in the same universe as the human who wants to use AIXI to solve some problem.
Traditional-AIXI has a fixed input channel (e.g. it’s connected to a webcam, and/or it receives keyboard signals from the human, etc.)
Traditional-AIXI has a fixed output channel (e.g. it’s connected to a LCD monitor, or it can control a robot servo arm, or whatever).
The human has somehow pre-provided Traditional-AIXI with some utility function.
Traditional-AIXI operates in discrete time steps.
In the first timestep that elapses after Traditional-AIXI is activated, Traditional-AIXI examines the input it receives. It considers all possible programs that take a pair (S, A) and emit an output P, where S is the prior state, A is an action to take, and P is the predicted result of taking the action A in state S. Then it discards all programs that would not have produced the input it received, regardless of what S or A it was given. Then it weights the remaining programs according to their Kolmogorov complexity. This is basically the Solomonoff induction step.
Now Traditional-AIXI has to make a decision about an output to generate. It considers all possible outputs it could produce, and feeds them to the programs under consideration, to produce a predicted next time step. Traditional-AIXI then calculates the expected utility of each output (using its pre-programmed utility function), picks the one with the highest utility, and emits that output. Note that it has no idea how any of its outputs would affect the universe, so this is essentially a uniformly random choice.
In the next timestep, Traditional-AIXI reads its inputs again, but this time taking into account what output it has generated in the previous step. It can now start to model correlation, and eventually causation, between its input and outputs. It has a previous state S and it knows what action A it took in its last step. It can further discard more programs, and narrow the possible models that describes the universe it finds itself in.
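A toy rendering of that loop, again with an explicitly enumerated, finite stand-in for “all possible programs” (the real thing is uncomputable) and, for brevity, an expectimax that looks only one step ahead:

    # Toy world-programs: each maps (percept history, action) -> predicted percept.
    WORLD_PROGRAMS = [
        ("echo_action",   3, lambda hist, a: a),   # the world echoes the action back
        ("always_zero",   2, lambda hist, a: 0),
        ("flip_previous", 4, lambda hist, a: 1 - hist[-1] if hist else 0),
    ]

    ACTIONS = [0, 1]

    def utility(percept):
        # The pre-programmed utility: defined over percepts, as in the AIXI setup.
        return float(percept)

    def surviving_programs(percepts, actions):
        """Keep the programs whose predictions match every observed step so far."""
        keep = []
        for name, length, step in WORLD_PROGRAMS:
            if all(step(percepts[:i], actions[i]) == percepts[i] for i in range(len(percepts))):
                keep.append((name, length, step))
        return keep

    def choose_action(percepts, actions):
        """One-step expectimax: weight surviving programs by 2^-length and pick the
        action with the highest expected utility of the predicted percept."""
        programs = surviving_programs(percepts, actions)
        total = sum(2.0 ** -l for _, l, _ in programs)
        return max(
            ACTIONS,
            key=lambda a: sum((2.0 ** -l) * utility(step(percepts, a))
                              for _, l, step in programs) / total,
        )

    # The interaction loop: act, observe, and let each observation prune the program set.
    percepts, actions = [], []
    for t in range(5):
        a = choose_action(percepts, actions)
        actions.append(a)
        percepts.append(a)   # stand-in environment: it really does echo the action
        print(t, a, [name for name, _, _ in surviving_programs(percepts, actions)])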
How does Tool-AIXI work in contrast to this? Holden seems to want to avoid having any utility function pre-defined at all. However, presumably Tool-AIXI still receives inputs and still produces outputs (probably Holden intends not to allow Tool-AIXI to control a robot servo arm, but he might intend for Tool-AIXI to be able to control an LCD monitor, or at the very least, produce some sort of text file as output).
Does Tool-AIXI proceed in discrete time steps gathering input? Or do we prevent Tool-AIXI from running until a user is ready to submit a curated input to Tool-AIXI? If the latter, how quickly do we expect Tool-AIXI to be able to formulate a reasonable model of our universe?
How does Tool-AIXI choose what output to produce, if there’s no utility function?
If we type in “Tool-AIXI, please give me a cure for cancer” onto a keyboard attached to Tool-AIXI and submit that as an input, do we think that a model that encodes ASCII, the English language, bio-organisms, etc. has a lower kolmogorov complexity than a model that says “we live in a universe where we receive exactly this hardcoded stream of bytes”?
Does Tool-AIXI model the output it produces (whether that be pixels on a screen, or bytes to a file) as an action, or does it somehow prevent itself from modelling its output as if it were an action that had some effect on the universe that it exists in? If the former, then isn’t this just an agenty Oracle AI? If the latter, then what kind of programs does it generate for its model (surely not programs that take (S, A) pairs as inputs, or else what would it use for A when evaluating its plans and predicting the future)?
Regarding the question of formalizing an optimization agent with goals defined in terms of the external universe rather than sensory input: it is possible to attack the problem by generalizing the framework I described in http://lesswrong.com/lw/gex/save_the_princess_a_tale_of_aixi_and_utility/8ekk for solving the duality problem. Specifically, consider an “initial guess” stochastic model of the universe, including the machine on which our agent is running. I call it the “innate model” M. Now consider a stochastic process with the same degrees of freedom as M but governed by the Solomonoff semi-measure. This is the “unbiased model” S. The two can be combined by assigning transition probabilities proportional to the product of the probabilities assigned by M and S. If M is sufficiently “insecure” (in particular, it doesn’t assign 0 to any transition probability) then the resulting model S’, considered as a prior, allows arriving at any computable model after sufficient learning.
Fix a utility function on the space of histories of our model (note that the histories include both intrinsic and extrinsic degrees of freedom). The intelligence I(A) of any given agent A (= program written in M in the initial state) can now be defined to be the expected utility of A in S’. We can now consider optimal or near-optimal agents in this sense (as opposed to the Legg-Hutter formalism for measuring intelligence, there is no guarantee there is a maximum rather than a supremum, unless of course we limit the length of the programs we consider).
This is a generalization of the Legg-Hutter formalism which accounts for limited computational resources, solves the duality problem (such agents take into account possible wireheading) and also provides a solution to the ontology problem. It is essentially a special case of the Orseau-Ring framework, though much more specific than Orseau-Ring, where the prior is left completely unspecified. You can think of it as a recipe for constructing Orseau-Ring priors from realistic problems.
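(In symbols, using my notation rather than the parent comment’s: the product prescription amounts to defining the deformed transition probabilities as

    S'(x_{t+1} \mid x_{1:t}) \propto M(x_{t+1} \mid x_{1:t}) \cdot S(x_{t+1} \mid x_{1:t})

normalized over the possible next states x_{t+1}. Any transition to which M assigns probability 0 is then also impossible under S’, which is why M is required not to assign 0 anywhere; wherever M is merely uncertain, the Solomonoff factor S still allows sufficient learning to drive the combined model toward any computable hypothesis, as claimed above.)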
I realized that although the idea of a deformed Solomonoff semi-measure is correct, the multiplication prescription I suggested is rather ad hoc. The following construction is a much more natural and justifiable way of combining M and S.
Fix t0 a time parameter. Consider a stochastic process S(-t0) that begins at time t = -t0, where t = 0 is the time our agent A “forms”, governed by the Solomonoff semi-measure. Consider another stochastic process M(-t0) that begins from the initial conditions generated by S(-t0) (I’m assuming M only carries information about dynamics and not about initial conditions). Define S’ to be the conditional probability distribution obtained from S by two conditions:
a. S and M coincide on the time interval [-t0, 0]
b. The universe contains A at time t=0
Thus t0 reflects the extent to which we are certain about M: it’s like telling the agent we have been observing behavior M for time period t0.
There is an interesting side effect to this framework, namely that A can exert “acausal” influence on the utility by affecting the initial conditions of the universe (i.e. it selects universes in which A is likely to exist). This might seem like an artifact of the model but I think it might be a legitimate effect: if we believe in one-boxing in Newcomb’s paradox, why shouldn’t we accept such acausal effects?
For models with a concept of space and finite information velocity, like cellular automata, it might make sense to limit the domain of “observed M” in space as well as time, to A’s past “light-cone”
I cannot even slightly visualize what you mean by this. Please explain how it would be used to construct an AI that made glider-oids in a Life-like cellular automaton universe.
Is the AI hardware separate from the cellular automaton or is it a part of it? Assuming the latter, we need to decide which degrees of freedom of the cellular automaton form the program of our AI. For example we can select a finite set of cells and allow setting their values arbitrarily. Then we need to specify our utility function. For example it can be a weighted sum of the number of gliders at different moments of time, or a maximum or whatever. However we need to make sure the expectation values converge. Then the “AI” is simply the assignment of values to the selected cells in the initial state which yields the maximal expect utility. Note though that if we’re sure about the law governing the cellular automaton then there’s no reason to use the Solomonoff semi-measure at all (except maybe as a prior for the initial state outside the selected cells). However if our idea of the way the cellular automaton works is only an “initial guess” then the expectation value is evaluated w.r.t. a stochastic process governed by a “deformed Solomonoff” semi-measure in which transitions illegal w.r.t. assumed cellular automaton law are suppressed by some factor 0 < p < 1 w.r.t. “pure” Solomonoff inference. Note that, contrary to the case of AIXI, I can only describe the measure of intelligence, I cannot constructively describe the agent maximizing this measure. This is unsurprising since building a real (bounded computing resources) AI is a very difficult problem
This means that equation (20) in Hutter is written as a utility function over sense data, where the reward channel is just a special case of sense data. We can easily adapt this equation to talk about any function computed directly over sense data—we can get AIXI to optimize any aspect of its sense data that we please. We can’t get it to optimize a quality of the external universe. One of the challenges I listed in my FAI Open Problems talk, and one of the problems I intend to talk about in my FAI Open Problems sequence, is to take the first nontrivial steps toward adapting this formalism—to e.g. take an equivalent of AIXI in a really simple universe, with a really simple goal, something along the lines of a Life universe and a goal of making gliders, and specify something given unlimited computing power which would behave like it had that goal, without pre-fixing the ontology of the causal representation to that of the real universe, i.e., you want something that can range freely over ontologies in its predictive algorithms, but which still behaves like it’s maximizing an outside thing like gliders instead of a sensory channel like the reward channel. This is an unsolved problem!
It gets more interesting if the computing power is not unlimited but strictly smaller than that of the universe in which the agent is living (excluding the ridiculous ‘run a sim since the big bang and find yourself in it’ non-solution). Also, it is not only an open problem for FAI, but also an open problem for the dangerous uFAI.
edit: actually I would search for general impossibility proofs at that point. Also, keep in mind that having ‘all possible models, weighted’ is the ideal Bayesian approach, so it may be the case that simply striving for the most correct way of acting upon uncertainty makes it impossible to care about any real world goals.
Also it is rather interesting how one sample from the ‘unethical AI design space’ (AIXI) yielded something which, most likely, is fundamentally incapable of caring about a real-world goal (but is still an incredibly powerful optimization process if given enough computing power; edit: i.e. AIXI doesn’t care if you live or die, but in a way quite different from a paperclip maximizer). Insofar as one previously had an argument that such a thing is incredibly unlikely, one ought to update and severely lower the probability of correctness of the methods employed for generating that argument.
The ontology problem has nothing to do with computing power, except that limited computing power means you use fewer ontologies. The number might still be large, and for a smart AI not fixable in advance; we didn’t know about quantum fields until recently, and new approximations and models are being invented all the time. If your last paragraph isn’t talking about evolution, I don’t know what it’s talking about.
Downvoting the whole thing as probable nonsense, though my judgment here is influenced by numerous downvoted troll comments that poster has made previously.
The ontology problem has nothing to do with computing power, except that limited computing power means you use fewer ontologies. The number might still be large, and for a smart AI not fixable in advance; we didn’t know about quantum fields until recently, and new approximations and models are being invented all the time. If your last paragraph isn’t talking about evolution, I don’t know what it’s talking about.
Limited computing power means that the ontologies have to be processed approximately (you can’t simulate everything at the level of quarks all the way from the big bang), likely in some sort of multi-level model which can go down to the level of quarks but also has to be able to go up to the level of paperclips, i.e. it would have to be able to establish relations between ontologies at different levels of detail. It is not inconceivable that e.g. Newtonian mechanics would be part of any multi-level ontology, no matter what it has at the microscopic level. Note that while I am very skeptical about the AI risk, this is an argument slightly in favour of the risk.
Didn’t see this at the time, sorry.
So… I’m sorry if this reply seems a little unhelpful, and I wish there was some way to engage more strongly, but...
Point (1) is the main problem. AIXI updates freely over a gigantic range of sensory predictors with no specified ontology—it’s a sum over a huge set of programs, and we, the users, have no idea what the representations are talking about, except that at the end of their computations they predict, “You will see a sensory 1 (or a sensory 0).” (In my preferred formalism, the program puts a probability on a 0 instead.) Inside, the program could’ve been modeling the universe in terms of atoms, quarks, quantum fields, cellular automata, giant moving paperclips, slave agents scurrying around… we, the programmers, have no idea how AIXI is modeling the world and producing its predictions, and indeed, the final prediction could be a sum over many different representations.
This means that equation (20) in Hutter is written as a utility function over sense data, where the reward channel is just a special case of sense data. We can easily adapt this equation to talk about any function computed directly over sense data—we can get AIXI to optimize any aspect of its sense data that we please. We can’t get it to optimize a quality of the external universe. One of the challenges I listed in my FAI Open Problems talk, and one of the problems I intend to talk about in my FAI Open Problems sequence, is to take the first nontrivial steps toward adapting this formalism—to e.g. take an equivalent of AIXI in a really simple universe, with a really simple goal, something along the lines of a Life universe and a goal of making gliders, and specify something given unlimited computing power which would behave like it had that goal, without pre-fixing the ontology of the causal representation to that of the real universe, i.e., you want something that can range freely over ontologies in its predictive algorithms, but which still behaves like it’s maximizing an outside thing like gliders instead of a sensory channel like the reward channel. This is an unsolved problem!
We haven’t even got to the part where it’s difficult to say in formal terms how to interpret what a human says s/he wants the AI to plan, and where failures of phrasing of that utility function can also cause a superhuman intelligence to kill you. We haven’t even got to the huge buried FAI problem inside the word “optimal” in point (1), which is the really difficult part in the whole thing. Because so far we’re dealing with a formalism that can’t even represent a purpose of the type you’re looking for—it can only optimize over sense data, and this is not a coincidental fact, but rather a deep problem which the AIXI formalism deliberately avoided.
(2) sounds like you think an AI with an alien, superhuman planning algorithm can tell humans what to do without ever thinking consequentialistically about which different statements will result in human understanding or misunderstanding. Anna says that I need to work harder on not assuming other people are thinking silly things, but even so, when I look at this, it’s hard not to imagine that you’re modeling AIXI as a sort of spirit containing thoughts, whose thoughts could be exposed to the outside with a simple exposure-function. It’s not unthinkable that a non-self-modifying superhuman planning Oracle could be developed with the further constraint that its thoughts are human-interpretable, or can be translated for human use without any algorithms that reason internally about what humans understand, but this would at the least be hard. And with AIXI it would be impossible, because AIXI’s model of the world ranges over literally all possible ontologies and representations, and its plans are naked motor outputs.
Similar remarks apply to interpreting and answering “What will be its effect on _?” It turns out that getting an AI to understand human language is a very hard problem, and it may very well be that even though talking doesn’t feel like having a utility function, our brains are using consequential reasoning to do it. Certainly, when I write language, that feels like I’m being deliberate. It’s also worth noting that “What is the effect on X?” really means “What are the effects I care about on X?” and that there’s a large understanding-the-human’s-utility-function problem here. In particular, you don’t want your language for describing “effects” to lump together, as the same described state of affairs, any two states to which humans assign widely different utilities. Let’s say there are two plans for getting my grandmother out of a burning house, one of which destroys her music collection, one of which leaves it intact. Does the AI know that music is valuable? If not, will it not describe music-destruction as an “effect” of a plan which offers to free up large amounts of computer storage by, as it turns out, overwriting everyone’s music collection? If you then say that the AI should describe changes to files in general, well, should it also talk about changes to its own internal files? Every action comes with a huge number of consequences—if we hear about all of them (reality described on a level so granular that it automatically captures all utility shifts, as well as a huge number of other unimportant things) then we’ll be there forever.
I wish I had something more cooperative to say in reply—it feels like I’m committing some variant of logical rudeness by this reply—but the truth is, it seems to me that AIXI isn’t a good basis for the agent you want to describe; and I don’t know how to describe it formally myself, either.
Thanks for the response. To clarify, I’m not trying to point to the AIXI framework as a promising path; I’m trying to take advantage of the unusually high degree of formalization here in order to gain clarity on the feasibility and potential danger points of the “tool AI” approach.
It sounds to me like your two major issues with the framework I presented are (to summarize):
(1) There is a sense in which AIXI predictions must be reducible to predictions about the limited set of inputs it can “observe directly” (what you call its “sense data”).
(2) Computers model the world in ways that can be unrecognizable to humans; it may be difficult to create interfaces that allow humans to understand the implicit assumptions and predictions in their models.
I don’t claim that these problems are trivial to deal with. And stated as you state them, they sound abstractly very difficult to deal with. However, it seems true—and worth noting—that “normal” software development has repeatedly dealt with them successfully. For example: Google Maps works with a limited set of inputs; Google Maps does not “think” like I do and I would not be able to look at a dump of its calculations and have any real sense for what it is doing; yet Google Maps does make intelligent predictions about the external universe (e.g., “following direction set X will get you from point A to point B in reasonable time”), and it also provides an interface (the “route map”) that helps me understand its predictions and the implicit reasoning (e.g. “how, why, and with what other consequences direction set X will get me from point A to point B”).
Difficult though it may be to overcome these challenges, my impression is that software developers have consistently—and successfully—chosen to take them on, building algorithms that can be “understood” via interfaces and iterated over—rather than trying to prove the safety and usefulness of their algorithms with pure theory before ever running them. Not only does the former method seem “safer” (in the sense that it is less likely to lead to putting software in production before its safety and usefulness has been established) but it seems a faster path to development as well.
It seems that you see a fundamental disconnect between how software development has traditionally worked and how it will have to work in order to result in AGI. But I don’t understand your view of this disconnect well enough to see why it would lead to a discontinuation of the phenomenon I describe above. In short, traditional software development seems to have an easier (and faster and safer) time overcoming the challenges of the “tool” framework than overcoming the challenges of up-front theoretical proofs of safety/usefulness; why should we expect this to reverse in the case of AGI?
So first a quick note: I wasn’t trying to say that the difficulties of AIXI are universal and everything goes analogously to AIXI, I was just stating why AIXI couldn’t represent the suggestion you were trying to make. The general lesson to be learned is not that everything else works like AIXI, but that you need to look a lot harder at an equation before thinking that it does what you want.
On a procedural level, I worry a bit that the discussion is trying to proceed by analogy to Google Maps. Let it first be noted that Google Maps simply is not playing in the same league as, say, the human brain, in terms of complexity; and that if we were to look at the winning “algorithm” of the million-dollar Netflix Prize competition, which was in fact a blend of 107 different algorithms, you would have a considerably harder time figuring out why it claimed anything it claimed.
But to return to the meta-point, I worry about conversations that go into “But X is like Y, which does Z, so X should do reinterpreted-Z”. Usually, in my experience, that goes into what I call “reference class tennis” or “I’m taking my reference class and going home”. The trouble is that there’s an unlimited number of possible analogies and reference classes, and everyone has a different one. I was just browsing old LW posts today (to find a URL of a quick summary of why group-selection arguments don’t work in mammals) and ran across a quotation from Perry Metzger to the effect that so long as the laws of physics apply, there will always be evolution, hence nature red in tooth and claw will continue into the future—to him, the obvious analogy for the advent of AI was “nature red in tooth and claw”, and people who see things this way tend to want to cling to that analogy even if you delve into some basic evolutionary biology with math to show how much it isn’t like intelligent design. For Robin Hanson, the one true analogy is to the industrial revolution and farming revolutions, meaning that there will be lots of AIs in a highly competitive economic situation with standards of living tending toward the bare minimum, and this is so absolutely inevitable and consonant with The Way Things Should Be as to not be worth fighting at all. That’s his one true analogy and I’ve never been able to persuade him otherwise. For Kurzweil, the fact that many different things proceed at a Moore’s Law rate to the benefit of humanity means that all these things are destined to continue and converge into the future, also to the benefit of humanity. For him, “things that go by Moore’s Law” is his favorite reference class.
I can have a back-and-forth conversation with Nick Bostrom, who looks much more favorably on Oracle AI in general than I do, because we’re not playing reference class tennis with “But surely that will be just like all the previous X-in-my-favorite-reference-class”, nor saying, “But surely this is the inevitable trend of technology”; instead we lay out particular, “Suppose we do this?” and try to discuss how it will work, not with any added language about how surely anyone will do it that way, or how it’s got to be like Z because all previous Y were like Z, etcetera.
My own FAI development plans call for trying to maintain programmer-understandability of some parts of the AI during development. I expect this to be a huge headache, possibly 30% of total headache, possibly the critical point on which my plans fail, because it doesn’t happen naturally. Go look at the source code of the human brain and try to figure out what a gene does. Go ask the Netflix Prize winner for a movie recommendation and try to figure out “why” it thinks you’ll like watching it. Go train a neural network and then ask why it classified something as positive or negative. Try to keep track of all the memory allocations inside your operating system—that part is humanly understandable, but it flies past so fast you can only monitor a tiny fraction of what goes on, and if you want to look at just the most “significant” parts, you would need an automated algorithm to tell you what’s significant. Most AI algorithms are not humanly understandable. Part of Bayesianism’s appeal in AI is that Bayesian programs tend to be more understandable than non-Bayesian AI algorithms. I have hopeful plans to try and constrain early FAI content to humanly comprehensible ontologies, prefer algorithms with humanly comprehensible reasons-for-outputs, carefully weigh up which parts of the AI can safely be less comprehensible, monitor significant events, slow down the AI so that this monitoring can occur, and so on. That’s all Friendly AI stuff, and I’m talking about it because I’m an FAI guy. I don’t think I’ve ever heard any other AGI project express such plans; and in mainstream AI, human-comprehensibility is considered a nice feature, but rarely a necessary one.
It should finally be noted that AI famously does not result from generalizing normal software development. If you start with a map-route program and then try to program it to plan more and more things until it becomes an AI… you’re doomed, and all the experienced people know you’re doomed. I think there’s an entry or two in the old Jargon File aka Hacker’s Dictionary to this effect. There’s a qualitative jump to writing a different sort of software—from normal programming where you create a program conjugate to the problem you’re trying to solve, to AI where you try to solve cognitive-science problems so the AI can solve the object-level problem. I’ve personally met a programmer or two who’ve generalized their code in interesting ways, and who feel like they ought to be able to generalize it even further until it becomes intelligent. This is a famous illusion among aspiring young brilliant hackers who haven’t studied AI. Machine learning is a separate discipline and involves algorithms and problems that look quite different from “normal” programming.
Thanks for the response. My thoughts at this point are that
We seem to have differing views of how to best do what you call “reference class tennis” and how useful it can be. I’ll probably be writing about my views more in the future.
I find it plausible that AGI will have to follow a substantially different approach from “normal” software. But I’m not clear on the specifics of what SI believes those differences will be and why they point to the “proving safety/usefulness before running” approach over the “tool” approach.
We seem to have differing views of how frequently today’s software can be made comprehensible via interfaces. For example, my intuition is that the people who worked on the Netflix Prize algorithm had good interfaces for understanding “why” it recommends what it does, and used these to refine it. I may further investigate this matter (casually, not as a high priority); on SI’s end, it might be helpful (from my perspective) to provide detailed examples of existing algorithms for which the “tool” approach to development didn’t work and something closer to “proving safety/usefulness up front” was necessary.
Canonical software development examples emphasizing “proving safety/usefulness before running” over the “tool” software development approach are cryptographic libraries and NASA space shuttle navigation.
At the time of writing this comment, there was a recent furor over software called CryptoCat that didn’t provide enough warnings that it had not been properly vetted by cryptographers and thus should have been assumed to be inherently insecure. Conventional wisdom and repeated warnings from the security community state that cryptography is extremely difficult to do properly, and attempting to create your own may lead to catastrophic results. A similar thought and development process goes into space shuttle code.
It seems that the FAI approach to “proving safety/usefulness” is more similar to the way cryptographic algorithms are developed than the (seemingly) much faster “tool” approach, which is more akin to web development where the stakes aren’t quite as high.
EDIT: I believe the “prove” approach still allows one to run snippets of code in isolation, but tends to shy away from running everything end-to-end until significant effort has gone into individual component testing.
The analogy with cryptography is an interesting one, because...
In cryptography, even after you’ve proven that a given encryption scheme is secure, and that proof has been centuply (100 times) checked by different researchers at different institutions, it might still end up being insecure, for many reasons.
Examples of reasons include:
The proof assumed mathematical integers/reals, of which computer integers/floating point numbers are just an approximation.
The proof assumed that the hardware the algorithm would be running on was reliable (e.g. a reliable source of randomness).
The proof assumed operations were mathematical abstractions and thus exist out of time, and so neglected side-channel attacks, which measure how long a physical real-world CPU took to execute the algorithm in order to make inferences as to what the algorithm did (and thus recover the private keys).
The proof assumed the machine executing the algorithm was idealized in various ways, when in fact a CPU emits heat and other electromagnetic waves, which can be detected and from which inferences can be drawn, etc.
That’s one way to “win” a game of reference class tennis. Declare unilaterally that what you are discussing falls into the reference class “things that are most effectively reasoned about by discussing low level details and abandoning or ignoring all observed evidence about how things with various kinds of similarity have worked in the past”. Sure, it may lead to terrible predictions sometimes but by golly, it means you can score an ‘ace’ in the reference class tennis while pretending you are not even playing!
And atheism is a religion, and bald is a hair color.
The three distinguishing characteristics of “reference class tennis” are (1) that there are many possible reference classes you could pick and everyone engaging in the tennis game has their own favorite which is different from everyone else’s; (2) that the actual thing is obviously more dissimilar to all the cited previous elements of the so-called reference class than all those elements are similar to each other (if they even form a natural category at all rather than having been picked out retrospectively based on similarity of outcome to the preferred conclusion); and (3) that the citer of the reference class says it with a cognitive-traffic-signal quality which attempts to shut down any attempt to counterargue the analogy because “it always happens like that” or because we have so many alleged “examples” of the “same outcome” occurring (for Hansonian rationalists this is accompanied by a claim that what you are doing is the “outside view” (see points 2 and 1 for why it’s not) and that it would be bad rationality to think about the “individual details”).
I have also termed this Argument by Greek Analogy after Socrates’s attempt to argue that, since the Sun appears the next day after setting, souls must be immortal.
For the curious, this is from the Phaedo, pages 70-72. The run of the argument is basically thus:
P1 Natural changes are changes from and to opposites, like hot from relatively cold, etc.
P2 Since every change is between opposites A and B, there are two logically possible processes of change, namely A to B and B to A.
P3 If only one of the two processes were physically possible, then we should expect to see only one of the two opposites in nature, since the other will have passed away irretrievably.
P4 Life and death are opposites.
P5 We have experience of the process of death.
P6 We have experience of things which are alive.
C From P3, 4, 5, and 6 there is a physically possible, and actual, process of going from death to life.
The argument doesn’t itself prove (haha) the immortality of the soul, only that living things come from dead things. The argument is made in support of the claim, made prior to this argument, that if living people come from dead people, then dead people must exist somewhere. The argument is particularly interesting for premises 1 and 2, which are hard to deny, and 3, which seems fallacious but for non-obvious reasons.
This sounds like it might be a bit of a reverent-Western-scholar steelman such as might be taught in modern philosophy classes; Plato’s original argument for the immortality of the soul sounded more like this, which is why I use it as an early exemplar of reference class tennis:
-
Then let us consider the whole question, not in relation to man only, but in relation to animals generally, and to plants, and to everything of which there is generation, and the proof will be easier. Are not all things which have opposites generated out of their opposites? I mean such things as good and evil, just and unjust—and there are innumerable other opposites which are generated out of opposites. And I want to show that in all opposites there is of necessity a similar alternation; I mean to say, for example, that anything which becomes greater must become greater after being less.
True.
And that which becomes less must have been once greater and then have become less.
Yes.
And the weaker is generated from the stronger, and the swifter from the slower.
Very true.
And the worse is from the better, and the more just is from the more unjust.
Of course.
And is this true of all opposites? and are we convinced that all of them are generated out of opposites?
Yes.
And in this universal opposition of all things, are there not also two intermediate processes which are ever going on, from one to the other opposite, and back again; where there is a greater and a less there is also an intermediate process of increase and diminution, and that which grows is said to wax, and that which decays to wane?
Yes, he said.
And there are many other processes, such as division and composition, cooling and heating, which equally involve a passage into and out of one another. And this necessarily holds of all opposites, even though not always expressed in words—they are really generated out of one another, and there is a passing or process from one to the other of them?
Very true, he replied.
Well, and is there not an opposite of life, as sleep is the opposite of waking?
True, he said.
And what is it?
Death, he answered.
And these, if they are opposites, are generated the one from the other, and have there their two intermediate processes also?
Of course.
Now, said Socrates, I will analyze one of the two pairs of opposites which I have mentioned to you, and also its intermediate processes, and you shall analyze the other to me. One of them I term sleep, the other waking. The state of sleep is opposed to the state of waking, and out of sleeping waking is generated, and out of waking, sleeping; and the process of generation is in the one case falling asleep, and in the other waking up. Do you agree?
I entirely agree.
Then, suppose that you analyze life and death to me in the same manner. Is not death opposed to life?
Yes.
And they are generated one from the other?
Yes.
What is generated from the living?
The dead.
And what from the dead?
I can only say in answer—the living.
Then the living, whether things or persons, Cebes, are generated from the dead?
That is clear, he replied.
Then the inference is that our souls exist in the world below?
That is true.
(etc.)
That was roughly my aim, but I don’t think I inserted any premises that weren’t there. Did you have a complaint about the accuracy of my paraphrase? The really implausible premise there, namely that death is the opposite of life, is preserved I think.
As for reverence, why not? He was, after all, the very first person in our historical record to suggest that thinking better might make you happier. He was also an intellectualist about morality, at least sometimes a hedonic utilitarian, and held no great respect for logic. And he was a skilled myth-maker. He sounds like a man after your own heart, actually.
I think your summary didn’t leave anything out, or even apply anything particularly charitable.
Esar’s summary doesn’t seem to be different from this, other than 1) adding the useful bit about “passed away irretrievably” and 2) yours makes it clear that the logical jump happens right at the end.
I’m actually not sure now why you consider this like “reference class tennis”. The argument looks fine, except for the part where “souls exist in the world below” jumps in as a conclusion, not having been mentioned earlier in the argument.
The ‘souls exist in the world below’ bit is directly before what Eliezer quoted:
But you’re right that nothing in the argument defends the idea of a world below, just that souls must exist in some way between bodies.
The argument omits that living things can come from living things, and dead things from dead things.
Therefore, the fact that living things can come from dead things does not mean that they have to in every case.
Although, if everything started off dead, they must have at some point.
So it’s an argument for abiogenesis,
Not even that, at least in the part of the argument I’ve seen (paraphrased?) above.
He just mentions an ancient doctrine, and then claims that souls must exist somewhere while they’re not embodied, because he can’t imagine where they would come from otherwise. I’m not even sure if the ancient doctrine is meant as argument from authority or is just some sort of Chewbacca defense.
(He doesn’t seem to explicitly claim the “ancient doctrine” to be true or plausible, just that it came to his mind. It feels like I’ve lost something in the translation.)
Ok, it seems like under this definition of “reference class tennis” (particularly parts (2) and (3)) the participants must be wrong and behaving irrationally about it in order to be playing reference class tennis. So when they are either right, or at least applying “outside view” considerations correctly given all the information available to them, they aren’t actually playing “reference class tennis” but instead doing whatever it is that reasoning (boundedly) correctly, with reference to actual relevant evidence about related occurrences, is called when it isn’t packaged with irrational wrongness.
With this definition in mind it is necessary to translate replies such as those here by Holden:
Holden’s meaning is, of course, not that what he argues for is actually a good thing, but rather a declaration that the label doesn’t apply to what he is doing. He is instead doing that other thing that is actually sound thinking, and thinks people are correct to do so.
Come to think of it, if most people in Holden’s shoes heard Eliezer accuse them of “reference class tennis” and actually knew that he intended it with the meaning he explicitly defines here, rather than the one they infer from context, they would probably just consider him arrogant, rude and mind-killed, then write him and his organisation off as not worth engaging with.
In the vast majority of cases where I have previously seen Eliezer argue against people using “outside view” I have agreed with Eliezer, and have grown rather fond of using the phrase “reference class tennis” as a reply myself where appropriate. But seeing how far Eliezer has taken the anti-outside-view position here, and the extent to which “reference class tennis” is defined as purely an anti-outside-view semantic stop sign, I’ll be far more hesitant to make use of it myself.
It is tempting to observe “Eliezer is almost always right when he argues against ‘outside view’ applications, and the other people are all confused. He is currently arguing against ‘outside view’ applications. Therefore, the other people are probably confused.” To that I reply either “Reference class tennis!” or “F*$% you, I’m right and you’re wrong!” (I’m honestly not sure which is the least offensive.)
Which of 1, 2 and 3 do you disagree with in this case?
Edit: I mean, I’m sorry to parody but I don’t really want to carefully rehash the entire thing, so, from my perspective, Holden just said, “But surely strong AI will fall into the reference class of technology used to give users advice, just like Google Maps doesn’t drive your car; this is where all technology tends to go, so I’m really skeptical about discussing any other possibility.” Only Holden has argued to SI that strong AI falls into this particular reference class so far as I can recall, with many other people having their own favored reference classes e.g. Hanson et. al as cited above; a strong AI is far more internally dissimilar from Google Maps and Yelp than Google Maps and Yelp are internally similar to each other, plus there are many many other software programs that don’t provide advice at all so arguably the whole class may be chosen-post-facto; and I’d have to look up Holden’s exact words and replies to e.g. Jaan Tallinn to decide to what degree, if any, he used the analogy to foreclose other possibilities conversationally without further debate, but I do think it happened a little, but less so and less explicitly than in my Robin Hanson debate. If you don’t think I should at this point diverge into explaining the concept of “reference class tennis”, how should the conversation proceed further?
Also, further opinions desired on whether I was being rude, whether logically rude or otherwise.
Viewed charitably, you were not being rude, although you did veer away from your main point in ways likely to be unproductive. (For example, being unnecessarily dismissive towards Hanson, who you’d previously stated had given arguments roughly as good as Holden’s; or spending so much of your final paragraph emphasizing Holden’s lack of knowledge regarding AI.)
On the most likely viewing, it looks like you thought Holden was probably playing reference class tennis. This would have been rude, because it would imply that you thought the following inaccurate things about him:
He was “taking his reference class and going home”
That you can’t “have a back-and-forth conversation” with him
I don’t think that you intended those implications. All the same, your final comment came across as noticeably less well-written than your post.
Thanks for the third-party opinion!
I’m confused how you thought “reference class tennis” was anything but a slur on the other side’s argument. Likewise “mindkilled.” Sometimes, slurs about arguments are justified (agnostic in the instant case) - but that’s a separate issue.
Do Karnofsky’s contributions have even one of these characteristics, let alone all of them?
Empirically, 1 is obviously true; I would argue strongly for 2, though it’s a legitimate point of dispute; and I would say that there were relatively small, but still noticeable and quite forgivable, traces of 3.
Then it does seem like your AI arguments are playing reference class tennis with a reference class of “conscious beings”. For me, the force of the Tool AI argument is that there’s no reason to assume that AGI is going to behave like a sci-fi character. For example, if something like On Intelligence turns out to be true, I think the algorithms it describes will be quite generally intelligent but hardly capable of rampaging through the countryside. It would be much more like Holden’s Tool AI: you’d feed it data, it’d make predictions, you could choose to use the predictions.
(This is, naturally, the view of that school of AI implementers. Scott Brown: “People often seem to conflate having intelligence with having volition. Intelligence without volition is just information.”)
Your prospective AI plans for programmer-understandability seem very close to Starmap-AI, by which I mean
The best story I’ve read about a not so failed utopia involves this kind of accountability over the FAI. While I hate to generalize from fictional evidence it definitely seems like a necessary step to not becoming a galaxy that tiles over the aliens with happy faces instead of just freezing them in place to prevent human harm.
Explaining routes is domain specific and quite simple. When you are using domain specific techniques to find solutions to domain specific problems, you can use domain specific interfaces where human programmers and designers do all the heavy lifting to figure out the general strategy of how to communicate to the user.
But if you want a tool AGI that finds solutions in arbitrary domains, you need a cross-domain solution for communicating the tool AGI’s plans to the user. This is as much harder a problem than showing a route on a map as cross-domain AGI is a harder problem than computing the routes. Instead of the programmer figuring out how to plot road-tracing curves on a map, the programmer has to figure out how to get the computer to figure out that displaying a map with the route traced over it is a useful thing to do, in a way that generalizes to figuring out other useful things to do to communicate answers to other types of questions. And among the hard subproblems of programming computers to find useful things to do in general problems is specifying the meaning of “useful”. If that is done poorly, the tool AGI tries to trick the user into accepting plans that achieve some value-negating distortion of what we actually want, instead of giving information that helps provide a good evaluation. Doing this right requires solving the same problems required to do FAI right.
To note something on making an AIXI-based tool: instead of calculating the reward sum over the whole future (something that is simultaneously impractical, computationally expensive, and would only serve to impair performance on the task at hand), one could use a single-step reward, with 1 for the button being pressed at any time and 0 for the button never being pressed. It is still not entirely a tool, but it has a very bounded range of unintended behaviour (it is much harder to spin a terminator scenario out of it). In Hutter’s paper he outlines several not-quite-intelligences before arriving at AIXI.
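Schematically (my rendering of the suggestion, not a formula from Hutter’s paper), this replaces the nested expectimax over the whole horizon with a one-step lookahead:

$$
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} r_k \,\xi\!\left(o_k r_k \mid a_1 o_1 r_1 \ldots a_{k-1} o_{k-1} r_{k-1}\, a_k\right),
$$

whereas the full expression keeps summing and maximizing over the later $a_{k+1} o_{k+1} r_{k+1} \ldots o_m r_m$ and adds up $r_k + \ldots + r_m$. With only the next reward in view, there is much less room for elaborate long-term plans to score highly.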
[edit2: also I do not believe that even with the full reward sum a really powerful AIXI-tl would be intelligently dangerous, rather than simply clever at breaking the hardware that’s computing it. All the valid models in AIXI-tl that affect the choice of actions have to magically insert the actions being probed into some kind of internal world model. The hardware that actually makes those actions, complete with sensory apparatus, is incidental; a useless power drain; a needless fire hazard endangering the precious reward pathway]
With regards to utility functions, the utility functions in the AI sense are real-valued functions over the world model, not functions like the number of paperclips in the world. The latter kind of function, safe or unsafe, would be incredibly difficult or impossible to define using conventional methods. It would suffice, for accelerating progress, to have an algorithm that can take in an arbitrary function and find its maximum; while it would indeed seem to be “very difficult” to use that to cure cancer, it could be plugged into existing models and very quickly be used to e.g. design cellular machinery that would keep repairing DNA alterations.
Likewise, the speculative tool that can understand the phrase ‘how to cure cancer’ and the phrase ‘what is the curing time of epoxy’ would have to pick the most narrow, least objectionable interpretation of the ‘cure cancer’ phrase merely to answer something more useful than ‘cancer is not a type of epoxy or glue; it does not cure’; it seems that not treating killing everyone as a valid interpretation comes as a necessary consequence of being able to process language at all.
If the past sensory data include information about the internal workings, then there will be a striking correlation between the outputs that the workings would produce on their own (for physical reasons) and the AI’s outputs. That rules out (or drives down expected utility of acting upon) all but very crazy hypotheses about how the Cartesian interaction works. Wrecking the hardware would break that correlation, and it’s not clear what the crazy hypotheses would say about that, e.g. hypotheses that some simply specified intelligence is stage-managing the inputs, or that sometimes the AIXI-tl’s outputs matter, and other times only the physical hardware matters.
Well, you can’t include the entire internal workings in the sensory data, and it can’t model a significant portion of itself, as it has to try a big number of hypotheses on the model at each step, so I would not expect the very crazy hypotheses to be very elaborate or to have high coverage of the internals.
If I closed my eyes and did not catch a ball, the explanation is that I did not see it coming and could not catch it, but this sentence is rife with self-references of the sort that is problematic for AIXI. The correlation between closed eyes and lack of reward can be coded into some sort of magical craziness, but if I close my eyes and not my ears, and hear where the ball lands after I missed catching it, there’s a vastly simpler explanation for why I did not catch it—my hand was not in the right spot (and that works with a total absence of sensorium as well). I don’t see how AIXI-tl (with very huge constants) can value its eyesight (eyesight might have some value if there is some asymmetry in the long models, but it seems clear it would not assign the adequate, rational value to it). In my opinion there is no single unifying principle to intelligence (or none was ever found), and AIXI-tl (with very huge constants) falls way short of even a cat in many important ways.
edit: Some other thought: I am not sure that Solomonoff induction’s prior is compatible with expected utility maximization. If the expected utility imbalance between crazy models grows faster than 2^length, and I would expect it to grow faster than any computable function (if the utility is unbounded), then the actions will be determined by imbalances between crazy, ultra-long models. I would not privilege the belief that it just works without some sort of formal proof or some other very good reason to think it works.
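To put that worry in symbols (my notation, not the commenter’s): if $U_{\max}(\ell)$ denotes the largest utility swing achievable by models of length $\ell$, the length-$\ell$ slice of the expected-utility sum contributes on the order of

$$
2^{-\ell}\, U_{\max}(\ell),
$$

so whenever $U_{\max}(\ell)$ grows faster than $2^{\ell}$ these contributions do not shrink with $\ell$, and the expectation is dominated by (or fails to converge because of) ever longer, stranger models.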
Your question seems to be about how sentient beings in a Game of Life universe are supposed to define “gliders” to the AI.
1) If they know the true laws of their cellular automaton, they can make a UDT-ish AI that examines statements like “if this logical algorithm has such-and-such output, then my prior over starting configurations of the universe logically implies such-and-such total number of gliders”. (A toy sketch of this appears after this list.)
2) If they only know that their universe is some cellular automaton and have a prior over all possible automata, they can similarly say “maximize the number of smallest possible spaceships under the automaton rules” and give the AI some sensory channel wide enough to pin down the specific automaton with high probability.
3) If they only know what sensory experiences correspond to the existence of gliders, but don’t know what gliders are… I guess we have a problem because sensory experiences can be influenced by the AI :-(
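A toy rendering of 1) above (my sketch, not a real UDT agent; run_universe and count_gliders are hypothetical helpers): for each candidate output, evaluate the glider count that the prior over starting configurations implies given that the algorithm produces that output, and return the best candidate.

```python
# Toy sketch of option 1: pick the output whose logical consequence,
# averaged over a prior on starting configurations, is the most gliders.
# run_universe and count_gliders are assumed, hypothetical helpers.
def udt_like_choice(candidate_outputs, prior, run_universe, count_gliders, horizon):
    """prior: list of (starting_configuration, probability) pairs."""
    def implied_gliders(output):
        total = 0.0
        for config, p in prior:
            # "If this algorithm outputs `output`, the universe runs like this..."
            final_state = run_universe(config, agent_output=output, steps=horizon)
            total += p * count_gliders(final_state)
        return total
    return max(candidate_outputs, key=implied_gliders)
```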
Regarding #3: what happens given a directive like “Over there are a bunch of people who report sensory experiences of the kind I’m interested in. Figure out what differentially caused those experiences, and maximize the incidence of that.”?
(I’m not concerned with the specifics of my wording, which undoubtedly contains infinite loopholes; I’m asking about the general strategy of, when all I know is sensory experiences, referring to the differential causes of those experiences, whatever they may be. Which, yes, I would expect to include, in the case where there actually are no gliders and the recurring perception of gliders is the result of a glitch in my perceptual system, modifying my perceptual system to make such glitches more likely… but which I would not expect to include, in the case where my perceptual system is operating essentially the same way when it perceives gliders as when it perceives everything else, modifying my perceptual system to include such glitches (since such a glitch is not the differential cause of experiences of gliders in the first place.))
Let’s say you want the AI to maximize the amount of hydrogen, and you formulate the goal as “maximize the amount of the substance most likely referred to by such-and-such state of mind”, where “referred to” is cashed out however you like. Now imagine that some other substance is 10x cheaper to make than hydrogen. Then the AI could create a bunch of minds in the same state, just enough to re-point the “most likely” pointer to the new substance instead of hydrogen, leading to huge savings overall. Or it could do something even more subversive, my imagination is weak.
That’s what I was getting at, when I said a general problem with using sensory experiences as pointers is that the AI can influence sensory experiences.
Well, right, but my point is that “the thing which differentially caused the sensory experiences to which I refer” does not refer to the same thing as “the thing which would differentially cause similar sensory experiences in the future, after you’ve made your changes,” and it’s possible to specify the former rather than the latter.
The AI can influence sensory experiences, but it can’t retroactively influence sensory experiences. (Or, well, perhaps it can, but that’s a whole new dimension of subversive. Similarly, I suppose a sufficiently powerful optimizer could rewrite the automaton rules in case #2, so perhaps we have a similar problem there as well.)
You need to describe the sensory experience as part of the AI’s utility computation somehow. I thought it would be something like a bitstring representing a brain scan, which can refer to future experiences just as easily as past ones. Do you propose to include a timestamp? But the universe doesn’t seem to have a global clock. Or do you propose to say something like “the values of such-and such terms in the utility computation must be unaffected by the AI’s actions”? But we don’t know how to define “unaffected” mathematically...
I was thinking in terms of referring to a brain. Or, rather, a set of them. But a sufficiently detailed brainscan would work just as well, I suppose.
And, sure, the universe doesn’t have a clock, but a clock isn’t needed, simply an ordering: the AI attends to evidence about sensory experiences that occurred before the AI received the instruction.
Of course, maybe it is incapable of figuring out whether a given sensory experience occurred before it received the instruction… it’s just not smart enough. Or maybe the universe is weirder than I imagine, such that the order in which two events occur is not something the AI and I can actually agree on… which is the same case as “perhaps it can in fact retroactively influence sensory experiences” above.
I think LearnFun might be informative here. https://www.youtube.com/watch?v=xOCurBYI_gY
LearnFun watches a human play an arbitrary NES game. It is hardcoded to assume that as time progresses, the game is moving towards a “better and better” state (i.e. it assumes the player is trying to win and is at least somewhat effective at achieving their goals). The key point here is that LearnFun does not know ahead of time what the objective of the game is. It infers what the objective of the game is from watching humans play. (More technically, it observes the entire universe, where the entire universe is defined to be the entire RAM content of the NES).
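As a rough illustration of that kind of objective inference (my reconstruction of the flavor of it, not LearnFun’s actual algorithm), one could score each RAM address by how consistently its value rises while the human plays, and treat the weighted sum as the inferred utility:

```python
# Toy objective inference in the spirit described above: RAM locations that
# consistently increase during human play are treated as "progress counters".
def infer_objective_weights(ram_snapshots):
    """ram_snapshots: list of equal-length byte sequences recorded during human play."""
    n = len(ram_snapshots[0])
    weights = []
    for addr in range(n):
        series = [snap[addr] for snap in ram_snapshots]
        ups = sum(b > a for a, b in zip(series, series[1:]))
        downs = sum(b < a for a, b in zip(series, series[1:]))
        # Score each address by how consistently it goes up under expert play.
        weights.append(max(0, ups - downs) / max(1, len(series) - 1))
    return weights

def inferred_utility(ram, weights):
    # The "goodness" of a state is just the weighted sum of the watched counters.
    return sum(w * v for w, v in zip(weights, ram))
```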
I think there are some parallels here with your scenario where we don’t want to explicitly tell the AI what our utility function is. Instead, we’re pointing to a state, and we’re saying “This is a good state” (and I guess either we’d explicitly tell the AI “and this other state, it’s a bad state”, or we assume the AI can somehow infer bad states to contrast with the good states), and then we ask the AI to come up with a plan (and possibly execute the plan) that would lead to “more good” states.
So what happens? Bit of a spoiler, but sometimes the AI seems to make a pretty good inference about what utility function a human would probably have had for a given NES game, but sometimes it makes a terrible inference. It never seems to make a “perfect” inference: even in its best performance, it seems to be optimizing very strange things.
The other part of it is that even if it does have a decent inference for the utility function, it’s not always good at coming up with a plan that will optimize that utility function.
I believe AIXI is much more inspectable than you make it out to be. I think it is important to challenge your claim here because Holden appears to have trusted your expertise and thereby conceded an important part of the argument.
AIXI’s utility judgements are based on a Solomonoff prior, which is based on the computer programs which return the input data. Computer programs are not black boxes. A system implementing AIXI can easily also return a sample of typical expected future histories and the programs compressing these histories. By examining these programs, we can figure out what implicit model the AIXI system has of its world. These programs are optimized for shortness, so they are likely to be very obfuscated, but I don’t expect them to be incomprehensible (after all, they’re not optimized for incomprehensibility). Even just sampling expected histories without their compressions is likely to be very informative. In the case of AIXItl the situation is better in the sense that its output at any given time is guaranteed to be generated by just one length <l subprogram, and this subprogram comes with a proof justifying its utility judgement. It’s also worse in that there is no way to sample its expected future histories. However, I expect the proof provided would implicitly contain such information. If either the programs or the proofs cannot be understood by humans, the programmers can just reject them and look at the next best candidates.
As for “What will be its effect on _?”, this can be answered as well. I already stated that with AIXI you can sample future histories. This is because AIXI has a specific known prior it implements for its future histories, namely Solomonoff induction. This ability may seem limited because it only shows the future sensory data, but sensory data can be whatever you feed AIXI as input. If you want it to have a realistic model of the world, this includes a lot of relevant information. For example, if you feed it the entire database of Wikipedia, it can give likely future versions of Wikipedia, which already provides a lot of detail on the effects of its actions.
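To make the inspection idea concrete, here is a bounded toy (my sketch, nothing like a real Solomonoff mixture): candidate world-programs are supplied explicitly, the ones inconsistent with the observed history are dropped, and a forecast is sampled together with the name of the program that produced it, so the model behind the prediction is available for examination.

```python
# Toy of the "sample futures and inspect the compressing program" idea.
# Programs are (name, complexity_in_bits, predict) triples supplied by the user,
# where predict(history_prefix) returns the next symbol; a real system would
# enumerate machine programs instead of taking a hand-made list.
import random

def consistent_posterior(programs, history):
    scored = []
    for name, bits, predict in programs:
        # Keep only programs that reproduce the observed history exactly.
        fits = all(predict(history[:i]) == history[i] for i in range(len(history)))
        scored.append((name, predict, 2.0 ** -bits if fits else 0.0))
    total = sum(w for _, _, w in scored)
    return [(name, predict, w / total) for name, predict, w in scored if total > 0]

def sample_future_with_explanation(programs, history, steps):
    post = consistent_posterior(programs, history)
    name, predict, _ = random.choices(post, weights=[w for _, _, w in post])[0]
    future = list(history)
    for _ in range(steps):
        future.append(predict(future))
    # Return the forecast together with the program that generated it,
    # so a human can inspect the model behind the prediction.
    return name, future[len(history):]
```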
Can you be a bit more specific in your interpretation of AIXI here?
Here are my assumptions; let me know where you have different assumptions:
Traditional-AIXI is assumed to exist in the same universe as the human who wants to use AIXI to solve some problem.
Traditional-AIXI has a fixed input channel (e.g. it’s connected to a webcam, and/or it receives keyboard signals from the human, etc.)
Traditional-AIXI has a fixed output channel (e.g. it’s connected to a LCD monitor, or it can control a robot servo arm, or whatever).
The human has somehow pre-provided Traditional-AIXI with some utility function.
Traditional-AIXI operates in discrete time steps.
In the first timestep that elapses after Traditional-AIXI is activated, Traditional-AIXI examines the input it receives. It considers all possible programs that take a pair (S, A) and emit an output P, where S is the prior state, A is an action to take, and P is the predicted output of taking action A in state S. Then it discards all programs that would not have produced the input it received, regardless of what S or A they were given. Then it weighs the remaining programs according to their Kolmogorov complexity. This is basically the Solomonoff induction step.
Now Traditional-AIXI has to make a decision about an output to generate. It considers all possible outputs it could produce, and feeds each of them to the programs under consideration, to produce a predicted next time step. Traditional-AIXI then calculates the expected utility of each output (using its pre-programmed utility function), picks the one with the highest utility, and emits that output. Note that it has no idea how any of its outputs would affect the universe, so this is essentially a uniformly random choice.
In the next timestep, Traditional-AIXI reads its inputs again, but this time taking into account what output it generated in the previous step. It can now start to model correlation, and eventually causation, between its inputs and outputs. It has a previous state S and it knows what action A it took in its last step. It can further discard more programs, and narrow down the possible models that describe the universe it finds itself in.
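Here is that loop in sketch form (my reading of the description above, not Hutter’s actual equations); `candidates` holds (complexity_in_bits, program) pairs where program(state, action) returns a predicted observation:

```python
# Sketch of the Traditional-AIXI loop as described above: weight surviving
# world-programs by a complexity proxy, pick the action with the highest
# expected utility of the predicted observation, then prune on what is observed.
def choose_action(candidates, state, actions, utility):
    def expected_utility(action):
        total_w = total_u = 0.0
        for bits, program in candidates:
            w = 2.0 ** -bits                     # shorter programs weigh more
            total_w += w
            total_u += w * utility(program(state, action))
        return total_u / total_w if total_w else 0.0
    return max(actions, key=expected_utility)

def prune(candidates, state, action, observed):
    # Discard every program whose prediction disagrees with what actually happened.
    return [(b, p) for b, p in candidates if p(state, action) == observed]
```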
How does Tool-AIXI work in contrast to this? Holden seems to want to avoid having any utility function pre-defined at all. However, presumably Tool-AIXI still receives inputs and still produces outputs (probably Holden intends not to allow Tool-AIXI to control a robot servo arm, but he might intend for Tool-AIXI to be able to control an LCD monitor, or at the very least, produce some sort of text file as output).
Does Tool-AIXI proceed in discrete time steps, gathering input? Or do we prevent Tool-AIXI from running until a user is ready to submit a curated input to Tool-AIXI? If the latter, how quickly do we expect Tool-AIXI to be able to formulate a reasonable model of our universe?
How does Tool-AIXI choose what output to produce, if there’s no utility function?
If we type in “Tool-AIXI, please give me a cure for cancer” onto a keyboard attached to Tool-AIXI and submit that as an input, do we think that a model that encodes ASCII, the English language, bio-organisms, etc. has a lower Kolmogorov complexity than a model that says “we live in a universe where we receive exactly this hardcoded stream of bytes”?
Does Tool-AIXI model the output it produces (whether that be pixels on a screen, or bytes written to a file) as an action, or does it somehow prevent itself from modelling its output as if it were an action that had some effect on the universe that it exists in? If the former, then isn’t this just an agenty Oracle AI? If the latter, then what kind of programs does it generate for its model (surely not programs that take (S, A) pairs as inputs, or else what would it use for A when evaluating its plans and predicting the future)?
Regarding the question of formalizing an optimization agent with goals defined in terms of the external universe rather than sensory input: it is possible to attack the problem by generalizing the framework I described in http://lesswrong.com/lw/gex/save_the_princess_a_tale_of_aixi_and_utility/8ekk for solving the duality problem. Specifically, consider an “initial guess” stochastic model of the universe, including the machine on which our agent is running. I call it the “innate model” M. Now consider a stochastic process with the same degrees of freedom as M but governed by the Solomonoff semi-measure. This is the “unbiased model” S. The two can be combined by assigning transition probabilities proportional to the product of the probabilities assigned by M and S. If M is sufficiently “insecure” (in particular, it doesn’t assign 0 to any transition probability), then the resulting model S’, considered as a prior, allows arriving at any computable model after sufficient learning.

Fix a utility function on the space of histories of our model (note that the histories include both intrinsic and extrinsic degrees of freedom). The intelligence I(A) of any given agent A (= a program written into M in the initial state) can now be defined to be the expected utility of A in S’. We can now consider optimal or near-optimal agents in this sense (as opposed to the Legg-Hutter formalism for measuring intelligence, there is no guarantee there is a maximum rather than a supremum, unless of course we limit the length of the programs we consider). This is a generalization of the Legg-Hutter formalism which accounts for limited computational resources, solves the duality problem (such agents take into account possible wireheading) and also provides a solution to the ontology problem. It is essentially a special case of the Orseau-Ring framework; however, it is much more specific than Orseau-Ring, where the prior is left completely unspecified. You can think of it as a recipe for constructing Orseau-Ring priors for realistic problems.
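In symbols (my paraphrase of the construction above), the combined prior over transitions $x \to y$ and the proposed intelligence measure are roughly

$$
S'(x \to y) \;\propto\; M(x \to y)\, S(x \to y), \qquad I(A) \;=\; \mathbb{E}_{S'}\!\bigl[\,U(h) \;\big|\; \text{the initial state implements } A\,\bigr],
$$

where $h$ ranges over histories of the model and $U$ is the fixed utility function on histories.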
I realized that, although the idea of a deformed Solomonoff semi-measure is sound, the multiplication prescription I suggested above is rather ad hoc. The following construction is a much more natural and better-justified way of combining M and S.
Fix a time parameter t0. Consider a stochastic process S(-t0), governed by the Solomonoff semi-measure, that begins at time t = -t0, where t = 0 is the time our agent A “forms”. Consider another stochastic process M(-t0) that begins from the initial conditions generated by S(-t0) (I’m assuming M only carries information about dynamics and not about initial conditions). Define S’ to be the conditional probability distribution obtained from S by imposing two conditions:
a. S and M coincide on the time interval [-t0, 0]
b. The universe contains A at time t=0
Thus t0 reflects the extent to which we are certain about M: it’s like telling the agent that we have been observing behavior consistent with M for a time period t0.
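Here is a toy rejection-sampling rendering of that conditioning, purely as a sketch: all three callables are hypothetical placeholders supplied from outside, since neither the Solomonoff semi-measure nor a “contains A” test is computable in general.

```python
# Rejection-sampling sketch of S': sample Solomonoff-governed histories and
# keep only those satisfying conditions (a) and (b) above.

from typing import Callable, List

History = List[int]  # a toy trajectory: one integer-coded universe state per time step

def sample_s_prime(sample_solomonoff_history: Callable[[int], History],
                   agrees_with_m: Callable[[History, int], bool],
                   contains_agent_a: Callable[[History, int], bool],
                   t0: int,
                   horizon: int,
                   max_tries: int = 100_000) -> History:
    """Draw a history from S': the Solomonoff-governed process S(-t0),
    conditioned on (a) coinciding with M(-t0) on the interval [-t0, 0]
    and (b) the universe containing the agent A at t = 0."""
    for _ in range(max_tries):
        # indices 0..t0 of the sampled history correspond to times -t0..0
        h = sample_solomonoff_history(t0 + horizon + 1)
        if agrees_with_m(h, t0) and contains_agent_a(h, t0):
            return h
    raise RuntimeError("no sample accepted; the conditions are too restrictive")

# The expected utility of A under S' is then a Monte Carlo average of the
# chosen utility function over such accepted histories (convergence permitting).
```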
There is an interesting side effect of this framework, namely that A can exert “acausal” influence on the utility by affecting the initial conditions of the universe (i.e., it selects for universes in which A is likely to exist). This might seem like an artifact of the model, but I think it may be a legitimate effect: if we believe in one-boxing in Newcomb’s paradox, why shouldn’t we accept such acausal effects?
For models with a concept of space and a finite information velocity, like cellular automata, it might make sense to limit the domain of “observed M” in space as well as time, to A’s past “light cone”.
I cannot even slightly visualize what you mean by this. Please explain how it would be used to construct an AI that made glider-oids in a Life-like cellular automaton universe.
Is the AI hardware separate from the cellular automaton, or is it a part of it? Assuming the latter, we need to decide which degrees of freedom of the cellular automaton form the program of our AI. For example, we can select a finite set of cells and allow setting their values arbitrarily. Then we need to specify our utility function: it could be a weighted sum of the number of gliders at different moments of time, or a maximum, or whatever, though we need to make sure the expectation values converge. The “AI” is then simply the assignment of values to the selected cells in the initial state which yields the maximal expected utility.

Note that if we’re sure about the law governing the cellular automaton, there’s no reason to use the Solomonoff semi-measure at all (except perhaps as a prior for the initial state outside the selected cells). However, if our idea of how the cellular automaton works is only an “initial guess”, then the expectation value is evaluated w.r.t. a stochastic process governed by a “deformed Solomonoff” semi-measure in which transitions illegal w.r.t. the assumed cellular-automaton law are suppressed by some factor 0 < p < 1 relative to “pure” Solomonoff inference.

Note also that, contrary to the case of AIXI, I can only describe the measure of intelligence; I cannot constructively describe the agent maximizing this measure. This is unsurprising, since building a real (bounded computing resources) AI is a very difficult problem.
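As a tiny concrete rendering of this recipe in the “we’re sure about the law” case: the sketch below brute-forces the assignment of values to a handful of selected cells in a small Life grid. Counting gliders properly is fiddly, so the placeholder utility just counts live cells after a fixed number of steps; a real glider detector would be substituted to match the glider example above.

```python
# Brute-force toy: the "agent" is an assignment of values to selected cells
# in the initial state of a small Conway's-Life grid, scored by a utility
# computed from the resulting history under known dynamics.

import itertools

SIZE, STEPS = 8, 10
FREE_CELLS = [(3, 3), (3, 4), (3, 5), (4, 3), (4, 5), (5, 4)]  # the "program" cells

def step(grid):
    """One Life step on a SIZE x SIZE board; grid is the set of live cells."""
    nxt = set()
    for r in range(SIZE):
        for c in range(SIZE):
            live = sum((r + dr, c + dc) in grid
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            if (r, c) in grid and live in (2, 3):
                nxt.add((r, c))
            elif (r, c) not in grid and live == 3:
                nxt.add((r, c))
    return nxt

def utility(initial):
    """Placeholder utility: live cells after STEPS steps (stand-in for glider count)."""
    grid = set(initial)
    for _ in range(STEPS):
        grid = step(grid)
    return len(grid)

best = max((set(itertools.compress(FREE_CELLS, bits))
            for bits in itertools.product((0, 1), repeat=len(FREE_CELLS))),
           key=utility)
print("best assignment of the selected cells:", sorted(best), "utility:", utility(best))
```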
It gets more interesting if the computing power is not unlimited but strictly smaller than that of the universe in which the agent is living (excluding the ridiculous “run a sim since the big bang and find yourself in it” non-solution). Also, this is not only an open problem for FAI, but also an open problem for dangerous uFAI.
edit: actually, I would search for general impossibility proofs at that point. Also, keep in mind that having “all possible models, weighted” is the ideal Bayesian approach, so it may be the case that simply striving for the most correct way of acting under uncertainty makes it impossible to care about any real-world goals.
Also, it is rather interesting how one sample from the “unethical AI design space” (AIXI) yielded something which, most likely, is fundamentally incapable of caring about a real-world goal, yet is still an incredibly powerful optimization process if given enough computing power (edit: i.e. AIXI doesn’t care whether you live or die, but in a way quite different from a paperclip maximizer). Insofar as one previously had an argument that such a design is incredibly unlikely, one ought to update and severely lower the probability that the methods employed to generate that argument are correct.
The ontology problem has nothing to do with computing power, except that limited computing power means you use fewer ontologies. The number might still be large, and for a smart AI not fixable in advance; we didn’t know about quantum fields until relatively recently, and new approximations and models are being invented all the time. If your last paragraph isn’t talking about evolution, I don’t know what it’s talking about.
Downvoting the whole thing as probable nonsense, though my judgment here is influenced by the numerous downvoted troll comments this poster has made previously.
Limited computing power means that the ontologies have to be processed approximately (you can’t simulate everything at the level of quarks all the way from the big bang), likely in some sort of multi-level model which can go down to the level of quarks but also has to be able to go up to the level of paperclips, i.e. it would have to establish relations between ontologies at different levels of detail. It is not inconceivable that, e.g., Newtonian mechanics would be part of any multi-level ontology, no matter what it has at the microscopic level. Note that while I am very skeptical about AI risk, this is an argument slightly in favour of the risk.
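As a minimal illustration of what a relation between two levels of detail could look like (the block-majority map here is my own stand-in, not anything proposed in the thread): a coarse-graining function from a fine-grained lattice state to a lower-resolution one, with the expensive fine dynamics consulted only where the coarse description breaks down.

```python
# Coarse-graining sketch: map a fine-grained 2D lattice state to a coarser
# description by majority vote within each block x block region.
# Assumes a square lattice whose side length is divisible by `block`.

from typing import List

def coarse_grain(fine: List[List[int]], block: int) -> List[List[int]]:
    n = len(fine)
    coarse = []
    for r in range(0, n, block):
        row = []
        for c in range(0, n, block):
            live = sum(fine[r + dr][c + dc]
                       for dr in range(block) for dc in range(block))
            row.append(1 if 2 * live >= block * block else 0)  # majority vote
        coarse.append(row)
    return coarse

# A multi-level model would run cheap dynamics on the coarse description and
# only drop back to the fine level where the coarse approximation fails.
```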