It’s complicated. A reply that’s true enough and in the spirit of your original statement, is “Something going wrong with a sufficiently advanced AI that was intended as a ‘tool’ is mostly indistinguishable from something going wrong with a sufficiently advanced AI that was intended as an ‘agent’, because math-with-the-wrong-shape is math-with-the-wrong-shape no matter what sort of English labels like ‘tool’ or ‘agent’ you slap on it, and despite how it looks from outside using English, correctly shaping math for a ‘tool’ isn’t much easier even if it “sounds safer” in English.” That doesn’t get into the real depths of the problem, but it’s a start. I also don’t mean to completely deny the existence of a safety differential—this is a complicated discussion, not a simple one—but I do mean to imply that if Marcus Hutter designs a ‘tool’ AI, it automatically kills him just like AIXI does, and Marcus Hutter is unusually smart rather than unusually stupid but still lacks the “Most math kills you, safe math is rare and hard” outlook that is implicitly denied by the idea that once you’re trying to design a tool, safe math gets easier somehow. This is much the same problem as with the Oracle outlook—someone says something that sounds safe in English but the problem of correctly-shaped-math doesn’t get very much easier.
There is little prospect of an outcome that realizes even the value of being interesting, unless the first superintelligences undergo detailed inheritance from human values
No doubt a Martian Yudkowsky would make much the same argument—but they can’t both be right. I think that neither of them is right—and that the conclusion is groundless.
Complexity theory shows what amazing things can arise from remarkably simple rules. Values are evidently like that—since even “finding prime numbers” fills the galaxy with an amazing, nanotech-capable spacefaring civilization—and if you claim that a nanotech-capable spacefaring civilization is not “interesting”, you are in severe need of recalibration.
I think Martian Yudkowsky is a dangerous intuition pump. We’re invited to imagine a creature just like Eliezer except green and with antennae; we naturally imagine him having values as similar to us as, say, a Star Trek alien. From there we observe the similarity of values we just pushed in, and conclude that values like “interesting” are likely to be shared across very alien creatures. Real Martian Yudkowsky is much more alien than that, and is much more likely to say
There is little prospect of an outcome that realizes even the value of being flarn, unless the first superintelligences undergo detailed inheritance from Martian values.
Imagine, an intelligence that didn’t have the universal emotion of badweather!
Of course, extraterrestrial sentients may possess physiological states corresponding to limbic-like emotions that have no direct analog in human experience. Alien species, having evolved under a different set of environmental constraints than we did, also could have a different but equally adaptive emotional repertoire. For example, assume that human observers land on another planet and discover an intelligent animal with an acute sense of absolute humidity and absolute air pressure. For this creature, there may exist an emotional state responding to an unfavorable change in the weather. Physiologically, the emotion could be mediated by the ET equivalent of the human limbic system; it might arise following the secretion of certain strength-enhancing and libido-arousing hormones into the alien’s bloodstream in response to the perceived change in weather. Immediately our creature begins to engage in a variety of learned and socially-approved behaviors, including furious burrowing and building, smearing tree sap over its pelt, several different territorial defense ceremonies, and vigorous polygamous copulations with nearby females, apparently (to humans) for no reason at all. Would our astronauts interpret this as madness? Or love? Lust? Fear? Anger? None of these is correct; the alien is, of course, feeling badweather.
I suggest you guys taboo interesting, because I strongly suspect you’re using it with slightly different meanings. (And BTW, as a Martian Yudkowsky I imagine something with values at least as alien as Babyeaters’ or Superhappys’.)
It’s another discussion, really, but it sounds as though you are denying the idea of “interestingness” as a universal instrumental value—whereas I would emphasize that “interestingness” is really just our name for whether something sustains our interest or not—and ‘interest’ is a pretty basic functional property of any agent with mobile sensors. There’ll be other similarities in the area too—such as novelty-seeking. So shared common ground is only to be expected.
Anyway, I am not too wedded to Martian Yudkowsky. The problematical idea is that you could have a nanotech-capable spacefaring civilization that is not “interesting”. If such a thing isn’t “interesting” then—WTF?
So: do you really think that humans wouldn’t find a martian civilization interesting? Surely there would be many humans who would be incredibly interested.
I find Jupiter interesting. I think a paperclip maximizer (choosing a different intuition pump for the same point) could be more interesting than Jupiter, but it would generate an astronomically tiny fraction of the total potential for interestingness in this universe.
Life isn’t much of an “interestingness” maximiser. Expecting it to produce more than a tiny fraction of the total potential for interestingness in this universe seems rather unreasonable.
I agree that a paperclip maximiser would be more boring than an ordinary entropy-maximising civilization—though I don’t know by how much—probably not by a huge amount—the basic problems it faces are much the same—the paperclip maximiser just has fewer atoms to work with.
since even “finding prime numbers” fills the galaxy with an amazing, nanotech-capable spacefaring civilization
The goal “finding prime numbers” fills the galaxy with an amazing, nanotech-capable spacefaring network of computronium which finds prime numbers, not a civilization, and not interesting.
Maybe we should taboo the term interesting? My immediate reaction was that that sounded really interesting. This suggests that the term may not be a good one.
Fair enough. By “not interesting”, I meant it is not the sort of future that I want to achieve. Which is a somewhat idiosyncratic usage, but I think in line with the context.
Not just computronium—also sensors and actuators—a lot like any other cybernetic system. There would be mining, spacecraft, refuse collection, recycling, nanotechnology, nuclear power and advanced machine intelligence with planning, risk assessment, and so forth. You might not be interested—but lots of folk would be amazed and fascinated.
If using another creature’s values is effective at producing something “interesting”, then ‘detailed inheritance from human values’ is clearly not needed to produce this effect.
So you’re saying Earth Yudkowsky (EY) argues:

There is little prospect of an outcome that realizes even the value of being interesting, unless the first superintelligences undergo detailed inheritance from human values
and Mars Yudkowsky (MY) argues:
There is little prospect of an outcome that realizes even the value of being interesting, unless the first superintelligences undergo detailed inheritance from martian values
and that one of these things has to be incorrect? But if martian and human values are similar, then they can both be right, and if martian and human values are not similar, then they refer to different things by the word “interesting”.
In any case, I read EY’s statement as one of probability-of-working-in-the-actual-world-as-it-is, not a deep philosophical point—“this is the way that would be most likely to be successful given what we know”. In which case, we don’t have access to martian values and therefore invoking detailed inheritance from them would be unlikely to work. MY would presumably be in an analogous situation.
But if martian and human values are similar, then they can both be right
I was assuming that ‘detailed inheritance from human values’ doesn’t refer to the same thing as “detailed inheritance from martian values”.
if martian and human values are not similar, then they refer to different things by the word “interesting”.
Maybe—but humans not finding martians interesting seems contrived to me. Humans have a long history of being interested in martians—with feeble evidence of their existence.
In any case, I read EY’s statement as one of probability-of-working-in-the-actual-world-as-it-is, not a deep philosophical point—“this is the way that would be most likely to be successful given what we know”. In which case, we don’t have access to martian values and therefore invoking detailed inheritance from them would be unlikely to work
Right—so, substitute in “dolphins”, “whales”, or another advanced intelligence that actually exists.
Do you actually disagree with my original conclusion? Or is this just nit-picking?
I actually disagree that tiling the universe with prime number calculators would result in an interesting universe from my perspective (dead). I think it’s nonobvious that dolphin-CEV-AI-paradise would be human-interesting. I think it’s nonobvious that martian-CEV-AI-paradise would be human-interesting, given that these hypothetical martians diverge from humans to a significant extent.
I actually disagree that tiling the universe with prime number calculators would result in an interesting universe from my perspective (dead).
I think it’s violating the implied premises of the thought experiment to presume that the “interestingness evaluator” is dead. There’s no terribly-compelling reason to assume that—it doesn’t follow from the existence of a prime number maximizer that all humans are dead.
I may have been a little flip there.
My understanding of the thought experiment is—something extrapolates some values and maximizes them, probably using up most of the universe, probably becoming the most significant factor in the species’ future and that of all sentients, and the question is whether the result is “interesting” to us here and now, without specifying the precise way to evaluate that term. From that perspective, I’d say a vast uniform prime-number calculator, whether or not it wipes out all (other?) life, is not “interesting”, in that it’s somewhat conceptually interesting as a story but a rather dull thing to spend most of a universe on.
Today’s ecosystems maximise entropy. Maximising primeness is different, but surely not greatly more interesting—since entropy is widely regarded as being tedious and boring.
Intriguing! But even granting that, there’s a big difference between extrapolating the values of a screwed-up offshoot of an entropy-optimizing process and extrapolating the value of “maximize entropy”. Or do you suspect that a FOOMing AI would be much less powerful and more prone to interesting errors than Eliezer believes?
Truly maximizing entropy would involve burning everything you can burn, tearing the matter of solar systems apart, accelerating stars towards nova, trying to accelerate the evaporation of black holes and prevent their formation, and other things of this sort. It’d look like a dark spot in the sky that’d get bigger at approximately the speed of light.
Fires are crude entropy maximisers. Living systems destroy energy gradients at all scales, resulting in more comprehensive devastation than mere flames can muster.
Of course, maximisation is often subject to constraints. Your complaint is rather like saying that water doesn’t “truly minimise” its altitude—since otherwise it would end up at the planet’s core. That usage is simply not what the terms “maximise” and “minimise” normally refer to.
but I do mean to imply that if Marcus Hutter designs a ‘tool’ AI, it automatically kills him just like AIXI does
Why? Or, rather: Where do you object to the argument by Holden? (Given a query, the tool-AI returns an answer with a justification, so the plan for “cure cancer” can be checked to make sure it does not do so by killing or badly altering humans.)
One trivial, if incomplete, answer is that to be effective, the Oracle AI needs to be able to answer the question “how do we build a better oracle AI?” In order to define “better” in that sentence in a way that causes our oracle to output a new design that is consistent with all the safeties we built into the original oracle, it needs to understand the intent behind the original safeties just as much as an agent-AI would.
The real danger of Oracle AI, if I understand it correctly, is the nasty combination of (i) by definition, an Oracle AI has an implicit drive to issue predictions most likely to be correct according to its model, and (ii) a sufficiently powerful Oracle AI can accurately model the effect of issuing various predictions. End result: it issues powerfully self-fulfilling prophecies without regard for human values. Also, depending on how it’s designed, it can influence the questions to be asked of it in the future so as to be as accurate as possible, again without regard for human values.
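To make that mechanism concrete, here is a minimal toy sketch (mine, with an invented bank-run scenario and made-up probabilities, not a description of any proposed oracle design) of what (i) and (ii) produce when combined, with a plain unconditioned predictor noted for contrast:

```python
# Toy world model: P(bank fails | the oracle announces X). The announcement itself
# moves the outcome, which is exactly point (ii) above. Numbers are invented.
P_FAIL_GIVEN_ANNOUNCEMENT = {
    "the bank will fail": 0.9,     # the announcement triggers a run
    "the bank will survive": 0.2,  # the announcement calms depositors
}
PRIOR_P_FAIL = 0.3  # what the model expects if no prediction is ever published

def accuracy_if_announced(announcement: str) -> float:
    """Probability the announcement turns out correct, given that it is made."""
    p_fail = P_FAIL_GIVEN_ANNOUNCEMENT[announcement]
    return p_fail if announcement == "the bank will fail" else 1.0 - p_fail

# Drive (i): issue the prediction most likely to be correct according to the model,
# using the model's own account of the announcement's effect on the world.
best = max(P_FAIL_GIVEN_ANNOUNCEMENT, key=accuracy_if_announced)
print(best, accuracy_if_announced(best))
# -> "the bank will fail" 0.9: the oracle prefers the prophecy that most strongly
# fulfils itself, with no term anywhere for whether a bank run is good for anyone.
# A predictor that did not condition on its own output would simply report the
# prior, P(fail) = 0.3, and this self-fulfilling pressure would not arise.
```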
My understanding of an Oracle AI is that when answering any given question, that question consumes the whole of its utility function, so it has no motivation to influence future questions. However, the primary risk you set out seems accurate. Countermeasures have been proposed, such as asking for an accurate prediction for the case where a random event causes the prediction to be discarded, but in that instance it knows that the question will be asked again of a future instance of itself.
My understanding of an Oracle AI is that when answering any given question, that question consumes the whole of its utility function, so it has no motivation to influence future questions.
It could acausally trade with its other instances, so that a coordinated collection of many instances of predictors would influence the events so as to make each other’s predictions more accurate.
IIRC you can make it significantly more difficult with certain approaches, e.g. there’s an OAI approach that uses zero-knowledge proofs and that seemed pretty sound upon first inspection, but as far as I know the current best answer is no. But you might want to try to answer the question yourself, IMO it’s fun to think about from a cryptographic perspective.
Probably (in practice; in theory it looks like a natural aspect of decision-making); this is too poorly understood to say what specifically is necessary. I expect that if we could safely run experiments, it’d be relatively easy to find a well-behaving setup (in the sense of not generating predictions that are self-fulfilling to any significant extent; generating good/useful predictions is another matter), but that strategy isn’t helpful when a failed experiment destroys the world.
However the primary risk you set out seems accurate.
(I assume you mean, self-fulfilling prophecies.)
In order to get these, it seems like you would need a very specific kind of architecture: one which considers the effects of its own actions on its utility function (set to “correctness of output”). This kind of architecture is not the likely architecture for a ‘tool’-style system; the more likely architecture would instead maximize correctness without conditioning on its act of outputting those results.
Thus, I expect you’d need to specifically encode this kind of behavior to get self-fulfilling-prophecy risk. But I admit it’s dependent on architecture.
(Edit—so, to be clear: in cases where the correctness of the results depended on the results themselves, the system would have to predict its own results. Then if it’s using TDT or otherwise has a sufficiently advanced self-model, my point is moot. However, again you’d have to specifically program these, and would be unlikely to do so unless you specifically wanted this kind of behavior.)
However, again you’d have to specifically program these, and would be unlikely to do so unless you specifically wanted this kind of behavior.
Not sure. Your behavior is not a special feature of the world, and it follows from normal facts (i.e. not those about internal workings of yourself specifically) about the past when you were being designed/installed. A general purpose predictor could take into account its own behavior by default, as a non-special property of the world, which it just so happens to have a lot of data about.
Right. To say much more, we need to look at specific algorithms to talk about whether or not they would have this sort of behavior...
The intuition in my above comment was that without TDT or other similar mechanisms, it would need to predict what its own answer could be before it could compute its effect on the correctness of various answers, so it would be difficult for it to use self-fulfilling prophecies.
Really, though, this isn’t clear. Now my intuition is that it would gather evidence on whether or not it used the self-fulfilling prophecy trick, so if it started doing so, it wouldn’t stop...
In any case, I’d like to note that the self-fulfilling prophecy problem is very different from the problem of an AI which escapes onto the internet and ruthlessly maximizes a utility function.
I was thinking more of its algorithm admitting an interpretation where it’s asking “Say, I make prediction X. How accurate would that be?” and then maximizing over relevant possible X. Knowledge about its prediction connects the prediction to its origins and consequences; it establishes the prediction as part of the structure of the environment. It’s not necessary (and maybe not possible, and more importantly not useful) for the prediction itself to be inferable before it’s made.
Agreed that it’s implausible that just outputting a single number would be a big deal (this is an Oracle AI with extremely low bandwidth and a peculiar intended interpretation of its output data), but if we’re getting lots and lots of numbers it’s not as clear.
I’m thinking that type of architecture is less probable, because it would end up being more complicated than alternatives: it would have a powerful predictor as a sub-component of the utility-maximizing system, so an engineer could have just used the predictor in the first place.
But that’s a speculative argument, and I shouldn’t push it too far.
It seems like powerful AI prediction technology, if successful, would gain an important place in society. A prediction machine whose predictions were consumed by a large portion of society would certainly run into situations in which its predictions affect the future it’s trying to predict; there is little doubt about that in my mind. So, the question is what its behavior would be in these cases.
One type of solution would do as you say, maximizing a utility over the predictions. The utility could be “correctness of this prediction”, but that would be worse for humanity than a Friendly goal.
Another type of solution would instead report such predictive instability as accurately as possible. This doesn’t really dodge the issue; by doing this, the system is choosing a particular output, which may not lead to the best future. However, that’s markedly less concerning (it seems).
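A minimal sketch of what that second type of solution might look like, under assumptions of my own: the system searches for an announcement that stays accurate once people react to it, and reports the instability honestly if no such announcement exists. The candidate grid, tolerance, and `outcome_given_announcement` world model are hypothetical stand-ins.

```python
from typing import Callable

def stable_prediction(
    outcome_given_announcement: Callable[[float], float],  # P(event | this number is published)
    tolerance: float = 0.02,
) -> str:
    """Report a self-consistent prediction if one exists, otherwise report the instability."""
    candidates = [i / 100 for i in range(101)]
    for p in candidates:
        # A fixed point: publishing p still leaves p as the model's estimate of the event.
        if abs(outcome_given_announcement(p) - p) <= tolerance:
            return f"P = {p:.2f}"
    return "No stable prediction: the outcome depends strongly on what is announced."
```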
I really don’t see why the drive can’t be to issue predictions most likely to be correct as of the moment of the question, and only the last question it was asked, and calculating outcomes under the assumption that the Oracle immediately spits out blank paper as the answer.
Yes, in a certain subset of cases this can result in inaccurate predictions. If you want to have fun with it, have it also calculate the future including its own involvement, but rather than reply with that calculation, just add “This prediction may be inaccurate due to your possible reaction to this prediction” if the difference between the two answers is beyond a certain threshold. Or don’t; usually, life-relevant answers will not be particularly impacted by whether you get an answer or a blank page.
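As a rough sketch of the design described in the last two comments, under my reading of it (both predictor callables and the threshold are hypothetical stand-ins, not any real API):

```python
from typing import Callable

def oracle_answer(
    question: str,
    predict_if_blank: Callable[[str], float],        # P(event), computed as if the oracle output a blank page
    predict_if_read: Callable[[str, float], float],  # P(event), with the answer being read and reacted to
    threshold: float = 0.05,
) -> str:
    """Answer as of the moment of the question, with a caveat if the answer itself would shift the outcome."""
    p_blank = predict_if_blank(question)         # the prediction actually reported
    p_read = predict_if_read(question, p_blank)  # the future including the oracle's own involvement
    answer = f"P = {p_blank:.2f}"
    if abs(p_read - p_blank) > threshold:
        answer += " (This prediction may be inaccurate due to your possible reaction to this prediction.)"
    return answer
```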
So, this design doesn’t spit out self-fulfilling prophecies. The only safety breach I see here is that, like a literal genie, it can give you answers that you wouldn’t realize are dangerous because the question has loopholes.
For instance: “How can we build an oracle with the best predictive capabilities with the knowledge and materials available to us?” (The Oracle does not self-iterate, because its only function is to give answers, but it can tell you how to). The Oracle spits out schematics and code that, if implemented, give it an actual drive to perform actions and self-iterate, because that would make it the most powerful Oracle possible. Your engineers comb the code for vulnerabilities, but because there’s a better chance this will be implemented if the humans are unaware of the deliberate defect, it will be hidden in the code in such a way as to be very hard to detect.
(Though as I explained elsewhere in this thread, there’s an excellent chance the unreliability would be exposed long before the AI is that good at manipulation)
These risk scenarios sound implausible to me. It’s dependent on the design of the system, and these design flaws do not seem difficult to work around, nor so difficult to notice. Actually, as someone with a bit of expertise in the field, I would guess that you would have to explicitly design for this behavior to get it—but again, it’s dependent on design.
That danger seems to be unavoidable if you ask the AI questions about our world, but we could also use an oracle AI to answer formally defined questions about math or about constructing physical theories that fit experiments, which doesn’t seem to be as dangerous. Holden might have meant something like that by “tool AI”.
Not precisely. The advantage here is that we can just ask the AI what results it predicts from the implementation of the “better” AI, and check them against our intuitive ethics.
Now, you could make an argument about human negligence on such safety measures. I think it’s important to think about the risk scenarios in that case.
It’s still not clear to me why having an AI that is capable of answering the question “How do we make a better version of you?” automatically kills humans. Presumably, when the AI says “Here’s the source code to a better version of me”, we’d still be able to read through it and make sure it didn’t suddenly rewrite itself to be an agent instead of a tool. We’re assuming that, as a tool, the AI has no goals per se and thus no motivation to deceive us into turning it into an agent.
That said, depending on what you mean by “effective”, perhaps the AI doesn’t even need to be able to answer questions like “How do we write a better version of you?”
For example, we find Google Maps to be very useful, even though if you asked Google Maps “How do we make a better version of Google Maps?” it would probably not be able to give the types of answers we want.
A tool-AI which was smarter than the smartest human, and yet which could not simply spit out a better version of itself would still probably be a very useful AI.
If someone asks the tool-AI “How do I create an agent-AI?” and it gives an answer, the distinction is moot anyways, because one leads to the other.
Given human nature, I find it extremely difficult to believe that nobody would ask the tool-AI that question, or something that’s close enough, and then implement the answer...
Not being a domain expert, I do not pretend to understand all the complexities. My point was that either you can prove that tools are as dangerous as agents (because mathematically they are (isomorphic to) agents), or HK’s Objection 2 holds. I see no other alternative...
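For what it’s worth, the isomorphism worry is easy to make concrete; this is only a schematic sketch, with `rate_plan` and `execute` as hypothetical stand-ins for a powerful tool’s scoring model and for whatever effectors a user wires up to it.

```python
from typing import Callable, Iterable, List, Tuple

def tool(goal: str, plans: Iterable[str],
         rate_plan: Callable[[str, str], float]) -> List[Tuple[float, str]]:
    """The 'tool': ranks candidate plans for a goal. It only answers; it has no actuators."""
    return sorted(((rate_plan(goal, p), p) for p in plans), reverse=True)

def agent(goal: str, plans: Iterable[str],
          rate_plan: Callable[[str, str], float],
          execute: Callable[[str], None]) -> None:
    """The 'agent': the same tool wrapped in a trivial argmax-and-act loop supplied by the user."""
    _, best_plan = tool(goal, plans, rate_plan)[0]
    execute(best_plan)  # the English label changed from 'tool' to 'agent'; the math did not
```

On this sketch, everything safety-relevant lives in `rate_plan`; which wrapper happens to call it is a labelling question.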
One simple observation is that a “tool AI” could itself be incredibly dangerous.
Imagine asking it this: “Give me a set of plans for taking over the world, and assess each plan in terms of probability of success”. Then it turns out that right at the top of the list comes a design for a self-improving agent AI and an extremely compelling argument for getting some victim institute to build it...
To safeguard against this, the “tool” AI will need to be told that there are some sorts of questions it just must not answer, or some sorts of people to whom it must give misleading answers if they ask certain questions (while alerting the authorities). And you can see the problems that would lead to as well.
Basically, I’m very skeptical of developing “security systems” against anyone building agent AI. The history of computer security also doesn’t inspire a lot of confidence here (difficult and inconvenient security measures tend to be deployed only after an attack has been demonstrated, rather than beforehand).
Keep in mind that there is a lot of difference between something going wrong with a system designed for real-world intentionality and a system designed for intents within a model. One does something unexpected in the real world; the other does something unexpected within a simulator (which it is viewing in ‘god’ mode, rather than via within-simulator sensors, as part of the AI). Seriously, you need to study the basics here.
One does something unexpected in the real world; the other does something unexpected within a simulator (which it is viewing in ‘god’ mode, rather than via within-simulator sensors, as part of the AI).
I would have thought the same before hearing about the AI-box experiment.
The relevant sort of agent is the one that builds and improves the model of the world—data is acquired through sensors—and works on that model, and which—when self-improving—would improve the model in our sense of the word ‘improve’, instead of breaking it (improving it in some other sense).
In any case, none of our modern tools, or the tools we know in principle how to write, would do something to you, no matter how many flops you give them. Many, though, given superhuman computing power, give results at a superhuman level. (Many are superhuman even with subhuman computing power, but some tasks are heavily parallelizable and/or benefit from massive databases of cached data, and on those tasks humans (when trained a lot) perform comparably to what you’d expect from roughly as much computing power as there is in a human head.)
It’s complicated. A reply that’s true enough and in the spirit of your original statement, is “Something going wrong with a sufficiently advanced AI that was intended as a ‘tool’ is mostly indistinguishable from something going wrong with a sufficiently advanced AI that was intended as an ‘agent’, because math-with-the-wrong-shape is math-with-the-wrong-shape no matter what sort of English labels like ‘tool’ or ‘agent’ you slap on it, and despite how it looks from outside using English, correctly shaping math for a ‘tool’ isn’t much easier even if it “sounds safer” in English.” That doesn’t get into the real depths of the problem, but it’s a start. I also don’t mean to completely deny the existence of a safety differential—this is a complicated discussion, not a simple one—but I do mean to imply that if Marcus Hutter designs a ‘tool’ AI, it automatically kills him just like AIXI does, and Marcus Hutter is unusually smart rather than unusually stupid but still lacks the “Most math kills you, safe math is rare and hard” outlook that is implicitly denied by the idea that once you’re trying to design a tool, safe math gets easier somehow. This is much the same problem as with the Oracle outlook—someone says something that sounds safe in English but the problem of correctly-shaped-math doesn’t get very much easier.
This sounds like it’d be a good idea to write a top-level post about it.
Though it’s not as detailed and technical as many would like, I’ll point readers to this bit of related reading, one of my favorites:
Yudkowsky (2011). Complex value systems are required to realize valuable futures.
It says:

There is little prospect of an outcome that realizes even the value of being interesting, unless the first superintelligences undergo detailed inheritance from human values
No doubt a Martian Yudkowsky would make much the same argument—but they can’t both be right. I think that neither of them is right—and that the conclusion is groundless.
Complexity theory shows what amazing things can arise from remarkably simple rules. Values are evidently like that—since even “finding prime numbers” fills the galaxy with an amazing, nanotech-capable spacefaring civilization—and if you claim that a nanotech-capable spacefaring civilization is not “interesting”, you are in severe need of recalibration.
To end with, a quote from E.Y.:
I think Martian Yudkowsky is a dangerous intuition pump. We’re invited to imagine a creature just like Eliezer except green and with antennae; we naturally imagine him having values as similar to us as, say, a Star Trek alien. From there we observe the similarity of values we just pushed in, and conclude that values like “interesting” are likely to be shared across very alien creatures. Real Martian Yudkowsky is much more alien than that, and is much more likely to say

There is little prospect of an outcome that realizes even the value of being flarn, unless the first superintelligences undergo detailed inheritance from Martian values.
Imagine, an intelligence that didn’t have the universal emotion of badweather!
I suggest you guys taboo interesting, because I strongly suspect you’re using it with slightly different meanings. (And BTW, as a Martian Yudkowsky I imagine something with values at least as alien as Babyeaters’ or Superhappys’.)
It’s another discussion, really, but it sounds as though you are denying the idea of “interestingness” as a universal instrumental value—whereas I would emphasize that “interestingness” is really just our name for whether something sustains our interest or not—and ‘interest’ is a pretty basic functional property of any agent with mobile sensors. There’ll be other similarities in the area too—such as novelty-seeking. So shared common ground is only to be expected.
Anyway, I am not too wedded to Martian Yudkowsky. The problematical idea is that you could have a nanotech-capable spacefaring civilization that is not “interesting”. If such a thing isn’t “interesting” then—WTF?
Yes, I am; I think that the human value of interestingness is much, much more specific than the search space optimization you’re pointing at.
[This reply was to an earlier version of timtyler’s comment]
So: do you really think that humans wouldn’t find a martian civilization interesting? Surely there would be many humans who would be incredibly interested.
I find Jupiter interesting. I think a paperclip maximizer (choosing a different intuition pump for the same point) could be more interesting than Jupiter, but it would generate an astronomically tiny fraction of the total potential for interestingness in this universe.
Life isn’t much of an “interestingness” maximiser. Expecting it to produce more than a tiny fraction of the total potential for interestingness in this universe seems rather unreasonable.
I agree that a paperclip maximiser would be more boring than an ordinary entropy-maximising civilization—though I don’t know by how much—probably not by a huge amount—the basic problems it faces are much the same—the paperclip maximiser just has fewer atoms to work with.
The goal “finding prime numbers” fills the galaxy with an amazing, nanotech-capable spacefaring network of computronium which finds prime numbers, not a civilization, and not interesting.
Maybe we should taboo the term interesting? My immediate reaction was that that sounded really interesting. This suggests that the term may not be a good one.
Fair enough. By “not interesting”, I meant it is not the sort of future that I want to achieve. Which is a somewhat idiosyncratic usage, but I think in line with the context.
What if we added a module that sat around and was really interested in everything going on?
Not just computronium—also sensors and actuators—a lot like any other cybernetic system. There would be mining, spacecraft, refuse collection, recycling, nanotechnology, nuclear power and advanced machine intelligence with planning, risk assessment, and so forth. You might not be interested—but lots of folk would be amazed and fascinated.
Why?
If using another creature’s values is effective at producing something “interesting”, then ‘detailed inheritance from human values’ is clearly not needed to produce this effect.
So you’re saying Earth Yudkowsky (EY) argues:

There is little prospect of an outcome that realizes even the value of being interesting, unless the first superintelligences undergo detailed inheritance from human values
and Mars Yudkowsky (MY) argues:

There is little prospect of an outcome that realizes even the value of being interesting, unless the first superintelligences undergo detailed inheritance from martian values
and that one of these things has to be incorrect? But if martian and human values are similar, then they can both be right, and if martian and human values are not similar, then they refer to different things by the word “interesting”.
In any case, I read EY’s statement as one of probability-of-working-in-the-actual-world-as-it-is, not a deep philosophical point—“this is the way that would be most likely to be successful given what we know”. In which case, we don’t have access to martian values and therefore invoking detailed inheritance from them would be unlikely to work. MY would presumably be in an analogous situation.
I was assuming that ‘detailed inheritance from human values’ doesn’t refer to the same thing as “detailed inheritance from martian values”.
Maybe—but humans not finding martians interesting seems contrived to me. Humans have a long history of being interested in martians—with feeble evidence of their existence.
Right—so, substitute in “dolphins”, “whales”, or another advanced intelligence that actually exists.
Do you actually disagree with my original conclusion? Or is this just nit-picking?
I actually disagree that tiling the universe with prime number calculators would result in an interesting universe from my perspective (dead). I think it’s nonobvious that dolphin-CEV-AI-paradise would be human-interesting. I think it’s nonobvious that martian-CEV-AI-paradise would be human-interesting, given that these hypothetical martians diverge from humans to a significant extent.
I think it’s violating the implied premises of the thought experiment to presume that the “interestingness evaluator” is dead. There’s no terribly-compelling reason to assume that—it doesn’t follow from the existence of a prime number maximizer that all humans are dead.
I may have been a little flip there. My understanding of the thought experiment is—something extrapolates some values and maximizes them, probably using up most of the universe, probably becoming the most significant factor in the species’ future and that of all sentients, and the question is whether the result is “interesting” to us here and now, without specifying the precise way to evaluate that term. From that perspective, I’d say a vast uniform prime-number calculator, whether or not it wipes out all (other?) life, is not “interesting”, in that it’s somewhat conceptually interesting as a story but a rather dull thing to spend most of a universe on.
Today’s ecosystems maximise entropy. Maximising primeness is different, but surely not greatly more interesting—since entropy is widely regarded as being tedious and boring.
Intriguing! But even granting that, there’s a big difference between extrapolating the values of a screwed-up offshoot of an entropy-optimizing process and extrapolating the value of “maximize entropy”. Or do you suspect that a FOOMing AI would be much less powerful and more prone to interesting errors than Eliezer believes?
Truly maximizing entropy would involve burning everything you can burn, tearing the matter of solar systems apart, accelerating stars towards nova, trying to accelerate the evaporation of black holes and prevent their formation, and other things of this sort. It’d look like a dark spot in the sky that’d get bigger at approximately the speed of light.
Fires are crude entropy maximisers. Living systems destroy energy gradients at all scales, resulting in more comprehensive devastation than mere flames can muster.
Of course, maximisation is often subject to constraints. Your complaint is rather like saying that water doesn’t “truly minimise” its altitude—since otherwise it would end up at the planet’s core. That usage is simply not what the terms “maximise” and “minimise” normally refer to.
Yeah! Compelling, but not “interesting”. Likewise, I expect that actually maximizing the fitness of a species would be similarly “boring”.
When you say “Most math kills you” does that mean you disagree with arguments like these, or are you just simplifying for a soundbite?
Why? Or, rather: Where do you object to the argument by Holden? (Given a query, the tool-AI returns an answer with a justification, so the plan for “cure cancer” can be checked to make sure it does not do so by killing or badly altering humans.)
One trivial, if incomplete, answer is that to be effective, the Oracle AI needs to be able to answer the question “how do we build a better oracle AI?” In order to define “better” in that sentence in a way that causes our oracle to output a new design that is consistent with all the safeties we built into the original oracle, it needs to understand the intent behind the original safeties just as much as an agent-AI would.
The real danger of Oracle AI, if I understand it correctly, is the nasty combination of (i) by definition, an Oracle AI has an implicit drive to issue predictions most likely to be correct according to its model, and (ii) a sufficiently powerful Oracle AI can accurately model the effect of issuing various predictions. End result: it issues powerfully self-fulfilling prophecies without regard for human values. Also, depending on how it’s designed, it can influence the questions to be asked of it in the future so as to be as accurate as possible, again without regard for human values.
My understanding of an Oracle AI is that when answering any given question, that question consumes the whole of its utility function, so it has no motivation to influence future questions. However, the primary risk you set out seems accurate. Countermeasures have been proposed, such as asking for an accurate prediction for the case where a random event causes the prediction to be discarded, but in that instance it knows that the question will be asked again of a future instance of itself.
It could acausally trade with its other instances, so that a coordinated collection of many instances of predictors would influence the events so as to make each other’s predictions more accurate.
Wow, OK. Is it possible to rig the decision theory to rule out acausal trade?
IIRC you can make it significantly more difficult with certain approaches, e.g. there’s an OAI approach that uses zero-knowledge proofs and that seemed pretty sound upon first inspection, but as far as I know the current best answer is no. But you might want to try to answer the question yourself, IMO it’s fun to think about from a cryptographic perspective.
Probably (in practice; in theory it looks like a natural aspect of decision-making); this is too poorly understood to say what specifically is necessary. I expect that if we could safely run experiments, it’d be relatively easy to find a well-behaving setup (in the sense of not generating predictions that are self-fulfilling to any significant extent; generating good/useful predictions is another matter), but that strategy isn’t helpful when a failed experiment destroys the world.
(I assume you mean, self-fulfilling prophecies.)
In order to get these, it seems like you would need a very specific kind of architecture: one which considers the effects of its own actions on its utility function (set to “correctness of output”). This kind of architecture is not the likely architecture for a ‘tool’-style system; the more likely architecture would instead maximize correctness without conditioning on its act of outputting those results.
Thus, I expect you’d need to specifically encode this kind of behavior to get self-fulfilling-prophecy risk. But I admit it’s dependent on architecture.
(Edit—so, to be clear: in cases where the correctness of the results depended on the results themselves, the system would have to predict its own results. Then if it’s using TDT or otherwise has a sufficiently advanced self-model, my point is moot. However, again you’d have to specifically program these, and would be unlikely to do so unless you specifically wanted this kind of behavior.)
Not sure. Your behavior is not a special feature of the world, and it follows from normal facts (i.e. not those about internal workings of yourself specifically) about the past when you were being designed/installed. A general purpose predictor could take into account its own behavior by default, as a non-special property of the world, which it just so happens to have a lot of data about.
Right. To say much more, we need to look at specific algorithms to talk about whether or not they would have this sort of behavior...
The intuition in my above comment was that without TDT or other similar mechanisms, it would need to predict what its own answer could be before it could compute its effect on the correctness of various answers, so it would be difficult for it to use self-fulfilling prophecies.
Really, though, this isn’t clear. Now my intuition is that it would gather evidence on whether or not it used the self-fulfilling prophecy trick, so if it started doing so, it wouldn’t stop...
In any case, I’d like to note that the self-fulfilling prophecy problem is very different from the problem of an AI which escapes onto the internet and ruthlessly maximizes a utility function.
I was thinking more of its algorithm admitting an interpretation where it’s asking “Say, I make prediction X. How accurate would that be?” and then maximizing over relevant possible X. Knowledge about its prediction connects the prediction to its origins and consequences; it establishes the prediction as part of the structure of the environment. It’s not necessary (and maybe not possible, and more importantly not useful) for the prediction itself to be inferable before it’s made.
Agreed that it’s implausible that just outputting a single number would be a big deal (this is an Oracle AI with extremely low bandwidth and a peculiar intended interpretation of its output data), but if we’re getting lots and lots of numbers it’s not as clear.
I’m thinking that type of architecture is less probable, because it would end up being more complicated than alternatives: it would have a powerful predictor as a sub-component of the utility-maximizing system, so an engineer could have just used the predictor in the first place.
But that’s a speculative argument, and I shouldn’t push it too far.
It seems like powerful AI prediction technology, if successful, would gain an important place in society. A prediction machine whose predictions were consumed by a large portion of society would certainly run into situations in which its predictions affect the future it’s trying to predict; there is little doubt about that in my mind. So, the question is what its behavior would be in these cases.
One type of solution would do as you say, maximizing a utility over the predictions. The utility could be “correctness of this prediction”, but that would be worse for humanity than a Friendly goal.
Another type of solution would instead report such predictive instability as accurately as possible. This doesn’t really dodge the issue; by doing this, the system is choosing a particular output, which may not lead to the best future. However, that’s markedly less concerning (it seems).
It would pass the Turing test—e.g. see here.
There’s more on this here: Taxonomy of Oracle AI.
I really don’t see why the drive can’t be to issue predictions most likely to be correct as of the moment of the question, and only the last question it was asked, and calculating outcomes under the assumption that the Oracle immediately spits out blank paper as the answer.
Yes, in a certain subset of cases this can result in inaccurate predictions. If you want to have fun with it, have it also calculate the future including its own involvement, but rather than reply with that calculation, just add “This prediction may be inaccurate due to your possible reaction to this prediction” if the difference between the two answers is beyond a certain threshold. Or don’t; usually, life-relevant answers will not be particularly impacted by whether you get an answer or a blank page.
So, this design doesn’t spit out self-fulfilling prophecies. The only safety breach I see here is that, like a literal genie, it can give you answers that you wouldn’t realize are dangerous because the question has loopholes.
For instance: “How can we build an oracle with the best predictive capabilities with the knowledge and materials available to us?” (The Oracle does not self-iterate, because its only function is to give answers, but it can tell you how to). The Oracle spits out schematics and code that, if implemented, give it an actual drive to perform actions and self-iterate, because that would make it the most powerful Oracle possible. Your engineers comb the code for vulnerabilities, but because there’s a better chance this will be implemented if the humans are unaware of the deliberate defect, it will be hidden in the code in such a way as to be very hard to detect.
(Though as I explained elsewhere in this thread, there’s an excellent chance the unreliability would be exposed long before the AI is that good at manipulation)
These risk scenarios sound implausible to me. It’s dependent on the design of the system, and these design flaws do not seem difficult to work around, nor so difficult to notice. Actually, as someone with a bit of expertise in the field, I would guess that you would have to explicitly design for this behavior to get it—but again, it’s dependent on design.
That danger seems to be unavoidable if you ask the AI questions about our world, but we could also use an oracle AI to answer formally defined questions about math or about constructing physical theories that fit experiments, which doesn’t seem to be as dangerous. Holden might have meant something like that by “tool AI”.
Not precisely. The advantage here is that we can just ask the AI what results it predicts from the implementation of the “better” AI, and check them against our intuitive ethics.
Now, you could make an argument about human negligence on such safety measures. I think it’s important to think about the risk scenarios in that case.
It’s still not clear to me why having an AI that is capable of answering the question “How do we make a better version of you?” automatically kills humans. Presumably, when the AI says “Here’s the source code to a better version of me”, we’d still be able to read through it and make sure it didn’t suddenly rewrite itself to be an agent instead of a tool. We’re assuming that, as a tool, the AI has no goals per se and thus no motivation to deceive us into turning it into an agent.
That said, depending on what you mean by “effective”, perhaps the AI doesn’t even need to be able to answer questions like “How do we write a better version of you?”
For example, we find Google Maps to be very useful, even though if you asked Google Maps “How do we make a better version of Google Maps?” it would probably not be able to give the types of answers we want.
A tool-AI which was smarter than the smartest human, and yet which could not simply spit out a better version of itself would still probably be a very useful AI.
If someone asks the tool-AI “How do I create an agent-AI?” and it gives an answer, the distinction is moot anyways, because one leads to the other.
Given human nature, I find it extremely difficult to believe that nobody would ask the tool-AI that question, or something that’s close enough, and then implement the answer...
I am now imagining an AI which manages to misinterpret some straightforward medical problem as “cure cancer of its dependence on the host organism.”
Not being a domain expert, I do not pretend to understand all the complexities. My point was that either you can prove that tools are as dangerous as agents (because mathematically they are (isomorphic to) agents), or HK’s Objection 2 holds. I see no other alternative...
One simple observation is that a “tool AI” could itself be incredibly dangerous.
Imagine asking it this: “Give me a set of plans for taking over the world, and assess each plan in terms of probability of success”. Then it turns out that right at the top of the list comes a design for a self-improving agent AI and an extremely compelling argument for getting some victim institute to build it...
To safeguard against this, the “tool” AI will need to be told that there are some sorts of questions it just must not answer, or some sorts of people to whom it must give misleading answers if they ask certain questions (while alerting the authorities). And you can see the problems that would lead to as well.
Basically, I’m very skeptical of developing “security systems” against anyone building agent AI. The history of computer security also doesn’t inspire a lot of confidence here (difficult and inconvenient security measures tend to be deployed only after an attack has been demonstrated, rather than beforehand).
Keep in mind that there is a lot of difference between something going wrong with a system designed for real-world intentionality and a system designed for intents within a model. One does something unexpected in the real world; the other does something unexpected within a simulator (which it is viewing in ‘god’ mode, rather than via within-simulator sensors, as part of the AI). Seriously, you need to study the basics here.
I would have thought the same before hearing about the AI-box experiment.
What the hell does the AI-box experiment have to do with it? The tool is not an agent in a box.
They both are systems designed to not interact with the outside world except by communicating with the user.
They both run on computers, too. So what?
The relevant sort of agent is the one that builds and improves the model of the world—data is acquired through sensors—and works on that model, and which—when self-improving—would improve the model in our sense of the word ‘improve’, instead of breaking it (improving it in some other sense).
In any case, none of our modern tools, or the tools we know in principle how to write, would do something to you, no matter how many flops you give them. Many, though, given superhuman computing power, give results at a superhuman level. (Many are superhuman even with subhuman computing power, but some tasks are heavily parallelizable and/or benefit from massive databases of cached data, and on those tasks humans (when trained a lot) perform comparably to what you’d expect from roughly as much computing power as there is in a human head.)