That sort of scope is not likely to be a problem. The difficulty is that you have to get every part of the specification and every part of the specification executor exactly right...
And I was arguing that any given AI won’t be able to self-improve without an exact specification of its output against which it can judge its own efficiency. That’s why I don’t see how it could implement such exact specifications yet fail to limit its scope of space, time and resources. What makes it even more unlikely, in my opinion, is that an AI won’t care to output anything as long as it isn’t explicitly told to do so. Where would that incentive come from?
… will quite probably wipe out humanity unless a significant proportion of what it takes to produce an FAI is implemented. And it will do it while (and for the purpose of) creating 10 paperclips per day.
You assume that it knows that it is supposed to use all of science and the universe to self-improve, when it would very likely just self-improve to the extent that it is told and not care to go any further. Software optimization, for example. I just don’t see why you think that any artificial general intelligence would automatically assume that it has to understand the whole universe to come up with the best possible way to produce 10 paperclips.
You assume that it knows that it is supposed to use all of science and the universe to self-improve, when it would very likely just self-improve to the extent that it is told and not care to go any further.
You don’t need to tell it to self-improve at all.
I just don’t see why you think that any artificial general intelligence would automatically assume that it has to understand the whole universe to come up with the best possible way to produce 10 paperclips.
Per day. Risk mitigation. Security concerns. Possibility of interruption of the resource supply due to finance, politics or the collapse of civilisation. Limited lifespan of the sun (primary energy source). Limited amount of iron in the planet.
Given that particular specification, if the AI didn’t take a level in badass it would appear to be malfunctioning.
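To put toy numbers on that: an agent scored on paperclips per day, indefinitely, gains a lot of expected paperclips from anything that reduces the chance its operation gets shut down. A minimal sketch — the function, the numbers and the 1000-year horizon are all invented for illustration, nothing here comes from the comment above:

```python
# Toy expected-value sketch. All numbers and the 1000-year horizon are made up;
# the only point is that an open-ended "per day" goal rewards shutdown-avoidance.

def expected_total_paperclips(per_day, annual_shutdown_risk, horizon_years=1000):
    """Expected output if each year the operation keeps running with probability
    (1 - annual_shutdown_risk) and, once interrupted, never produces again."""
    total = 0.0
    still_running = 1.0
    for _ in range(horizon_years):
        still_running *= (1.0 - annual_shutdown_risk)
        total += still_running * per_day * 365
    return total

fragile = expected_total_paperclips(10, annual_shutdown_risk=0.05)    # at civilisation's mercy
hardened = expected_total_paperclips(10, annual_shutdown_risk=0.001)  # after risk mitigation
print(round(fragile))   # ~69,000
print(round(hardened))  # ~2,300,000
```

Under those made-up numbers, cutting the annual shutdown risk from 5% to 0.1% is worth roughly thirty times as many expected paperclips, which is the pressure toward risk mitigation and resource security listed above.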
I just saw this comment by Ben Goertzel regarding self-improvement. I’d love it if someone here explained why he, as an AGI researcher, gets this so wrong.
Look—what will prevent the first human-level AGIs from self-modifying in a way that will massively increase their intelligence is a very simple thing: they won’t be smart enough to do that!
Every AGI researcher I know can see that. The only people I know who think that an early-stage, toddler-level AGI has a meaningful chance of somehow self-modifying its way up to massive superhuman intelligence—are people associated with SIAI.
But I have never heard any remotely convincing arguments in favor of this odd, outlier view!!!
BTW the term “self-modifying” is often abused in the SIAI community. Nearly all learning involves some form of self-modification. Distinguishing learning from self-modification in a rigorous formal way is pretty tricky.
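One toy way to make the blurry boundary Ben mentions concrete (my framing; the names and numbers below are invented, not anything from his comment): call it learning when only the stored estimate changes under a fixed update rule, and self-modification when the update rule itself gets replaced.

```python
# Toy framing, invented here: "learning" = the estimate changes under a fixed rule;
# "self-modification" = the rule itself is replaced.

def make_learner(learning_rate=0.1):
    state = {"estimate": 0.0}

    def update(observation):
        # Learning: the stored estimate moves, but the rule moving it stays fixed.
        state["estimate"] += learning_rate * (observation - state["estimate"])
        return state["estimate"]

    return state, update

state, update = make_learner()
for obs in (10, 10, 10):
    update(obs)
print(round(state["estimate"], 2))  # 2.71 -- slow convergence under the fixed rule

# Self-modification: swap the update rule itself for a better one.
# (Done by hand here; it stands in for a system rewriting its own code.)
def jump_to_observation(observation):
    state["estimate"] = observation
    return state["estimate"]

update = jump_to_observation
update(10)
print(state["estimate"])  # 10.0 -- behaviour changed by replacing the rule, not by more data
```

Even in this toy version the line blurs as soon as the update rule is itself parameterised and learned, which seems to be the point about a rigorous distinction being tricky.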
Goertzel is generalizing from the human example of intelligence, which is probably the most pernicious and widespread failure mode in thinking about AI.
Or he may be completely disconnected from anything even resembling the real world. I literally have trouble believing that a professional AI researcher could describe a primitive, dumber-than-human AGI as “toddler-level” in the same sentence he dismisses it as a self-modification threat.
Toddlers self-modify into people using brains made out of meat!
Toddlers self-modify into people using brains made out of meat!
No they don’t. Self-modification in the context of AGI doesn’t mean learning or growing, it means understanding the most fundamental architecture of your own mind and purposefully improving it.
That said, I think your first sentence is probably right. It looks like Ben can’t imagine a toddler-level AGI self-modifying because human toddlers can’t (or human adults, for that matter). But of course AGIs will be very different from human minds. For one thing, their source code will be a lot easier to understand than ours. For another, their minds will probably be much better at redesigning and improving code than ours are. Look at the kind of stuff that computer programs can do with code: some of them already exceed human capabilities in some ways (see the sketch after this comment).
“Toddler-level AGI” is actually a very misleading term. Even if an AGI is approximately equal to a human toddler by some metrics, it will certainly not be equal by many other metrics. What does “toddler-level” mean when the AGI is vastly superior to even adult human minds in some respects?
“Understanding” and “purpose” are helpful abstractions for discussing human-like computational agents, but in more general cases I don’t think your definition of self-modification is carving reality at its joints.
ETA: I strongly agree with everything else in your comment.
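Regarding the point above about what programs can do with code, here is a minimal, self-contained sketch (Python; the function and the toy “optimization” pass are invented for illustration): a program reads its own source, rewrites it, and runs the improved version, which is exactly the kind of access a brain made of meat never gets.

```python
# Minimal, self-contained sketch (invented example): a program reads its own source,
# rewrites it with a toy "optimization" pass, and runs the new version.
# (Run it as a script so inspect can find the source file.)
import ast
import inspect
import textwrap

def slow_double(values):
    # Deliberately clumsy: doubles each element with an addition.
    result = []
    for v in values:
        result.append(v + v)
    return result

class AddToMultiply(ast.NodeTransformer):
    # Rewrite `x + x` into `x * 2` wherever both operands are the same variable.
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if (isinstance(node.op, ast.Add)
                and isinstance(node.left, ast.Name)
                and isinstance(node.right, ast.Name)
                and node.left.id == node.right.id):
            return ast.BinOp(left=node.left, op=ast.Mult(),
                             right=ast.Constant(value=2))
        return node

tree = ast.parse(textwrap.dedent(inspect.getsource(slow_double)))
tree = AddToMultiply().visit(tree)
ast.fix_missing_locations(tree)

namespace = {}
exec(compile(tree, "<rewritten>", "exec"), namespace)
print(namespace["slow_double"]([1, 2, 3]))  # [2, 4, 6] -- same behaviour, rewritten code
```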
Well, bad analogy. They don’t self-modify by understanding their source code and improving it. They gradually grow larger brains in a pre-set fashion while learning specific tasks. Humans have very little ability to self-modify.
I just saw this comment by Ben Goertzel regarding self-improvement. I’d love it if someone here explained why he, as an AGI researcher, gets this so wrong.
Political incentive determines the bottom line. Then the page is filled with rhetoric (and, from the looks of it, loaded language and status posturing.)
Seriously, Ben is trying to accuse people of abusing the self-modification term based on the (trivially true) observation that there is a blurry boundary between learning and self-modification?
It’s a good thing Ben is mostly harmless. I particularly liked the part where I asked Eliezer:
“How much of this harmlessness is perceived impotence and how much is it an approximately sane way of thinking?”
… and actually got a candid reply.
It is interesting to note the effort Ben is going to here to disaffiliate himself from the SIAI and portray them as the ‘out group’. Wei was querying (see earlier link) the wisdom of having Ben as Director of Research just earlier this year.
An educated outsider will very likely side with the experts, though. Just as with the hype around the LHC and its dangers: academics and educated people largely believed the physicists working on it, not the fringe group that claimed it would destroy the world, although it might be the other way around with the general public. Of course you cannot draw any conclusions about who’s right from this, but it should be investigated anyway, because what all parties have in common is the need for support and money.
There are two different groups for each party to convince here. One group includes the educated people (academics) and mediocre rationalists; the other is the general public.
When it comes to who’s right, the people one should listen to are the educated experts who are listening to both parties, their positions and arguments. Their intelligence and status as rationalists will be disputed, though, since each party will claim that anyone who disagrees with it simply isn’t smart enough to see the truth.
(My shorter answer, by the way—I interpret all such behaviors through a Hansonian lens. This includes “near vs far”, observations about the incentives of researchers, the general theme of “X is not about Y” and homo hypocritus. Rather cynical, some may suggest, but this kind of thinking gives very good explanations for “Why?”s that would otherwise be confusing.)
The basic idea is to make a machine that is satisfied relatively easily. So, for example, you tell it to build the ten paperclips with 10 kJ total—and tell it not to worry too much if it doesn’t make them—it is not that important.
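One way to read that proposal as code (a minimal sketch; the function name, the partial-credit rule and the penalty are my guesses at the intent, not anything specified above): cap the reward at the target and make exceeding the energy budget strictly unprofitable, so there is nothing left to gain from extra paperclips or extra resources.

```python
# One possible reading of the proposal (names and the penalty rule are mine):
# reward is capped at the target and any energy past the budget only costs utility,
# so extra paperclips and extra resource grabs never pay.

def satisficing_utility(paperclips_made, energy_used_kj, target=10, budget_kj=10.0):
    capped_reward = min(paperclips_made, target)       # the 11th clip is worth nothing
    overshoot = max(0.0, energy_used_kj - budget_kj)   # spending past 10 kJ only hurts
    return capped_reward - overshoot

print(satisficing_utility(10, 9.5))      # 10.0  -- done, within budget
print(satisficing_utility(7, 9.5))       # 7.0   -- "not that important": partial credit, no penalty for stopping
print(satisficing_utility(10**6, 50.0))  # -30.0 -- converting the neighbourhood into clips scores worse
```

This only pins down the objective; it says nothing about how hard the machine searches for ways to score on it, which is what the rest of the thread is arguing about.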
Toddlers self-modify into people using brains made out of meat!
Exactly! Humans can go from toddler to AGI start-up founder, and that’s trivial.
Whatever the hell the AGI equivalent of a toddler is, it’s all but guaranteed to be better at self-modification than the human model.
When it comes to who’s right, the people one should listen to are the educated experts who are listening to both parties, their positions and arguments. Their intelligence and status as rationalists will be disputed, though, since each party will claim that anyone who disagrees with it simply isn’t smart enough to see the truth.
Well said and truly spoken.
The basic idea is to make a machine that is satisfied relatively easily. So, for example, you tell it to build the ten paperclips with 10 kJ total—and tell it not to worry too much if it doesn’t make them—it is not that important.
Sorry, I don’t understand your comment at all. I’ll be back tomorrow.