Suppose an AI were to design and implement more efficient algorithms for processing sensory stimuli? Or add a “face recognition” module when it determines that this would be useful for interacting with humans?
The ancient Greeks developed methods for improving memorization. It has been shown that human-trained dogs and chimps are better at recognizing human faces than others of their kind. None of these agents were artificial (discounting selective breeding in dogs and Greeks).
It seems that you should be able to write a simple program that overwrites its own code with an arbitrary value. Wouldn’t that be a counterexample?
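For concreteness, a minimal sketch of such a self-overwriting program might look like the following; this is purely illustrative, and it assumes a plain Python script run directly from a writable source file.

```python
# Minimal sketch, assuming a plain Python script run directly from a
# writable file: it replaces its own source on disk with arbitrary bytes.
import sys

ARBITRARY_VALUE = b"\x00" * 1024  # any bytes at all

with open(sys.argv[0], "wb") as own_source:
    own_source.write(ARBITRARY_VALUE)

print("Own code overwritten; the next run will fail to even parse.")
```

After it runs once, the file that used to hold the program is just a block of arbitrary bytes.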
Would you consider such a machine an artificial intelligent agent? Isn’t it just a glorified printing press?
I’m not saying that some configurations of memory are physically impossible. I’m saying that intelligent agency entails typicality, and therefore, for any intelligent agent, there are some things it is extremely unlikely to do, to the point of practical impossibility.
Certainly that doesn’t count as an intelligent agent—but a GAI with that as its only goal, for example, why would that be impossible? An AI doesn’t need to value survival.
I’d be interested in the conclusions derived about “typical” intelligences and the “forbidden actions”, but I don’t see how you have derived them.
Do we agree, then, that humans and artificial agents are both subject to laws forbidding logical contradictions and the like, but that artificial agents are not in principle necessarily bound by the same additional restrictions as humans?
I would actually argue the opposite.
Are you familiar with the claim that people are getting less intelligent because modern technology allows less intelligent people and their children to survive? (I've never seen this claim discussed seriously, so I don't know how factual it is, but its logic is what I'm getting at.) The idea is that people today face looser constraints on how intelligent they need to be, and therefore the typical human is becoming less intelligent.
Other claims are that activities such as browsing the internet and video gaming are changing the set of mental skills humans are good at. We improve at tasks we need to be good at, and give up skills that are less useful. You gave yet another example in your comment regarding face recognition.
The elasticity of biological agents is (quantitatively) limited, and improvement by evolution takes time. This is where artificial agents step in. They can be better than humans, but the typical agent will only actually be better if it has to be. Generally, more intelligent agents are those forced to comply with tighter constraints, not looser ones.
I think we have our quantifiers mixed up? I’m saying an AI is not in principle bound by these restrictions—that is, it’s not true that all AIs must necessarily have the same restrictions on their behavior as a human. This seems fairly uncontroversial to me. I suppose the disconnect, then, is that you expect a GAI will be of a type bound by these same restrictions. But then I thought the restrictions you were talking about were “laws forbidding logical contradictions and the like”? I’m a little confused—could you clarify your position, please?
a GAI with [overwriting its own code with an arbitrary value] as its only goal, for example, why would that be impossible? An AI doesn’t need to value survival.
A GAI with the utility of burning itself? I don’t think that’s viable, no.
I’d be interested in the conclusions derived about “typical” intelligences and the “forbidden actions”, but I don’t see how you have derived them.
At the moment it’s little more than professional intuition. We also lack some necessary shared terminology. Let’s leave it at that until and unless someone formalizes and proves it, and then hopefully blogs about it.
could you clarify your position, please?
I think I’m starting to see the disconnect, and we probably don’t really disagree.
You said:
This sounds unjustifiably broad
My thinking is very broad but, from my perspective, not unjustifiably so. In my research I’m looking for mathematical formulations of intelligence in any form—biological or mechanical.
Taking a narrower viewpoint, humans “in their current form” are subject to different laws of nature than those we expect machines to be subject to. The former use organic chemistry, the latter probably electronics. The former multiply by synthesizing enormous quantities of DNA molecules, the latter could multiply by configuring solid state devices.
Do you count the more restrictive technology by which humans operate as a constraint which artificial agents may be free of?
a GAI with [overwriting its own code with an arbitrary value] as its only goal, for example, why would that be impossible? An AI doesn’t need to value survival.
A GAI with the utility of burning itself? I don’t think that’s viable, no.
What do you mean by “viable”? You think it is impossible, due to Gödelian concerns, for there to be an intelligence that wishes to die?
As a curiosity, this sort of intelligence came up in a discussion I was having on LW recently. Someone asked “why would an AI try to maximize its original utility function, instead of switching to a different, easier function?”, to which I responded “why is that the precise level at which the AI would operate, rather than either actually maximizing its utility function, or deciding to hell with the whole utility thing and valuing suicide rather than maximizing functions (because it’s easy)?”
But anyway, it can’t be that Gödelian reasons prevent intelligences from wanting to burn themselves, because people have burned themselves.
I’d be interested in the conclusions derived about “typical” intelligences and the “forbidden actions”, but I don’t see how you have derived them.
At the moment it’s little more than professional intuition. We also lack some necessary shared terminology. Let’s leave it at that until and unless someone formalizes and proves it, and then hopefully blogs about it.
Fair enough, though for what it’s worth I have a fair background in mathematics, theoretical CS, and the like.
could you clarify your position, please?
I think I’m starting to see the disconnect, and we probably don’t really disagree.
You said:
This sounds unjustifiably broad
My thinking is very broad but, from my perspective, not unjustifiably so. In my research I’m looking for mathematical formulations of intelligence in any form—biological or mechanical.
I meant that this was such a broad definition of the qualitative restrictions on human self-modification that it would be basically impossible for something to have qualitatively different restrictions.
Taking a narrower viewpoint, humans “in their current form” are subject to different laws of nature than those we expect machines to be subject to. The former use organic chemistry, the latter probably electronics. The former multiply by synthesizing enormous quantities of DNA molecules, the latter could multiply by configuring solid state devices.
Do you count the more restrictive technology by which humans operate as a constraint which artificial agents may be free of?
Why not? Though of course it may turn out that AI is best programmed on something unlike our current computer technology.
A GAI with the utility of burning itself? I don’t think that’s viable, no.
What do you mean by “viable”?
Intelligence is expensive. More intelligence costs more to obtain and maintain. But the sentiment around here (and this time I agree) seems to be that intelligence “scales”, i.e. that it doesn’t suffer from diminishing returns in the “middle world” like most other things; hence the singularity.
For that to be true, more intelligence also has to be more rewarding, and not just in the sense of asymptotically approaching optimality. As intelligence increases, it has to constantly find new “revenue streams” for its utility. It must not saturate its utility function; in fact, its utility must be insatiable in the “middle world”. Curiosity is a good example of such a utility, which is probably why many biological agents are curious even when it serves no other purpose.
Suicide is not such a utility function. We can increase the degree of intelligence an agent needs to have to successfully kill itself (for example, by keeping the gun away). But in the end, it’s “all or nothing”.
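To make the contrast concrete, here is a toy sketch; it is only an illustration, with an arbitrary log-shaped “insatiable” utility and an arbitrary capability threshold for the all-or-nothing goal.

```python
import math

# Toy illustration: an "insatiable" utility keeps rewarding additional
# capability, while an all-or-nothing goal such as self-destruction
# saturates completely once it becomes achievable.

def insatiable_utility(capability: float) -> float:
    """Curiosity-like utility: unbounded, so more capability always pays off."""
    return math.log1p(capability)

def all_or_nothing_utility(capability: float, threshold: float = 1.0) -> float:
    """Suicide-like goal: worthless below the capability needed to achieve it,
    and fully saturated (no further gain from intelligence) above it."""
    return 1.0 if capability >= threshold else 0.0

for c in (0.5, 1.0, 10.0, 1000.0):
    print(f"capability={c:7}: insatiable={insatiable_utility(c):.2f}, "
          f"all-or-nothing={all_or_nothing_utility(c):.1f}")
```

The log-shaped utility keeps growing with capability, while the all-or-nothing one jumps to its maximum at the threshold and gains nothing from any further intelligence.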
But anyway, it can’t be that Gödelian reasons prevent intelligences from wanting to burn themselves, because people have burned themselves.
Gödel’s theorem doesn’t prevent any specific thing. In this case I was referring to information-theoretic reasons. And indeed, suicide is not a typical human behavior, even without considering that some contributing factors are irrelevant for our discussion.
Do you count the more restrictive technology by which humans operate as a constraint which artificial agents may be free of?
Why not? Though of course it may turn out that AI is best programmed on something unlike our current computer technology.
In that sense, I completely agree with you. I usually don’t like making the technology distinction, because I believe there’s more important stuff going on at higher levels of abstraction. But if that’s where you’re coming from, then I guess we have resolved our differences :)