But it’s an instruction humans are capable of following within the limits of their ability.
If I were building a non-AI system to do X, then I would have to specify X. But AIs are learning systems.
If you are going to admit that there is a difference between theoretical possibility and practical likelihood in the OT, then most of the UFAI argument goes out of the window, since the Lovecraftian horrors that so densely populate mindspace are only theoretical possibilities.
Because they desire to do so. If for some reason a human has no desire to follow those instructions, they will "follow" them only formally, twisting them beyond recognition. The same goes for an AI, except that an AI will not default to desiring to follow them, as many humans would.
What an AI does depends on how it is built. You keep arguing that one particular architectural choice, an arbitrary top-level goal combined with only instrumental rationality, is dangerous. But that choice is not necessary or inevitable.
(Almost) any top-level goal that does not specify human safety.
Self-modifying AIs will tend toward instrumental rationality, according to Omohundro's arguments.
Good. How do you propose to avoid that happening? You seem extraordinarily confident that these as-yet-undesigned machines, developed and calibrated in a training environment only, by programmers who don't take AI risk seriously, and potentially put into positions of extreme power where I wouldn't trust any actual human, will end up capturing almost all of human morality.
That confidence, I’d surmise, often goes hand in hand with an implicit or explicit belief in objective morality.
If you don't think people should believe in it, argue against it, and not just a strawman version.
I've argued, at length and multiple times, both against convergent goal fidelity regarding the intended (versus actually programmed-in) goals and against objective morality. I can dig up a few comments if you'd like. I don't know what strawman version you're referring to, though: the accuracy or inaccuracy of my assertion doesn't affect the veracity of your claim.
The usual strawmen are The Tablet and Written into the Laws of the Universe.
There is no reason to suppose they will not tend toward epistemic rationality, which includes instrumental rationality.
You have no evidence that AI researchers aren't taking AI risk seriously enough, given what they are in fact doing. They may not be taking your arguments seriously, and that may well be because your arguments are not relevant to their research. A number of them have said as much on this site.
Even aside from the relevance issue, the MIRI argument constantly assumes that superintelligent AIs will have inexplicable deficits. "Superintelligent but dumb" doesn't make logical sense.
And you’ve redefined “anything but perfectly morally in tune with humanity” as “dumb”. I’m waiting for an argument as to why that is so.
There's an argument that an SAI will figure out the correct morality, and there's an argument that it won't misinterpret directives. They are different arguments, and the second is much stronger.
I now see your point. I still don't see how you plan to code an "interpret these things properly" piece of the AI. I think working through a specific design would be useful.
I also think you should work your argument into a Less Wrong post (and send me a message when you've done so, in case I miss it), as twelve or so levels deep in a comment thread is not a place most people will ever see.
Not really. Given the first, we can instruct “only do things that [some human or human group with nice values] would approve of” and we’ve got an acceptable morality.
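That instruction can be sketched as a thin wrapper around an arbitrary planner. The `would_approve` predicate below is a hypothetical stand-in for "[some human or human group with nice values] would approve of"; building that predicate reliably is, of course, exactly the part in dispute:

```python
# Hedged sketch: approval-gated action selection.
# `would_approve` is a placeholder for a model of the overseer's
# judgment -- the hard, unsolved part of the proposal.

def would_approve(action: str) -> bool:
    # Illustrative stub; a real system would need a trusted predictor
    # of what the designated humans would actually endorse.
    return action != "tile_universe_with_paperclips"

def choose_action(candidates: list[str]) -> str:
    # Filter the planner's proposals through the approval predicate,
    # falling back to doing nothing if none pass.
    approved = [a for a in candidates if would_approve(a)]
    return approved[0] if approved else "no_op"

print(choose_action(["tile_universe_with_paperclips", "fetch_coffee"]))
# -> fetch_coffee
```

The safety of the scheme rests entirely on the stub: the wrapper adds nothing if the approval model misgeneralises.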
By “interpret these things correctly”, do you mean linguistic competence, or a goal?
The linguistic competence is already assumed in any AI that can talk its way out of a box (i.e. not AIXI-like), without provision of a design by MIRI.
An AIXI can't even conceptualise that it's in a box, so it doesn't matter if it gets its goals wrong; it can be rendered safe by boxing.
Which combination of assumptions is the problem?
I’m not so sure about that… AIXI can learn certain ways of behaving as if it were part of the universe, even with the Cartesian dualism in its code: http://lesswrong.com/lw/8rl/would_aixi_protect_itself/
A goal. If the AI becomes superintelligent, then it will develop linguistic competence as needed. But I see no way of coding it so that that competence is reflected in its motivation (and it’s not from lack of searching for ways of doing that).
So is it safe to run AIXI approximations in boxes today?
By code it, do you mean “code, train, or evolve it”?
Note that we don't know much about coding higher-level goals in general.
Note that "get things right except where X is concerned" is more complex than "get things right". Humans do the former because of bias. The less anthropic nature of an AI might be to our advantage.
IMHO, yes. The computational complexity of AIXItl is such that it can’t be used for anything significant on modern hardware.