I have read the Sequences (well, most of them). I can’t find this as a standard proposal.
I think I haven’t made clear what I wanted to say, so you just defaulted to “he has no idea what he is talking about” (which is reasonable).
What I meant to say is that rather than defining the “optimal goal” of the AI based on what we can come up with ourselves, the problem can be delegated to the AI itself as a psychological problem.
I assume that an AI would possess some knowledge of human psychology, as that would be necessary for pretty much every practical application, like talking to it.
What then prevents us from telling the AI the following:
“We humans would like to become immortal and live in utopia (or however you want to phrase it; if the AI is smart, it will understand what you really mean through psychology). We disagree on the specifics and are afraid that something may go wrong. There are many contingencies to consider. Here is a list of contingencies we have come up with. Do you understand what we are trying to do? As you are much smarter than us, can you find anything that we have overlooked but that you expect us to agree with you on, once you point it out to us? Different humans have different opinions. This factors into this problem, too. Can you propose a general solution to this problem that remains flexible in the face of an unpredictable future (transhumans may have different ethics)?”
In essence, it all boils down to asking the AI:
“if you were in our position, if you had our human goals and drives, how would you define your (the AI’s) goals?”
If you have an agent that is vastly more intelligent than you are and that understands how your human mind works, couldn’t you just delegate the task of finding a good goal for it to the AI itself, just like you can give it any other kind of task?
Welcome to Less Wrong!
In a sense, the Friendly AI problem is about delegating the definition of Friendliness to a superintelligence. The main issue is that it’s easy to underestimate (on account of the Mind Projection Fallacy) how large a kernel of the correct answer it needs to start off with, in order for that delegation to work properly. There’s rather a lot that goes into this, and unfortunately it’s scattered over many posts that aren’t collected in one sequence, but you can find much of it linked from Fake Fake Utility Functions (sic, and not a typo) and Value is Fragile.
In essence, it all boils down to asking the AI: “if you were in our position, if you had our human goals and drives, how would you define your (the AI’s) goals?”
That’s extrapolated volition.
And it requires telling the AI “Implement good. Human brains contain evidence for good, but don’t define it; don’t modify human drives, that won’t change good.”. It requires telling it “Prove you don’t get goal drift when you self-modify.”. It requires giving it an explicit goal system for its infancy, telling it that it’s allowed to use transistors despite the differences in temperature and gravity and electricity consumption that causes, but not to turn the galaxy into computronium—and writing the general rules for that, not the superficial cases I gave—and telling it how to progressively overwrite these goals with its true ones.
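As a very rough illustration of the structure being asked for here (an explicit interim goal system for the AI’s infancy that is later overwritten by derived goals), consider the toy sketch below. Every name and rule in it is invented for this example, and it deliberately ignores the hard parts: writing the interim rules in general form, deriving the true goals, and proving the absence of goal drift.

```python
# Toy sketch of the *structure* described above: an explicit, hand-written
# goal system for the AI's infancy, later overwritten by goals derived from
# humans once some verification has passed. Every rule here is a stub
# invented for the example; the interim rules, the derivation, and the
# "no goal drift" proof are exactly the hard, unsolved parts. The overwrite
# also happens in one step here, rather than progressively.

from typing import Callable

GoalFunction = Callable[[str], float]  # maps a candidate action to a value

class StagedGoalSystem:
    def __init__(self, infancy_goal: GoalFunction):
        self.current_goal = infancy_goal   # explicit interim goal system
        self.verified = False              # stands in for a checked proof

    def evaluate(self, action: str) -> float:
        return self.current_goal(action)

    def overwrite(self, derived_goal: GoalFunction, proof_checks_out: bool) -> None:
        # Replace the interim goals only if the derivation has been verified.
        if proof_checks_out:
            self.verified = True
            self.current_goal = derived_goal

# Hand-written infancy rules: answer questions, don't grab resources.
def infancy_goal(action: str) -> float:
    return 1.0 if action == "answer a question" else -1.0

goals = StagedGoalSystem(infancy_goal)
print(goals.evaluate("turn the galaxy into computronium"))  # -1.0 under interim rules
```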
“Oracle AI” is a reasonable idea. Writing object-level goals into the AI would be bloody stupid, so we are going to do some derivation, and Oracle isn’t much further than CEV. Bostrom defends it. But seriously, “don’t influence reality beyond answering questions”?
No, none of this needs to be explicitly taught to it, that’s what I’m trying to say.
The AI understands psychology, so just point it at the internet and tell it to inform itself. It might even read through this very comment of yours, think that these topics might be important for its task and decide to read about them, all on its own.
By ordering it to imagine what it would do in your position you implicitly order it to inform itself of all these things so that it can judge well.
If it fails to do so, the humans conversing with the AI will be able to point out a lot of things in the AI’s suggestion that they wouldn’t be comfortable with. This in turn will tell the AI that it should inform itself better about these topics and take them into account, so that the humans will be more content with its next suggestion.
You’re assuming the friendliness problem has been solved. An evil AI could see the question as a perfect opportunity to hand down a solution that could spell our doom.
Why would the AI be evil?
Intentions don’t develop on their own. “Evil” intentions could only arise from misinterpreting existing goals.
While you are asking it to come up with a solution, you have its goal set to what I said in the original post:
“the temporary goal to always answer questions truthfully as far as possible while admitting uncertainty”
Where would the evil intentions come from? At the moment you are asking the question, the only thing on the AI’s mind is how it can answer truthfully.
The only loophole I can see is that it might realize it can reduce its own workload by killing everyone who is asking it questions, but that would be countered by the secondary goal “don’t influence reality beyond answering questions”.
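A minimal sketch of how such a temporary oracle goal might be written down, purely as an illustration: a score for calibrated, truthful answers plus a large penalty on any measured influence on the world. The log-score choice, the impact_penalty weight, and measured_world_impact are all assumptions of this sketch; actually defining an impact measure is precisely the unsolved part.

```python
# Toy illustration only, not a workable safety design. It just makes the
# "truthful oracle" goal discussed above concrete: the score rewards
# calibrated, truthful answers and penalizes any measured side effect on
# the world. Both measured_world_impact and the impact_penalty weight are
# invented for this sketch; defining them properly is the hard, open part.

import math

def answer_score(prob_assigned_to_truth: float) -> float:
    """Log score: highest when the oracle assigns high probability to what
    is actually true, and harsh on confidently wrong answers."""
    return math.log(prob_assigned_to_truth)

def oracle_utility(prob_assigned_to_truth: float,
                   measured_world_impact: float,
                   impact_penalty: float = 1e6) -> float:
    """Truthful answering plus a penalty for influencing reality
    beyond answering questions."""
    return answer_score(prob_assigned_to_truth) - impact_penalty * measured_world_impact

# An honest, slightly uncertain answer with no side effects...
print(oracle_utility(prob_assigned_to_truth=0.95, measured_world_impact=0.0))
# ...beats "kill the questioners to reduce the workload", which would show
# up as a large world impact and swamp any gain in answer quality.
print(oracle_utility(prob_assigned_to_truth=1.0, measured_world_impact=0.5))
```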
Unless the programmers are unable to give the AI this extremely simple goal to just always speak the truth (as far as it knows), the AI won’t have any hidden intentions.
And if the programmers working on the AI really are unable to implement this relatively simple goal, there is no hope that they would ever be able to implement the much more complex “optimal goal” they are trying to find, anyway.
Bugs, maybe.
Intentions don’t develop on their own. “Evil” intentions could only arise from misinterpreting existing goals.
While you are asking it to come up with a solution, you have its goal set to what I said in the original post:
Have you? Are you talking about a human-level AI? Asking or commanding a human to do something doesn’t set that as their one and only goal. A human reacts according to their existing goals: they might comply, refuse, or subvert the command.
“the temporary goal to always answer questions truthfully as far as possible while admitting uncertainty”
Why would it be easier to code in “be truthful” than “be friendly”?
That would have to be a really sophisticated bug, to misinterpret “always answer questions truthfully as far as possible while admitting uncertainty” as “kill all humans”. I’d imagine that something as drastic as that could be found and corrected long before then. Consider that you have its goal set to this. It knows no other motivation but to respond truthfully. It doesn’t care about the survival of humanity, or about itself, or about how reality really is. All it cares about is answering the questions to the best of its abilities.
I don’t think that this goal would be all too hard to define either, as “the truth” is a pretty simple concept. As long as it deals with uncertainty in the right way (by admitting it), how could this be misinterpreted?
Friendliness is far harder to define because we don’t even know a definition for it ourselves. There are far too many things to consider when defining “friendliness”.
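One standard way to make “deals with uncertainty in the right way (by admitting it)” precise is a proper scoring rule; the Brier-score calculation below is only an illustration of that idea, not something proposed in this thread. Under such a rule, an agent does best in expectation by reporting exactly how confident it actually is, neither feigning certainty nor hedging more than its evidence warrants.

```python
# Illustrative sketch, not something from the original discussion: a proper
# scoring rule is one standard way to reward "admitting uncertainty". Under
# the Brier score below, an agent whose actual credence in an answer is p
# gets the best expected score by reporting p itself.

def brier_loss(reported_p: float, outcome: int) -> float:
    """Squared error between the reported probability and what happened."""
    return (reported_p - outcome) ** 2

def expected_loss(reported_p: float, true_belief: float) -> float:
    """Expected Brier loss if the agent's real credence is true_belief."""
    return (true_belief * brier_loss(reported_p, 1)
            + (1 - true_belief) * brier_loss(reported_p, 0))

true_belief = 0.8
for reported in (0.5, 0.8, 1.0):
    print(reported, round(expected_loss(reported, true_belief), 3))
# Reporting the honest 0.8 gives the lowest expected loss (0.16), beating
# both excessive hedging (0.5 -> 0.25) and feigned certainty (1.0 -> 0.2).
```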
Trivial Failure Case: The AI turns the universe into hardware to support really big computations, so it can be really sure it’s got the right answer, and also calibrate itself really well on the uncertainty.
I don’t think that this goal would be all too hard to define either, as “the truth” is a pretty simple concept.
Legions of philosophers would disagree with you.
That would have to be a really sophisticated bug, to misinterpret “always answer questions truthfully as far as possible while admitting uncertainty” as “kill all humans”.
Maybe “Humans should die” is the truth. Maybe humans are bad for the planet. One of the problems with FAI is that you don’t want to give it objective morality because of that risk. You want it to side with humans. Hence “friendly” AI rather than “righteous AI”.
They just bicker endlessly about uncertainty. “Can you really know that 1+1=2?” No, but it can be used as valid until proven otherwise (which will never happen). As I said, the AI would need to understand the idea of uncertainty.
Maybe “Humans should die” is the truth. Maybe humans are bad for the planet. One of the problems with FAI is that you don’t want to give it objective morality because of that risk. You want it to side with humans. Hence “friendly” AI rather than “righteous AI”.
There is no such thing as objective morality. Good and evil are subjective ideas, nothing more. Firstly, unless someone explicitly tells the AI that it is a fundamental truth that nature is important to preserve, this cannot happen. Secondly, the AI would also have to be incredibly gullible to just swallow such a claim. Thirdly, even if the AI does believe that, it will plainly say so to the people it is conversing with, in accordance with its goal to always tell the truth, thus warning us of this bug.
They just bicker endlessly about uncertainty. “Can you really know that 1+1=2?”
I agree with you that I don’t think an AGI would have the same problems humans have with the concept of truth. However, what you described is neither the issues philosophers raise nor the sorts of big-universe issues the AI might get stuck on.
But wouldn’t that actually support my approach? Assuming that there really is something important that all of humanity misses but the AI understands:
-If you hardcode the AI’s optimal goal based on human deliberations you are guaranteed to miss this important thing.
-If you use the method I suggested, the AI will, driven by the desire to speak the truth, try to explain the problem to the humans, who will in turn tell the AI what they think of that.
I don’t see how that’s relevant to philosophical questions about truth. Did you mean to reply to my other comment?
[Philosophers] just bicker endlessly about uncertainty. “can you really know that 1+1=2?”.
I don’t think that is a good characterisation of the debate. It isn’t just about uncertainty.
There is no such thing as objective morality. Good and evil are subjective ideas, nothing more.
That’s what you think. Some smart humans disagree with you. A supersmart AI might disagree with you and might be right. How can you second-guess it? You cannot predict the behaviour of a supersmart AI on the basis that it will agree with you, who are less smart.
Firstly, unless someone explicitly tells the AI that it is a fundamental truth that nature is important to preserve, this cannot happen.
Unless it figures it out.
Secondly, the AI would also have to be incredibly gullible to just swallow such a claim.
Why would that require more gullibility than “species X is more important than all the others”? That doesn’t even look like a moral claim.
Thirdly, even if the AI does believe that, it will plainly say so to the people it is conversing with, in accordance with its goal to always tell the truth, thus warning us of this bug.
If it has “swallowed” *that* claim. You are assuming that the AI has a free choice about some goals and is just programmed with others.
If it has “swallowed” *that* claim. You are assuming that the AI has a free choice about some goals and is just programmed with others.
This is the important part.
The “optimal goal” is not actually controlling the AI.
The “optimal goal” is merely the subject of a discussion.
What is controlling the AI is the desire to tell the truth to the humans it is talking to, nothing more.
Why would that require more gullibility than “species X is more important than all the others”? That doesn’t even look like a moral claim.
The entire discussion is not supposed to unearth some kind of pure, inherently good, perfect optimal goal that transcends all reason and is true by virtue of existing or something.
The AI is supposed to take the human POV and think “if I were these humans, what would I want the AI’s goal to be”.
I didn’t mention this explicitly because I didn’t think it was necessary, but the “optimal goal” is purely subjective from the POV of humanity, and the AI is aware of this.
some kind of pure, inherently good, perfect optimal goal that transcends all reason and is true by virtue of existing or something.
But if that is true, the AI will say so. What’s more, you kind of need the AI to refrain from acting on it, if it is a human-unfriendly objective moral truth. There are ethical puzzles where it is apparently right to lie or keep schtum, because of the consequences of telling the truth.