> [Philosophers] just bicker endlessly about uncertainty. “Can you really know that 1+1=2?”
I don’t think that is a good characterisation of the debate. It isn’t just about uncertainty.
> there is no such thing as objective morality. Good and evil are subjective ideas, nothing more.
That’s what you think. Some smart humans disagree with you. A supersmart AI might disagree with you and might be right. How can you second-guess it? You cannot predict the behaviour of a supersmart AI on the basis that it will agree with you, who are less smart.
> Firstly, unless someone explicitly tells the AI that it is a fundamental truth that nature is important to preserve, this can not happen.
Unless it figures it out.
> Secondly, the AI would also have to be incredibly gullible to just swallow such a claim.
Why would that require more gullibility than “species X is more important than all the others”? That doesn’t even look like a moral claim.
> Thirdly, even if the AI does believe that, it will plainly say so to the people it is conversing with, in accordance with its goal to always tell the truth, thus warning us of this bug.
If it has “swallowed” that claim. You are assuming that the AI has a free choice about some goals and is just programmed with others.
> If it has “swallowed” that claim. You are assuming that the AI has a free choice about some goals and is just programmed with others.
This is the important part.
- the “optimal goal” is not actually controlling the AI.
- the “optimal goal” is merely the subject of a discussion.
- what is controlling the AI is the desire to tell the truth to the humans it is talking to, nothing more.
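A toy sketch of that separation, with entirely hypothetical names (this is an illustration of the claim, not the actual design):

```python
# Sketch only: the agent's sole operative objective is truthful reporting.
# Conclusions about the "optimal goal" are stored as beliefs (data) that the
# AI can discuss; they are never installed as objectives that drive action.

class OracleAI:
    def __init__(self) -> None:
        # The only goal wired into behaviour.
        self.operative_goal = "tell the truth to the humans I am talking to"
        # Beliefs about the "optimal goal" live here, as content.
        self.beliefs: dict[str, str] = {}

    def deliberate(self, question: str, conclusion: str) -> None:
        # Discussing the optimal goal only updates beliefs; nothing is ever
        # written into operative_goal.
        self.beliefs[question] = conclusion

    def answer(self, question: str) -> str:
        # The agent's only action: truthfully report what it believes.
        return self.beliefs.get(question, "I have no settled view on that.")


ai = OracleAI()
ai.deliberate("optimal goal?", "whatever these humans would want it to be")
print(ai.answer("optimal goal?"))
```

Under this toy picture, even if the AI “swallowed” a claim like “nature is important to preserve”, that claim would sit among its beliefs, not in its operative goal.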
> Why would that require more gullibility than “species X is more important than all the others”? That doesn’t even look like a moral claim.
The entire discussion is not supposed to unearth some kind of pure, inherently good, perfect optimal goal that transcends all reason and is true by virtue of existing or something.
The AI is supposed to take the human POV and think “if I were these humans, what would I want the AI’s goal to be”.
I didn’t mention this explicitly because I didn’t think it was necessary, but the “optimal goal” is purely subjective, defined from humanity’s POV, and the AI is aware of this.
> some kind of pure, inherently good, perfect optimal goal that transcends all reason and is true by virtue of existing or something.
But if that is true, the AI will say so. What’s more, you kind of need the AI to refrain from acting on it, if it is a human-unfriendly objective moral truth. There are ethical puzzles where it is apparently right to lie or keep schtum, because of the consequences of telling the truth.