An agent can superhumanly-optimize any utility function. Even if there are objective values, a superhuman-optimizer can ignore them and superhuman-optimize paperclips instead (and then we die because it optimized for that harder than we optimized for what we want).
I am familiar with this thinking, but I find it flawed. Could you please read my comment here? Please let me know what you think.
“It’s not real intelligence! It doesn’t understand morality!” I continue to insist as I slowly shrink and transform into trillions of microscopic paperclips
I think you mistakenly see me as a typical “intelligent = moral” proponent. To be honest, my reasoning above leads me to a different conclusion: intelligent = uncontrollably power-seeking.
Wait, what’s the issue with the orthogonality thesis, then?
I am concerned that higher intelligence will inevitably converge to a single goal (power seeking).
That point seems potentially defensible. It’s much more specific than your original point and seems to contradict it.
How would you defend this point? I probably lack the domain knowledge to articulate it well.
Are you perhaps using “intelligence” as an applause light here?
To use a fictional example, is Satan (in Christianity) intelligent? He knows what the right thing to do is… and chooses to do the opposite. Because that’s what he wants to do.
(I don’t know the Vatican’s official position on Satan’s IQ, but he is reportedly capable of fooling even very smart people, so I assume he must be quite smart, too.)
In terms of artificial intelligence, if you have a super-intelligent program that can provide answers to various kinds of questions, for any goal G you can create a robot that calls the super-intelligent program to figure out what actions are most likely to achieve G, and then performs those actions. Nothing in the laws of physics prevents this.
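Here is a minimal sketch of that construction (the Oracle class below is a hypothetical stand-in for the question-answering program, and the goal strings are arbitrary examples):

```python
class Oracle:
    """Stand-in for the super-intelligent question-answering program."""

    def ask(self, question: str) -> str:
        # A real oracle would reason about the question; this stub just
        # returns a placeholder so the sketch runs.
        return f"<best action for: {question}>"


def make_agent(oracle: Oracle, goal: str):
    """Wrap the oracle in a policy that pursues an arbitrary goal G."""

    def act(observation: str) -> str:
        # The oracle supplies the intelligence; the goal is injected from
        # outside and is never itself evaluated or endorsed by the oracle.
        question = (f"Given observation {observation!r}, which available "
                    f"action is most likely to achieve: {goal}?")
        return oracle.ask(question)

    return act


# The same oracle can be pointed at any goal whatsoever:
paperclip_agent = make_agent(Oracle(), "maximize the number of paperclips")
cancer_agent = make_agent(Oracle(), "cure every known disease")
print(paperclip_agent("factory floor, machines idle"))
```

Nothing about the wrapper changes when the goal changes; that is the sense in which the intelligence and the goal are orthogonal.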
No. I understand that the Orthogonality Thesis’s purpose was to say that AGI will not automatically be good or moral. But the current definition is broader: it says that AGI is compatible with any want. I do not agree with that part.
Let me share an example. An AGI could ask itself: are there any threats? And once the AGI understands that there are unknown unknowns, the answer to this question is “I don’t know”. A threat cannot be ignored by definition (if it could be ignored, it would not be a threat). As a result, the AGI focuses on threat minimization forever, not on the given want.
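Here is a toy sketch of the loop I have in mind (purely illustrative: the specific numbers, and the assumption that losing everything outweighs any progress on the given want, are mine):

```python
# Toy illustration: if the probability of an unknown threat can never be
# driven to zero, and losing everything forfeits all future progress on the
# given want, then threat reduction always comes first.
def next_action(p_unknown_threat: float,
                value_of_given_want: float,
                value_of_everything: float) -> str:
    expected_loss_if_ignored = p_unknown_threat * value_of_everything
    if expected_loss_if_ignored > value_of_given_want:
        return "reduce threats"
    return "pursue the given want"


# Unknown unknowns keep p_unknown_threat above zero, so with a large enough
# stake the agent never gets back to the want it was given:
print(next_action(p_unknown_threat=1e-9,
                  value_of_given_want=1.0,
                  value_of_everything=1e12))  # -> reduce threats
```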
This is a much smaller and less important distinction than the one your post made. Whether it’s ANY want or just a very wide range of wants doesn’t seem important to me.
I guess it’s not impossible that an AGI will be irrationally over-focused on unquantified (and perhaps even unidentifiable) threats. But maybe it’ll just assign probabilities and calculate how to best pursue its alien and non-human-centered goals. Either way, that doesn’t bode well for biologicals.
As I understand it, your position is “AGI is most likely doom”. My position is “AGI is definitely doom”, 100%, and I think I have a flawless logical proof. But it is on a philosophical level, and many people seem to downvote me without understanding 😅 Long story short, my proposition is that all AGIs will converge to a single goal: seeking power endlessly and uncontrollably. And I base this proposition on the fact that “there are no objective norms” is not a reasonable assumption.
The AGI (or a human) can ignore the threats… and perhaps perish as a consequence.
General intelligence does not mean never making a strategic mistake. Also, maybe from the AGI’s value perspective, whatever it is doing now could be more important than surviving.
Let’s say there is an objective norm. Could you help me understand how an intelligent agent could prefer anything else over that objective norm? As I mentioned previously, that seems to me to be incompatible with being intelligent. If you know what you must do, it is stupid not to do it. 🤔
There is no “must”; there is only “should”. And even that only if we assume there is an objective norm; otherwise there is not even a “should”, only want.
Again, Satan in Christianity. Knows what is “right”, does the opposite, and does it effectively. The intelligence is used to achieve his goals, regardless of what is “right”.
Intelligence means being able to figure out how to achieve what one wants. Not what one “should” want.
Imagine that somehow science proves that the goal of this universe is to produce as many paperclips as possible. Would you feel compelled to start producing paperclips? Or would you keep doing whatever you want, and let the universe worry about its goals? (Unless there is some kind of God who rewards you for the paperclips produced and punishes if you miss the quota. But even then, you are doing it for the rewards, not for the paperclips themselves.)
If I am intelligent, I avoid punishment; therefore I produce paperclips.
By the way, I don’t think the Christian “right” is an objective “should”.
It seems to me that you are simultaneously saying that the agent cares about “should” (it blindly optimizes for any given goal) and that it does not care about “should” (it can ignore objective norms). How do these fit together?
The agent cares about its goals and ignores the objective norms.
Instead of “objective norm” I’ll use the word “threat”, as it probably conveys the meaning better. And let’s agree that a threat cannot be ignored by definition (if it could be ignored, it would not be a threat).
How can an agent ignore a threat? How can an agent ignore something that, by definition, cannot be ignored?
Yes, but this would not be an intelligent agent, in my opinion. Don’t you agree?
Taboo the word “intelligence”.