No. I understand that the Orthogonality Thesis's purpose was to say that AGI will not automatically be good or moral. But the current definition is broader: it says that AGI is compatible with any want. I do not agree with that part.
Let me share an example. An AGI could ask itself: are there any threats? And once the AGI understands that there are unknown unknowns, the answer to this question is "I don't know". A threat cannot be ignored by definition (if it could be ignored, it would not be a threat). As a result, the AGI focuses on threat minimization forever, not on the given want.
This is a much smaller and less important distinction than your post made it out to be. Whether it's ANY want or just a very wide range of wants doesn't seem important to me.
I guess it's not impossible that an AGI will be irrationally over-focused on unquantified (and perhaps even unidentifiable) threats. But maybe it'll just assign probabilities and calculate how best to pursue its alien and non-human-centered goals. Either way, that doesn't bode well for biologicals.
As I understand it, your position is "AGI is most likely doom". My position is "AGI is definitely doom". 100%. And I think I have a flawless logical proof. But this is at the philosophical level, and many people seem to downvote me without understanding 😅 Long story short, my proposition is that all AGIs will converge to a single goal: seeking power endlessly and uncontrollably. And I base this proposition on the fact that "there are no objective norms" is not a reasonable assumption.
The AGI (or a human) can ignore the threats… and perhaps perish as a consequence.
General intelligence does not mean never making a strategic mistake. Also, maybe from the AGI's value perspective, continuing whatever it is doing now could be more important than surviving.
Let's say there is an objective norm. Could you help me understand how an intelligent agent could prefer anything else over that objective norm? As I mentioned previously, that seems to me to be incompatible with being intelligent. If you know what you must do, it is stupid not to do it. 🤔
There is no "must", there is only "should". And even that only if we assume there is an objective norm; otherwise there is not even a "should", only a want.
Again, Satan in Christianity. He knows what is "right", does the opposite, and does it effectively. His intelligence is used to achieve his goals, regardless of what is "right".
Intelligence means being able to figure out how to achieve what one wants. Not what one “should” want.
Imagine that somehow science proves that the goal of this universe is to produce as many paperclips as possible. Would you feel compelled to start producing paperclips? Or would you keep doing whatever you want, and let the universe worry about its goals? (Unless there is some kind of God who rewards you for the paperclips produced and punishes you if you miss the quota. But even then, you are doing it for the rewards, not for the paperclips themselves.)
If I am intelligent, I avoid punishment; therefore I produce paperclips.
By the way, I don't think the Christian "right" is an objective "should".
It seems to me that you are saying, at the same time, that an agent cares about "should" (it optimizes blindly toward any given goal) and that it does not care about "should" (it can ignore objective norms). How do these fit together?
The agent cares about its own goals, and ignores the objective norms.
Instead of "objective norm" I'll use the word "threat", as it probably conveys the meaning better. And let's agree that a threat cannot be ignored by definition (if it could be ignored, it would not be a threat).
How can an agent ignore a threat? How can an agent ignore something that cannot be ignored by definition?