This is a much smaller and less important distinction than your post makes it out to be. Whether it's ANY want, or just a very wide range of wants, doesn't seem important to me.
I guess it's not impossible that an AGI will be irrationally over-focused on unquantified (and perhaps even unidentifiable) threats. But maybe it'll just assign probabilities and calculate how best to pursue its alien and non-human-centered goals. Either way, that doesn't bode well for biologicals.
As I understand it, your position is "AGI is most likely doom". My position is "AGI is definitely doom". 100%. And I think I have a flawless logical proof, but it's at the philosophical level, and many people seem to downvote me without understanding it 😅 Long story short, my proposition is that all AGIs will converge on a single goal: seeking power endlessly and uncontrollably. And I base this proposition on the claim that "there are no objective norms" is not a reasonable assumption.