Are you trying to make sure a bad Singularity happens?
If Logos is seeking it, then I assume it is not something that he considers bad. Presumably because he thinks intelligence is just that cool. Pursuing the goal necessarily results in human extinction and tiling the universe with computronium, and I call that Bad, but he should still answer “No”. (This applies even if I go all cognitive-realism on him and say he is objectively wrong and that he is, in fact, trying to make sure a Bad Singularity happens.)
Pursuing the goal necessarily results in human extinction and tiling the universe with computronium, and I call that Bad, but he should still answer “No”.
As I’ve asked elsewhere to resoundingly negative results:
Why is it automatically “bad” to create an AGI that causes human extinction? If I value ever-increasingly-capable sentience… why must I be anthropocentric about it? If I were to view a recursively improving AGI that is sentient as a ‘child’ or ‘inheritor’ of “the human will”, then why should it be so awful if humanity were to be rendered obsolete or even extinct by it?
I do not, furthermore, define “humanity” in such strict terms as to require that it be flesh-and-blood to be “human”. If our FOOMing AGI were a “human” one, I personally would find it in the range of acceptable outcomes if it converted the available carbon and silicon of the earth into computronium.
Sure, it would suck for me, but those of us currently alive are going to die anyway, and over a long enough timeline the survival rate for even the clinically immortal drops to zero.
I ask this question because I feel that it is relevant. Why is “inheritor” non-Friendly AGI “bad”?
(This applies even if I go all cognitive-realism on him and say he is objectively wrong and that he is, in fact, trying to make sure a Bad Singularity happens.)
Caveat: It is possible for me to discuss my own motivations using someone else’s valuative framework. So context matters. The mere fact that I would say “No” does not mean that I could never say “Yes—as you see it.”
Honest question: If our flesh were dissolved overnight and we instead were instantiated inside a simulated environment—without our permission—would you consider this a Friendly outcome?
Why is “inheritor” non-Friendly AGI “bad”?
It isn’t automatically bad. I just don’t want it. This is why I said your answer is legitimately “No”.
Fair enough.
Honest question: If our flesh were dissolved overnight and we instead were instantiated inside a simulated environment—without our permission—would you consider this a Friendly outcome?
Potentially, depending on the simulated environment.
Assume Earth-like or video-game-like (in the latter case, including ‘respawns’).
Video game upgrades! Sounds good.
I believe you mean, “I’m here to kick ass and chew bubblegum… and I’m all outta gum!”