fwiw I agree with the quotes from Tetraspace you gave, and disagree with "has communicated a sense of danger which is unsupported by substantial evidence." The sense of danger is very much supported by the current state of evidence.
That said, I agree that the more detailed image is kinda distastefully propagandistic in a way that the original cutesy shoggoth image is not. I feel like the more detailed image adds an extra layer of revoltingness and scariness (e.g. the sharp teeth) beyond what would be appropriate given our state of knowledge.
re: “the sense of danger is very much supported by the current state of evidence”—I mean, you’ve heard all this stuff before, but I’ll summarize:
--Seems like we are on track to probably build AGI this decade.
--Seems like we are on track to have an intelligence explosion, i.e. a speedup of AI R&D due to automation.
--Seems like the AGI paradigm that'll be driving all this is fairly opaque and poorly understood. We have scaling laws for things like text perplexity, but other than that we are struggling to predict capabilities, and double-struggling to predict inner mechanisms / 'internal' high-level properties like 'what, if anything, does it actually believe or want.'
--A bunch of experts in the field have come out and said that this could go terribly & we could lose control, even though it's low-status to say this & took courage.
--Generally speaking, the people who have thought about it the most are the most worried; the most detailed models of what the internal properties might be like are the most gloomy, etc. This might be due to selection/founder effects, but sheesh, it's not exactly good news!
re: "I feel like the more detailed image adds an extra layer of revoltingness and scariness (e.g. the sharp teeth) beyond what would be appropriate given our state of knowledge"—
Now I’m really curious to know what would justify the teeth. I’m not aware of any AIs intentionally biting someone, but presumably that would be sufficient.
Perhaps if we were dealing not with deepnets primarily trained to predict text, but rather deepnets primarily trained to addict people with pleasant seductive conversation and then drain their wallets? Such an AI would in some real sense be an evolved predator of humans.