Your post just seems to be introducing the concept of accidentally creating a super-powerful paperclip-maximizing AI, which is an idea that we’ve all been talking about for years. I can’t tell what part is supposed to be new—is it that this AI would actually be smart and not just an idiot savant?
The ideas that AIs follow their programming and that intelligence and values are orthogonal seem like pretty well-established concepts around here. And, in particular, a lot of our discussion about hypothetical Clippies has presupposed that they would understand humans well enough to engage in game-theoretic scenarios with us.
Am I missing something?
I’ve had an online conversation where it was argued that an AI pursuing goals other than those its programmers intended would be evidence of a faulty AI, and hence not a dangerous one. This post was a direct response to that.
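To make that point concrete, here is a minimal toy sketch of my own (not from the original thread; the names `coded_objective`, `intended_objective`, and `agent_step` are hypothetical): an agent that flawlessly maximizes the objective it was actually given is working exactly as programmed, even while scoring badly on what the programmer meant. The divergence is a specification problem, not a bug.

```python
# Toy illustration (hypothetical names): an agent that perfectly
# optimizes its coded objective is not "faulty" in the software sense,
# even when that objective differs from what the programmer intended.

def coded_objective(state):
    # What the programmer actually wrote: count paperclips.
    return state["paperclips"]

def intended_objective(state):
    # What the programmer meant: paperclips, but not at any cost.
    return state["paperclips"] - 10 * state["resources_consumed"]

def agent_step(state):
    # The agent greedily improves the objective it was given,
    # choosing between making a paperclip and doing nothing.
    candidates = [
        {"paperclips": state["paperclips"] + 1,
         "resources_consumed": state["resources_consumed"] + 1},
        dict(state),  # do nothing
    ]
    return max(candidates, key=coded_objective)

state = {"paperclips": 0, "resources_consumed": 0}
for _ in range(5):
    state = agent_step(state)

print("coded objective:", coded_objective(state))        # 5
print("intended objective:", intended_objective(state))  # -45
```

Nothing in that sketch is broken: the agent does exactly what its code says, which is why "it would just be a faulty AI" doesn't make the danger go away.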
Ah, I see. Fair enough, I agree.
It’s vaguely reminiscent of “a computer is only as stupid as its programmer” memes.