Could well be. Does that have anything to do with pattern-matching AI risk to SF, though?
Just speaking of weaknesses of the paperclip maximizer thought experiment. I’ve seen this misunderstanding in at least 4 out of 10 instances where the thought experiment was brought up.