You’re right, but isn’t this a needless distraction from the more important point, i.e. that it doesn’t matter whether we humans find interesting or valuable what the (unfriendly) AI does?
I dunno, I think this is a pretty entertaining instance of anthropomorphizing + generalizing from oneself. At least in the future, I’ll be able to say things like “for example, Goertzel—a genuine AI researcher who has produced stuff—actually thinks that an intelligent AI can’t be designed to have an all-consuming interest in something like pi, despite all the real-world humans who are obsessed with pi!”