Tim Tyler: Wiring almost anything about today’s humans into a proposed long-term utility function seems rather obviously dangerous.
Gee, who’s articulating this judgment? A fish? A leaf of grass? A rock? Why, no, it’s a human!
Caledonian: I for one do not see the benefit in getting superintelligences to follow the preferences of lesser, evolved intelligences. There’s no particular reason to believe that humans with access to the power granted by a superintelligence would make better choices than the superintelligence at all—or that their preferences would lead to states that their preferences would approve of, much less actually benefit them.
Who is it that’s using these words “benefit”, talking about “lesser” intelligences, invoking this mysterious property of “better”-ness? Is it a star, a mountain, an atom? Why, no, it’s a human!
...I’m seriously starting to wonder if some people just lack the reflective gear required to abstract over their own background frameworks. All this talk of moral “danger” and of things “better” than us is the execution of a computation embodied in humans, and nowhere else; if you want an AI that follows that thought and cares about it, it will have to mirror humans in that sense.