I’m not sure humans aren’t utility maximizers; they may simply not maximize utility over worldstates. It does seem plausible to me, though, that humans are utility maximizers over brainstates.
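(To make the worldstate/brainstate distinction concrete, here’s a minimal sketch — all names and numbers are hypothetical illustrations, not anyone’s actual model — of the same agent scored two ways: one utility function defined over external facts, one defined only over the agent’s internal state.)

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    money_in_bank: int      # an external fact about the world
    friend_is_happy: bool   # another external fact

@dataclass
class BrainState:
    felt_satisfaction: float  # a purely internal, experiential quantity

def worldstate_utility(w: WorldState) -> float:
    # Scores the world itself; the agent's experience plays no role.
    return w.money_in_bank + (100.0 if w.friend_is_happy else 0.0)

def brainstate_utility(b: BrainState) -> float:
    # Scores only the agent's internal state; two very different worlds
    # receive the same score if they induce the same brainstate.
    return b.felt_satisfaction
```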
(Also, even if humans aren’t utility maximizers, that doesn’t mean they will find the behavior of other non-utility-maximizing agents intuitive. Humans often find the behavior of other humans extraordinarily unintuitive, for example—and those are identical brain designs we’re talking about here. If we start considering larger regions of mindspace, there’s no guarantee that humans would like a non-utility-maximizing AI.)