It would be better to build such machines based on a theory that typically results in localized screw-ups… rather than a theory that destroys the world by default, unless you tell it everything about you.
Where’s the “I super-agree” button?
I agree with you that maximizing utility is dangerous and wrong even just in ordinary humans. That’s not what we’re for and that’s not what the good life is about.
We don’t need a clean-cut, provable decision theory that will drive the universe into a hole of ‘utility’. We need more of a wibbly-wobbly, humany-ethicy ball of… stuff.