Can someone give a reason why wireheading would be bad?
Well, we don’t want our machines to wirehead themselves. If they do, they are less likely to be interested in doing what we tell them to, which would mean they are then of less use to us.
Sure, but what about us?
As designers, we have good reasons to find a way around wireheading (and somewhat less seriously and metaphorically, Azathoth has good reasons to prevent us from wireheading). So making wireheading-proof agents is important, I agree, but that doesn’t apply to ourselves.
The connection with us could be that, to the extent we can, we choose what we want as though we ourselves were machines at our disposal.
… There is a component that wants doughnuts for breakfast, but actually “I” want eggs for breakfast since I’d rather be healthy … and the machine that is me ostensibly makes eggs. The hedonistic component of our brain that wants wireheading is apparently suppressed when it comes into conflict with real external goals.