I’m not sure that awareness is needed for paperclip maximizing. For example, one might call fire a very good CO2 maximizer. Actually, I’m not even sure you can apply the word awareness to non-human-like optimizers.
“If we reprogrammed you to count paperclips instead”
This is a conversation about changing my core utility function / goals, and what you are discussing would be far more of an architectural change. I meant that, within my architecture (and, I assume, generalizing to most human architectures and most goals), we are, on some level, aware of the actual goal. There are occasional failure states (Alicorn mentioned that an iron deficiency registers as a craving for ice o.o), but these tend to tie in to low-level failures, not high-order goals like “make a paperclip”, and STILL we tend to manage to identify them and learn how to achieve our actual goals.