Also, can we even get ‘real goals’ like this? We’re treading into territory where we risk proposing something as silly as blue unicorns on the far side of the moon. We use goals to model other human intelligences; that’s built into our language, that’s how we imagine other agents, that’s how you predict a wolf, a cat, another ape, and so on. Goals come easily within imagination (which is not reductionist, and where the true paperclip count exists as a property of the ‘world’). Outside imagination, though...