It appears to me that these kinds of questions are impossible to resolve coherently without reference to some specific AGI architecture. When “the AI” is an imaginary construct whose structure is only partially shared between the different people imagining it, we can have all the vague arguments we like and arrive at no real answers whatsoever. When it’s an actual, mathematically specified object, we can resolve the issue by just looking at the math, usually without even having to implement the described “AI”.
Therefore, I recommend we stop arguing about things we can’t specify.
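To make the contrast concrete, here is a deliberately trivial sketch (my own toy example, not any real proposal’s architecture): once the agent is written down explicitly, a claim about its behavior is settled by reading its definition, without implementing or running anything.

```python
from typing import Callable, Iterable

def argmax_agent(utility: Callable[[str], float],
                 actions: Iterable[str]) -> str:
    """An agent specified completely as: take the action maximizing `utility`."""
    return max(actions, key=utility)

# With the specification in hand, the claim "this agent never chooses an
# action it assigns lower utility than some available alternative" is
# settled by reading the one-line body above, not by arguing about what
# "the AI" might do. (Utility values here are arbitrary illustrations.)
paperclip_utility = lambda a: {"make_paperclips": 10.0, "do_nothing": 0.0}[a]
assert argmax_agent(paperclip_utility,
                    ["make_paperclips", "do_nothing"]) == "make_paperclips"
```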
At the moment, people do not program AIs with explicit utility functions, but program them to pursue certain limited goals as in the example.
At the moment, people do not program AGI agents. Period. Whatsoever. There aren’t any operational AGIs except for the most primitive, infantile kind used in reinforcement-learning experiments at places like DeepMind.
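For the record, the kind of limited-goal experiment I mean looks roughly like the sketch below. The environment, names, and numbers are all made up for illustration and bear no relation to DeepMind’s actual codebases; the point is that the “goal” lives in a hard-coded reward signal, not in any explicit utility function the agent represents or reasons about.

```python
import random

# Toy line-world: states 0..4, goal at state 4. The "goal" exists only
# as a reward signal in the environment loop, not inside the agent.
N_STATES, GOAL, ACTIONS = 5, 4, [-1, +1]  # actions: move left / move right

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # standard tabular Q-learning update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s_next

# The learned policy pursues exactly one narrow goal; nothing here
# is an "AGI agent" in any meaningful sense.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)])
          for s in range(N_STATES) if s != GOAL}
print(policy)  # expected: +1 (move right) from every non-goal state
```

Whether a setup like this counts as even an “infantile” AGI is exactly the sort of question that can only be answered once the agent is pinned down this explicitly.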