It’s multiple agents with their own preferences fighting for the mic. One agent with a loop is not a good model here, imo.
I disagree; I think that rather than multiple agents, one should self-model as zero agents.
Rather than the expected link to the blue-minimizing robot, I will instead link you somewhere else.
You are addressing the clothes and telling them they have no emperor. They can’t hear you.
But Dennett is sort of beside the point here. I can build a simple agent ecosystem in LISP, and nobody would suggest there is anything conscious there. “Agent” talk as applied to such a LISP program would just be a useful modeling technique. An “agent” could just be “something with a utility function that can act,” not “conscious self.”
In fact, in the kinds of dilemmas humans face that the OP discusses, often some of the “agents” in question are something very old and pre-verbal and (regardless of your stance on consciousness) not very conscious at all. This does not prevent them from leaving a large footprint on our mental landscape.
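To make the LISP point concrete, here is a toy sketch of the kind of thing I mean — the agent names, utility functions, and numbers are arbitrary illustrations, not a claim about how minds actually work:

```lisp
;; A toy "agent ecosystem": each agent is just a utility function plus an
;; action. Nothing here is conscious; "agent" is purely a modeling device.
(defstruct agent name utility action)

(defparameter *state* 0)

(defparameter *agents*
  (list
   (make-agent :name "hunger"
               :utility (lambda (s) (- 10 s))   ; prefers the state low
               :action  (lambda (s) (- s 1)))
   (make-agent :name "ambition"
               :utility (lambda (s) s)          ; prefers the state high
               :action  (lambda (s) (+ s 2)))))

(defun step-ecosystem ()
  ;; Whichever agent values the current state most gets the mic this tick.
  (let ((winner (first (sort (copy-list *agents*) #'>
                             :key (lambda (a)
                                    (funcall (agent-utility a) *state*))))))
    (setf *state* (funcall (agent-action winner) *state*))
    (format t "~a acted; state is now ~a~%" (agent-name winner) *state*)))

;; (loop repeat 5 do (step-ecosystem))
```

A few dozen lines, several “agents” with their own preferences fighting for the mic, and obviously no consciousness anywhere in it.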