Yes! It’s interesting how the concepts of agency and choice seem so natural and ingrained for us, humans, that we are often tempted to think they describe reality more deeply than they really do. We seem to see agents, preferences, goals, and utilities everywhere, but what if these concepts are not particularly relevant to the actual mechanism of decision-making, even from the first-person view?
What if much of the feeling of choice and agency is actually a social adaptation, a storytelling and explanatory device that allows us to communicate and cooperate with other humans (and, perhaps more peculiarly, with our future selves)? While it feels like we are making a choice due to reasons, there are numerous experiments that point to the explanatory, after-the-fact role of reasoning in decision-making. Yes, abstract reasoning also allows us to model conceptually a great many things, but those models just serve as additional data inputs to the true hidden mechanism of decision-making.
It shouldn’t be surprising, then, if for other minds [such as AGI] that lack this primal adaptation to being modeled by others, decision-making feels nothing like choice or agency. We already see this in our simpler AIs: choice and reasoning mean nothing to a health diagnostic system; it simply computes. It is only so that we, humans, can feel we understand “why” it made a particular choice that we add an explanatory module, one that gives us “reasons” but is completely unnecessary for the decision-making itself!