I think some concreteness might be useful here. When I write code (no pretense at AI here), I often write algorithms that take different actions depending on the circumstances. I can’t recall a time when I collected possible steps, evaluated them, and executed the possibility with the highest utility. Instead I, as the programmer, try to divide the world into disjoint possibilities, write a procedure that distinguishes between them (with if-then-else, or, using OO, by arranging for the right kind of object to be acting at the time), and design the code to take the specific action I expect will make sense for that context when that path is chosen. There’s little of “could” or “should” here.
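To make that concrete, here is a minimal sketch (in Python, with an invented example and thresholds) of the dispatch style I mean: the alternatives were enumerated when the code was written, and at run time nothing ever holds a set of “coulds” to compare.

```python
def handle(temperature_c):
    # The branches were fixed at programming time; the running code never
    # weighs alternatives against each other, it just lands in one branch
    # and takes the pre-planned action for that context.
    if temperature_c > 30:
        return "open the window"
    elif temperature_c < 10:
        return "turn on the heater"
    else:
        return "do nothing"

print(handle(35))  # -> open the window
```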
On the other hand, when I walk into the kitchen thinking thoughts of dessert, I generate possibilities based on my recollection of what’s in the fridge and the cupboards, or sometimes based on an actual search of those locations. I then think about which will taste better, which I’ve had more recently, which is getting old and needs to be used up, and then pick one (without justifying the choice based on those evaluations). There seems to be lots of CSA going on here, even though it’s a simple, highly constrained problem area.
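A rough sketch of that generate-and-evaluate pattern, with invented pantry contents and scores; the point is only that the alternatives exist as data at run time and get compared, rather than being baked into branches.

```python
# Hypothetical dessert chooser: generate the "coulds", score each on a few
# loose criteria, then pick one.
options = ["leftover pie", "ice cream", "bananas going brown"]

scores = {
    "leftover pie":        {"taste": 8, "novelty": 5, "needs_using_up": 3},
    "ice cream":           {"taste": 9, "novelty": 2, "needs_using_up": 1},
    "bananas going brown": {"taste": 4, "novelty": 7, "needs_using_up": 9},
}

def evaluate(item):
    s = scores[item]
    # A vague, unjustified combination of the criteria.
    return s["taste"] + s["novelty"] + s["needs_using_up"]

choice = max(options, key=evaluate)
print(choice)  # -> bananas going brown
```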
When human chess masters play, they retain more could-ness in their evaluations if they consider the possibility of not making the “optimal” move in order to psych out their opponents. I don’t know whether the chess-playing automatons consider such possibilities. Without them, you could say the machines are constrained to make the move that leaves them in the best position according to their evaluation metric. So even though they do explicitly evaluate alternatives, they have a single metric for making the choice. The masters I just described have multiple metrics and only a vague approach to combining them, yet that may be the essence of good game playing.
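A toy sketch of that contrast, with invented moves and numbers: the engine-style chooser maximizes one metric, while the master-style chooser blends several metrics with a rough weighting.

```python
# Two candidate moves, each scored on two made-up metrics.
moves = {
    "solid developing move": {"position": 0.6, "surprise": 0.1},
    "offbeat pawn thrust":   {"position": 0.3, "surprise": 0.9},
}

# Engine-style choice: a single evaluation metric decides.
engine_choice = max(moves, key=lambda m: moves[m]["position"])

# Master-style choice: several metrics, combined by a vague weighting.
master_choice = max(moves, key=lambda m: 0.5 * moves[m]["position"]
                                       + 0.5 * moves[m]["surprise"])

print(engine_choice)  # -> solid developing move
print(master_choice)  # -> offbeat pawn thrust
```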
Bottom line? When I’m considering a big decision, I want to leave more variables open, to simulate more possible worlds and the consequences of my choices. When I’m on well-trodden ground, I hope for an optimized decision procedure that knows what to do and has simple rules that allow it to determine which pre-analyzed direction is the right one. The reason we want AIs to be open in this way is that we’re hoping they have the breadth of awareness to tackle problems that they haven’t explicitly been programmed for. I don’t think you (the programmer) can leave out the could-ness unless you can enumerate the alternative actions and program in the relevant distinctions ahead of time.