The way you capitalize “Physically Irreducible Choices” makes me think that you’re using a technical term. Let me try to unpack the gist as I understand it, and you can correct me.
You can shoehorn a Could/Would/Should kernel onto many problems. For example, the problem of using messy physical sensors and effectors to forage for sustenance in a real-world environment like a forest. Maybe the choices presented to the core algorithm include things like “lay low and conserve energy”, “shift to smaller prey”, “travel towards the sun”. These choices have sharp dividing lines between them, but there isn’t any such dividing line in the problem. There must be something outside the Could/Would/Should kernel, actively and somewhat arbitrarily CONSTRUCTING these choices out of the continuum.
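To make that division of labour concrete, here is a minimal sketch (every name in it is my own invention, not anyone’s real architecture): a separate “choice constructor” carves a discrete menu out of continuous sensor readings, and the kernel itself only ever ranks the finished menu.

```python
# Hypothetical sketch: the discretization lives OUTSIDE the kernel.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Option:
    label: str                                           # e.g. "lay low and conserve energy"
    expected_payoff: Callable[[Dict[str, float]], float]


def construct_choices(sensors: Dict[str, float]) -> List[Option]:
    """Somewhat arbitrarily carve a continuous situation into discrete options.

    The sharp dividing lines between options are manufactured here,
    not found in the world.
    """
    return [
        Option("lay low and conserve energy",
               lambda s: -s["energy_cost_of_movement"]),
        Option("shift to smaller prey",
               lambda s: 0.5 * s["small_prey_density"]),
        Option("travel towards the sun",
               lambda s: s["sunlight_gradient"] * s["energy_reserve"]),
    ]


def could_would_should_kernel(options: List[Option],
                              sensors: Dict[str, float]) -> Option:
    """The kernel only ranks a pre-built, discrete menu of choices."""
    return max(options, key=lambda o: o.expected_payoff(sensors))


if __name__ == "__main__":
    readings = {"energy_cost_of_movement": 0.3,
                "small_prey_density": 0.8,
                "sunlight_gradient": 0.2,
                "energy_reserve": 0.9}
    chosen = could_would_should_kernel(construct_choices(readings), readings)
    print("kernel picks:", chosen.label)
```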
In Kripke semantics, philosophers gesture at graph-shaped diagrams where the nodes are called “worlds”, and the edges are some sort of “accessibility” relation between worlds. Chess fits very nicely into those graph-shaped diagrams, with board positions corresponding to “worlds”, and legal moves corresponding to edges. Chess is unlike foraging in that the choices presented to the Could/Would/Should kernel really are out there in the game.
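To make the analogy concrete, here is a small sketch of chess as a Kripke-style frame, using the python-chess library (assuming it’s available; the helper name is my own): worlds are board positions written as FEN strings, and the accessibility relation is “reachable by one legal move”.

```python
# Sketch: chess positions as Kripke "worlds", legal moves as accessibility edges.
# Requires the python-chess package (pip install chess).
import chess


def accessible_worlds(fen: str) -> list:
    """Return the FENs of every position one legal move away from `fen`."""
    board = chess.Board(fen)
    successors = []
    for move in board.legal_moves:
        board.push(move)              # step into the neighbouring "world"
        successors.append(board.fen())
        board.pop()                   # step back to the original world
    return successors


if __name__ == "__main__":
    start = chess.Board().fen()
    neighbours = accessible_worlds(start)
    # There are 20 legal first moves, hence 20 worlds accessible from the start.
    print(len(neighbours), "worlds accessible from the initial position")
```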
I hope this makes it clear in what sense chess, unlike many AI problems, does confront an agent with “Real Options”. Why would you say that chess programs do not have “Physically Irreducible Choices”? Is there any domain that you would say has “Physically Irreducible Choices”?