First, I think you’re doing good, valuable stuff. In particular, I appreciate the skepticism regarding naive realism.
However, your “puzzle piece 1” paragraph seems like it needs shoring up. It claims, at first, that CSAs are “common”, and then strengthens that to “ubiquity” in the last sentence. The concrete examples of CSAs given are “humans, some animals, and some human-created programs.” Couldn’t the known tendency of humans to confabulate explanations of their own reasoning processes explain both the humans and the human-created programs?
My suspicion is that chess has cast a long shadow over the history of artificial intelligence. Humans, confronted with the chess problem, naturally learn a CSA-like strategy of exploring the game tree, and can explain their strategy verbally. Humans who are skilled at chess are celebrated as skilled thinkers. Turing wrote about the possibility of a chess-playing machine in the context of artificial intelligence a long time ago. The game tree really does have Real Options and Real Choices. The counterfactuals involved in considering it do not seem philosophically problematic—there’s a bright line (the magic circle) to cross.
That being said, I agree that we need to start somewhere, and we can come back to this point later, to investigate agents which have other moderately plausible internal structures.
When playing chess, there is a strategy for cashing out counterfactuals of the form “If I make this move”, which involves considering the rules of chess and the assumption that your opponent will make the best available move. The problem is to come up with a method of cashing out counterfactuals that works in situations more general than playing chess. It does not work to just compute logical consequences, because any conclusion can be derived from a contradiction. So a concept of counterfactuals should specify which other facts must be modified or ignored to avoid deriving a contradiction. The strategy used for chess achieves this by specifying the facts that you may consider.
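To make that chess-specific strategy concrete, here is a minimal sketch of cashing out “if I make this move” as the value of the resulting game subtree under best play. A tiny take-away game stands in for chess, since a full move generator would only add noise; the game, its rules, and the values are invented for illustration, but the structure is the same.

```python
# Minimal sketch of cashing out "if I make this move" by game-tree search:
# only the rules of the game and the assumption that the opponent makes the
# best available reply are consulted. The toy game: take 1-3 counters from a
# pile; whoever takes the last counter wins.

def legal_moves(pile):
    """The rules: you may take 1, 2, or 3 counters, if that many remain."""
    return [m for m in (1, 2, 3) if m <= pile]

def apply_move(pile, move):
    return pile - move

def value(pile, my_turn):
    """Value of a position assuming both sides play the best available move:
    +1 if the original player to move wins, -1 if they lose."""
    if pile == 0:
        # Whoever just moved took the last counter and has won.
        return -1 if my_turn else +1
    children = [value(apply_move(pile, m), not my_turn) for m in legal_moves(pile)]
    return max(children) if my_turn else min(children)

def counterfactual(pile, move):
    """Cash out 'if I make this move' as the value of the resulting subtree."""
    return value(apply_move(pile, move), my_turn=False)

for m in legal_moves(10):
    print(f"If I take {m}: value {counterfactual(10, m)}")
# Output: value -1, 1, -1 respectively.
# Taking 2 leaves a multiple of 4, which is a lost position under best play.
```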
I agree completely with your conclusion. However, your claim that “any conclusion can be derived from a contradiction” is provocative. That is the principle of explosion of classical logic; relevant logic does not have that problem.
The game tree really does have Real Options and Real Choices.
Yes, but cash out what you mean. Physical chess programs do not have Physically Irreducible Choices, but do have real choices in some other sense. Specifying that sense, and why it is useful to think in terms of it, is the goal.
The way you capitalize “Physically Irreducible Choices” makes me think that you’re using a technical term. Let me try to unpack the gist as I understand it, and you can correct me.
You can shoehorn a Could/Would/Should kernel onto many problems. For example, the problem of using messy physical sensors and effectors to forage for sustenance in a real-world environment like a forest. Maybe the choices presented to the core algorithm include things like “lay low and conserve energy”, “shift to smaller prey”, or “travel towards the sun”. These choices have sharp dividing lines between them, but there isn’t any such dividing line in the problem. There must be something outside the Could/Would/Should kernel, actively and somewhat arbitrarily CONSTRUCTING these choices out of the continuum.
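To illustrate the gist, here is a hedged sketch of that outside layer. The sensor fields, thresholds, and option names are all invented for the example; the point is that the sharp dividing lines live in this constructing layer, not in the forest.

```python
# Hedged sketch: a layer outside the Could/Would/Should kernel that
# manufactures a discrete menu of options out of continuous sensor readings.
# Every field, threshold, and option name below is made up for illustration.

from dataclasses import dataclass

@dataclass
class Sensors:
    energy_reserve: float   # 0.0 (starving) .. 1.0 (full)
    prey_density: float     # recent sightings per hour, say
    sun_bearing: float      # radians, relative to current heading

def construct_choices(s: Sensors) -> list:
    """Somewhat arbitrarily carve the continuous situation into named options."""
    options = []
    if s.energy_reserve < 0.2:
        options.append("lay low and conserve energy")
    if s.prey_density < 1.0:
        options.append("shift to smaller prey")
    if abs(s.sun_bearing) < 0.5:
        options.append("travel towards the sun")
    return options or ["keep doing what you are doing"]

# The kernel only ever deliberates over whatever menu this layer hands it.
print(construct_choices(Sensors(energy_reserve=0.15, prey_density=0.4, sun_bearing=0.1)))
```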
In Kripke semantics, philosophers gesture at graph-shaped diagrams where the nodes are called “worlds” and the edges are some sort of “accessibility” relation between worlds. Chess fits very nicely into those graph-shaped diagrams, with board positions corresponding to “worlds” and legal moves corresponding to edges. Chess is unlike foraging in that the choices presented to the Could/Would/Should kernel really are out there in the domain.
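For contrast, the graph-shaped picture is easy to write down. In the sketch below, a handful of labelled worlds stands in for the real chess graph (board positions and legal moves), which would be too long to include; “could” is just the modal diamond, “P holds in some accessible world”.

```python
# Hedged sketch of the Kripke-style picture: a frame is a set of worlds plus
# an accessibility relation, and "could P" at a world means P holds in some
# accessible world. The labelled worlds below are a stand-in for the real
# chess graph, where accessibility is "reachable by one legal move".

def could(world, accessible, holds):
    """Diamond: 'could P' at `world` iff P holds in some accessible world."""
    return any(holds(w) for w in accessible(world))

# Toy accessibility relation standing in for "reachable by one legal move".
edges = {"start": ["after e4", "after d4"], "after e4": ["after e4 e5"]}
accessible = lambda w: edges.get(w, [])

print(could("start", accessible, lambda w: w == "after d4"))     # True
print(could("start", accessible, lambda w: w == "after e4 e5"))  # False: two moves away
```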
I hope this makes it clear in what sense chess, unlike many AI problems, does confront an agent with “Real Options”. Why would you say that chess programs do not have “Physically Irreducible Choices”? Is there any domain that you would say has “Physically Irreducible Choices”?
But at this point, you are thinking about the semantics of a formal language, or of logical connectives, which makes the problem crisper than the vague “could” and “would”. Surprisingly, the meaning of formal symbols is still in most cases reduced to informal words like “or” and “and”, somewhere down the road. This is the Tarskian way, where you hide the meaning in the intuitive understanding of the problem.
In order to formalize things, we need to push all the informality together into “undefined terms”. The standard examples are the Euclidean “line” and “point”. It is entirely possible to do proof theory and to write proofs purely as a game of symbols. We do not need to pronounce the mountain /\ as “and”, nor the valley \/ as “or”. A formal system doesn’t need to be interpreted.
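As a hedged illustration (the two rules below are my own toy choices, not a complete calculus), derivations can be checked purely by the shapes of the strings, without ever pronouncing the symbols:

```python
# Hedged sketch: derivation as a pure game of symbols. The rules manipulate
# strings containing the marks /\ and \/ without ever assigning them a
# meaning; pronouncing /\ as "and" is an optional extra step, not part of
# the game. These two rules are toy examples, not a complete proof system.

def extract_left(s):
    """The game permits writing down the part of the string before '/\\'."""
    if "/\\" in s:
        return s.split("/\\", 1)[0]
    return None

def weaken(s, extra):
    """The game permits appending '\\/' followed by any further string."""
    return s + "\\/" + extra

# A two-step derivation, checked entirely by the shapes of the strings:
print(weaken(extract_left("p/\\q"), "r"))   # p\/r
```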
Your sentence “Surprisingly, the MEANING of formal symbols is still in most cases reduced to informal words like “or” and “and” somewhere down the road.” seems to hint at something like “Surprisingly, formal symbols are FUNDAMENTALLY based on informal notions.” or “Surprisingly, formal symbols are COMPRISED OF informal notions.”—I will vigorously oppose these implications.
We step from the real-world things that we value (e.g. stepper motors not banging into things) into a formal system by interpreting it (e.g. as a formal specification for correct motion). (Note: this formalization step is not protected by any arguments about the formal system’s correctness.) After formal manipulations (e.g. some sort of refinement calculus), we step outward again from a formal conclusion to an informal conclusion (e.g. a conviction that THIS time, my code will not crash the stepper motors). (Note: this last step is also unprotected.)
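Here is a hedged sketch of those two unprotected steps, with made-up limits and a made-up “refinement”: the formal middle only guarantees that the check follows from the spec, never that the spec captures what we actually value about the motors.

```python
# Hedged sketch of the inward and outward steps around a formal middle.
# The limits, the predicate, and the plans are all invented for illustration.

MIN_STEP, MAX_STEP = 0, 4000   # inward step (unprotected): worry -> formal spec

def within_limits(position):
    """The formal spec: a position counts as safe iff it lies inside the limits."""
    return MIN_STEP <= position <= MAX_STEP

def plan_is_safe(start, deltas):
    """The formal manipulation: every intermediate position satisfies the spec."""
    position = start
    for d in deltas:
        position += d
        if not within_limits(position):
            return False
    return True

print(plan_is_safe(0, [500, 1500, 1500, 400]))  # True
print(plan_is_safe(0, [500, 4000]))             # False
# Outward step (also unprotected): reading that True as "THIS time my code
# will not crash the stepper motors" is an informal conclusion again.
```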