(...) the term technical is a red flag for me, as it is many times used not for the routine business of implementing ideas but for the parts, ideas and all, which are just hard to understand and many times contain the main novelties.
- Saharon Shelah
As a true-born Dutchman I endorse Crocker’s rules.
For most of my writing, see my short-forms (new shortform, old shortform)
Twitter: @FellowHominid
Personal website: https://sites.google.com/view/afdago/home
Dear Vaniver, thank you for sharing your thoughts. You bring up some important points.
The Intentional Agency Experiment is an idealisation, a model that tries to capture what ‘intention’ is supposed to be. How to translate it to the real world is often ambiguous and sometimes difficult. Similar issues crop up all over the applications of pure science and mathematics: ‘real’ applications usually involve, implicitly and explicitly, many different theoretical frameworks and highly simplified models, as well as practical knowledge, various mechanical tricks, approximation schemes, etc.
When I posit that R has a set of actions, this only makes sense within a certain framework. A rock does not have ‘actions’, and neither does a human within a suitably deterministic framework. So we have to be careful; the setup only works when we have a model of the world that is suitably coarse-grained and allows for actions and counterfactuals. Like causality, intention and agency seem to me intimately tied up with an incomplete, coarse-grained model of the world.
To clear up any misunderstanding: if we have a physical object that, at our level of coarse-graining, may evolve indeterministically (for example, a rock balancing on a mountain peak), we would not say it has possible actions. One could be under the impression that the actions under consideration are actually instantiated, but of course that is not what is meant. In the Intentional Agency Experiment, R is only asked to give an action given a counterfactual (hypothetical) world. If you’d like, you can read ‘potential action’ wherever I write ‘action’. Actions are defined when we have an agent that we can ask to consider hypothetical scenarios and that outputs a certain ‘potential action’ given this counterfactual world.
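To make this concrete, here is a minimal toy sketch of what I have in mind. All names (`World`, `potential_action`, `goal_seeker`) are my own illustrative inventions, not part of the formal setup; the point is only that a ‘potential action’ is the output of a query, with nothing instantiated:

```python
# A coarse-grained world W assigns to each potential action A a
# probability P(G | A) that the goal G is achieved.

from dataclasses import dataclass


@dataclass
class World:
    """A coarse-grained model: P(G | A) for each potential action A."""
    p_goal_given_action: dict[str, float]


def potential_action(agent_utility, world: World) -> str:
    """Ask the agent which action it would take in the (possibly
    counterfactual) world `world`. Nothing is instantiated; the agent
    merely reports the action maximising its expected utility."""
    return max(
        world.p_goal_given_action,
        key=lambda a: agent_utility(a, world.p_goal_given_action[a]),
    )


def goal_seeker(action: str, p_goal: float) -> float:
    """A toy agent that only cares about achieving G."""
    return p_goal


w = World({"climb": 0.9, "wait": 0.1})
print(potential_action(goal_seeker, w))  # -> "climb"
```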
We cannot ask a rock to consider hypothetical scenarios. Neither can we ask an ant to do so. Only a human or a sophisticated robot can. Even a human or a sophisticated robot will usually not consider just the ‘clean’ counterfactual P(G|A)=0, but will also implicitly assume many other facts about the world. When we ask R to consider P(G|A)=0, we don’t want it to assume other facts about W. So one should consider a world where the action A is instantiated but an omnipotent being keeps G from happening at the last possible moment.
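Continuing the toy sketch above, the ‘clean’ counterfactual can be pictured as handing R an exact copy of W in which only P(G|A) has been forced to zero, with every other fact held fixed. Again, this is purely illustrative:

```python
# A sketch of the 'clean' counterfactual: a copy of W identical in
# every respect, except P(G | A) = 0 for the probed action A -- as if
# an omnipotent being blocked G at the last possible moment.

from copy import deepcopy


def clean_counterfactual(world: World, probed_action: str) -> World:
    """Hold every other fact about W fixed; only P(G | A) changes."""
    cf = deepcopy(world)
    cf.p_goal_given_action[probed_action] = 0.0
    return cf


# If R switches its potential action once P(G | "climb") = 0, that is
# evidence that "climb" was chosen *because* it led to G.
cf_world = clean_counterfactual(w, "climb")
print(potential_action(goal_seeker, cf_world))  # -> "wait"
```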
In practice, it is frequently difficult to ask agents to consider hypothetical counterfactuals, and impossible to have them consider ‘clean’ counterfactuals (where all else is held fixed). Nevertheless, just as in economics we assume ceteris paribus, considering highly idealised models and situations often turns out to be a useful tool.
Moreover, we may try to approximate or instantiate the Intentional Agency Experiment in the real world. However, sometimes those approximations may not be the ‘right’ implementation. As mentioned, an ant cannot be asked to consider hypothetical scenarios directly. Yet we may try to ‘approximate’ the piece of information P(G|A)=0 by putting an obstacle in its way. If the ant tries and succeeds in overcoming the obstacle, the conclusion shouldn’t be that ‘it chose a different action’; rather, the correct conclusion is that putting this obstacle in its way was not a sufficient implementation of the mathematical act of asking R to consider P(G|A)=0.
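One could picture the gap between the mathematical act and its physical approximation like this; the tolerance threshold below is an arbitrary illustrative choice, not part of the experiment:

```python
# Putting an obstacle in the ant's way is an attempted physical
# implementation of "set P(G | A) = 0". If the measured success
# probability after the intervention is still well above zero, the
# implementation failed: the ant did not 'choose a different action';
# we simply never realised the intended counterfactual.

def implements_counterfactual(measured_p_goal: float, tol: float = 0.05) -> bool:
    """Did the physical intervention actually drive P(G | A) to ~0?"""
    return measured_p_goal <= tol


# The ant climbs over the obstacle 60% of the time:
print(implements_counterfactual(0.60))  # -> False: not a valid implementation
```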
Yes, in practice situations arise where the implementation of a model is ambiguous, very hard to carry out, etc. These are exactly the problems engineers and experimental physicists deal with, and they are interesting and important problems. But this should not prevent us from constructing highly simplified models.