Part of the motivation was to avoid specifying agents as algorithms, instead specifying them as (more general) propositions about their actions. It’s unclear to me how to combine this with the possibility of other agents reasoning about such agents.
That’s very speculative; I don’t recall any nontrivial results in this vein so far. Maybe the writeup shouldn’t have to wait until this gets cleared up.