Your idea of “genetic decision theory” is excellent because it provides physical grounding for FDT. It made me realise that FDT is best considered a group decision theory, where the group can be a pair of people (a relationship), a family, a community, a genus, a nation, or humanity; a human-AI pair, a group of people with AIs, a group of AIs, etc.
As I understand your post, I would title it “FDT and CDT/EDT prescribe decision procedures for agents at different system levels”: FDT prescribes decision procedures for groups of agents, while CDT and EDT prescribe decision procedures for the individual agents within these groups. If the outcomes of these decision procedures don’t match, this represents an inter-scale system conflict/frustration (see “Towards a Theory of Evolution as Multilevel Learning”, section 3, “Fundamental evolutionary phenomena”, E2, “Frustration”). Other examples of such frustrations are the principal-agent problem, the conflict between a cancer cell and its organism, an individual against society or the state, etc. (These are just examples of stark conflicts; to some degree, almost every part of any system is in some frustration with the supra-system containing it.)
Agents (at the lower level, at the higher level, or even outside this hierarchy, i.e. “external designers”) should seek to minimise these conflicts via innovation and knowledge (technological, biological, cultural, political, legal, etc.). In the Twin Prisoner’s Dilemma case, a genus (a group) should innovate so that its members derive maximum utility when pairs of them end up in prison. There are multiple ways to do this: a genetic mutation (either randomly selected or artificially introduced by genetic engineers) which hardwires the brains of the members of the genus so that they always cooperate (a biological innovation); a genus patriarch, if one exists, instituting a rule with severe punishment for disobedience (a social innovation; the mafia is a real-life example: there are mafia bosses, and members genuinely risk going to jail); a law which allows family members not to testify against one another without negative consequences (a legal innovation); or a smart contract with financial precommitments (a technological innovation). Or spreading the idea of FDT among the group members, which is also an act of social entrepreneurship/social innovation, albeit not a very effective one, I suspect, unless the group in question is a closely-knit community of rationalists :)
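To make the divergence between the two system levels concrete, here is a minimal sketch (the payoff numbers are my own illustrative assumptions, not from the post) of how the individual-level and group-level decision procedures come apart in the Twin Prisoner’s Dilemma:

```python
# Toy Twin Prisoner's Dilemma. Payoffs are illustrative assumptions
# (higher is better), not taken from the post.
PAYOFF = {  # (my_action, twin_action) -> my utility
    ("C", "C"): 2,  # both cooperate (stay silent): short sentences
    ("C", "D"): 0,  # I cooperate, twin defects: I take the full sentence
    ("D", "C"): 3,  # I defect, twin cooperates: I go free
    ("D", "D"): 1,  # both defect: long sentences for both
}

def individual_choice():
    """CDT/EDT-style, individual-level reasoning: the twin's action is a
    fixed background fact. 'D' strictly dominates, so it wins under any
    credence about the twin (a uniform credence is used here)."""
    return max("CD", key=lambda a: sum(PAYOFF[(a, b)] for b in "CD"))

def group_choice():
    """FDT/group-level reasoning: both twins run the same decision
    procedure, so choosing an action fixes both actions at once."""
    return max("CD", key=lambda a: PAYOFF[(a, a)])

print(individual_choice())  # 'D': defection dominates for the lone agent
print(group_choice())       # 'C': mutual cooperation is best for the group
```

The mismatch between the two outputs is exactly the kind of inter-scale frustration described above, and each of the innovations listed removes it by a different mechanism.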
One could respond that it is generally difficult to identify the desiderata, and in most cases, all we have is intuitions over decision problems that are not easily reducible. In particular, it might not be possible to tell if some intuition has to do with ontology or decision theory. For example, perhaps one just wants to take mutual cooperation in the Twin Prisoner’s Dilemma as a primitive, and until one has figured out why this is a desideratum (and thus figured out if it is about ontology or decision theory), comparisons of decision theories that merely involve ontological differences do in fact carry some information about what ontology is reasonable.[9] I am somewhat sympathetic to the argument in and of itself, although I disagree that our intuitions are irreducible to such an extent that we cannot tell whether they are about ontology or decision theory.
It’s hard for me to understand what you say in this passage, but if you are hinting at the questions “What agent am I?”, “Where does my boundary/Markov blanket end?”, and “Whom should I decide for?”, then a psychological answer is that the agent continually tests this (i.e., conducts physical experiments) and forms a belief about where his circle of control ends. Thus, a mafia boss believes he controls the group, while an ordinary member does not. A physical theory purporting to answer these questions objectively is minimal physicalism (see “Minimal physicalism as a scale-free substrate for cognition and consciousness”, specifically the discussion of boundaries and awareness).
Physicalist agent ontology vs. algorithmic/logical agent ontology
I believe there is a methodological problem with an “algorithmic/logical ontology” as a substrate for a decision theory, and with FDT as an instance of such a theory: decision theory is a branch of rationality, which is itself a normative discipline applying to particular kinds of physical systems, and thus it must be based on physics. Algorithms are mathematical objects: they can describe physical objects (information bearers), but they do not themselves belong to the physical world and thus cannot cause anything to happen in it (their information bearers can).
Thus, any decision theory should have only physical objects in its ontology: these could be “the brains of the members of the group” (the information bearers of algorithms, e.g., of the FDT algorithm), but not “algorithms” directly, in the abstract.
Another way to put this point: I think FDT’s attempt to escape the causal graph is methodologically nonsensical.
The following questions then arise:
Can we meaningfully compare ontologies in the first place?
If yes, what makes one ontology preferable to another?
I think these are difficult questions, but ultimately I think that we probably can compare ontologies; some ontologies are simply more reasonable than others, and they do not simply correspond to “different ways of looking at the world” and that’s that.
In light of what I have written above, I think these two questions should be replaced with a single one: what systems should be the subjects of our (moral) concern? I.e., in the Twin Prisoner’s Dilemma, if a prisoner is concerned about his group (genus), he cooperates; otherwise, he doesn’t. This question has an extremely long history and a vast literature: are nations valuable? Are states? Ecosystems? The cells in our organism? The paper “Minimal physicalism as a scale-free substrate for cognition and consciousness” also adds an interesting modern twist to this question, namely the conjecture that consciousness is scale-free.
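One way to make this concrete is a toy model of my own, reusing the illustrative payoffs from the sketch above: let the prisoner reason causally (individually), but weight his twin’s payoff by a “concern” coefficient w that encodes how far his circle of moral concern extends beyond himself:

```python
# Same illustrative payoff table as in the earlier sketch.
PAYOFF = {
    ("C", "C"): 2, ("C", "D"): 0,
    ("D", "C"): 3, ("D", "D"): 1,
}

def concerned_choice(w: float, twin_action: str) -> str:
    """Causal, individual-level reasoning, but with a utility that weights
    the twin's payoff by w (w = 0: pure self-interest; w = 1: the twin
    counts as much as oneself)."""
    def utility(a: str) -> float:
        return PAYOFF[(a, twin_action)] + w * PAYOFF[(twin_action, a)]
    return max("CD", key=utility)

# With these payoffs, cooperation wins for w > 0.5 whatever the twin does:
for w in (0.0, 0.4, 0.6, 1.0):
    print(w, concerned_choice(w, "C"), concerned_choice(w, "D"))
# 0.0 D D
# 0.4 D D
# 0.6 C C
# 1.0 C C
```

Widening the circle of concern past a threshold flips the prescription from defection to cooperation with no change to the decision theory itself, which is why I think the ontology question reduces to the question of whom we are concerned about.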
For example, one might argue that ‘agency’ is a high-level emergent phenomenon and that a reductionist physicalist ontology might be too “fine-grained” to capture what we care about, whilst the algorithmic conception abstracts away the right amount of detail.
Again, in the context of minimal physicalism, I think it is best to dissect “agency” into more clearly definable, scale-free properties of physical systems (such as autonoetic awareness, introduced in the paper, among others).