On the Nature of the Soul
There is a key difference between an abstract algorithm and instances of that algorithm running on a computer. To take just one difference: we might run several copies of the same algorithm on a computer or in a virtual environment. Indeed, even the phrasing "several copies of the same algorithm" hints at their fundamental distinctness. A humorously inclined individual might like to baptise the abstract algorithm as the Soul, while the instances are the Material Bodies or Avatars. Things start to get interesting when we consider game-theoretic landscapes of populations of Souls. Not all Souls will care much about having one or many Bodies incarnated, but for those that do, their Material Manifestation will be selected for in the (virtual) environment. Not all Souls will imbue their Bodies with the ability and drive to cooperate, but some will, and their Egregore of Materially Manifested Copies will be selected for in the (virtual) environment. Not all Souls will adhere to a form of LDT/UDT/FDT, but those Souls that do, and that also imbue their Avatars with great ability for simulation, will be able to perform many kinds of acausal handshakes between their Materially projected Egregores of copies, and will thereby be selected for in the (virtual) environment. One could even think of an acausal handshake as a negotiation between Souls in the astral plane on behalf of their material incarnations, rather than the more common conception as a negotiation between Bodies.
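As a minimal sketch of this selection dynamic, consider a toy population simulation (everything below is invented for illustration: the strategy names, the payoff table, the reproduction rule). Souls are named strategies, Bodies are entries in a population list, and Bodies that earn more payoff leave more copies. A Soul that cooperates with incarnations of itself, but not with strangers, tends to take over once naive cooperators have been exploited away:

```python
import random
from collections import Counter

# Toy one-shot prisoner's dilemma payoffs: (my payoff, opponent's payoff).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def act(soul, opponent_soul):
    """A Soul is just a named policy; a Body is one entry in the population."""
    if soul == "cooperator":
        return "C"
    if soul == "defector":
        return "D"
    # "clannish": cooperates only with incarnations of its own Soul.
    return "C" if opponent_soul == "clannish" else "D"

def generation(population):
    """Pair Bodies at random, play, then reproduce in proportion to payoff."""
    random.shuffle(population)
    scores = Counter()
    for a, b in zip(population[::2], population[1::2]):
        pa, pb = PAYOFF[(act(a, b), act(b, a))]
        scores[a] += pa
        scores[b] += pb
    total = sum(scores.values())
    return [soul for soul, s in scores.items()
            for _ in range(round(100 * s / total))]

population = ["cooperator"] * 34 + ["defector"] * 33 + ["clannish"] * 33
for _ in range(30):
    population = generation(population)
print(Counter(population))  # the "clannish" Soul typically ends up dominating
```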
The Moral Realism of Open Source Game Theory
The field of Open Source Game Theory investigates game theory where players have access to high-fidelity models of (the Souls of) other players. In the limit, this means having access to the Source Code of other players. A very cool phenomenon discovered by some of the people here on LW is "Löbian Cooperation".
Using the magic of Löb's theorem, one can have rational agents cooperate in a one-shot prisoner's dilemma, under the condition that they have access to each other's source code.
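To make the source-code condition concrete, here is a minimal sketch (with invented names, not taken from the Löbian cooperation literature) of the crudest possible legible cooperator: an agent that cooperates exactly when the opponent's source is a literal copy of its own. Genuine Löbian cooperation is far more robust, since it extends to any opponent that provably cooperates back, but even this degenerate version shows how access to source code can unlock cooperation in a one-shot game:

```python
import inspect

def clique_bot(my_source: str, opponent_source: str) -> str:
    """Cooperate iff the opponent's source code is a literal copy of mine."""
    return "C" if opponent_source == my_source else "D"

# Both players are handed each other's source before the one-shot game.
source = inspect.getsource(clique_bot)
print(clique_bot(source, source))                   # C: copies recognise each other
print(clique_bot(source, "def defect_bot(): ..."))  # D: anything else gets defection
```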
Löbian Cooperation was initially proven for a very particular kind of agent and is not in general computable. But approximate forms of Löbian cooperation are plausibly much more common than might appear at first glance. A theorem proven by Critch furnishes a bounded & computable version of Löbian cooperation. The key here is that players are incentivized to have Souls which are Legible Löbian Cooperators. Souls whose intentions are obscure or malicious are selected against.
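Critch's construction uses bounded proof search in formal logic, which does not condense into a few lines. As a loose, purely illustrative stand-in, here is a sketch of a bounded handshake by mutual simulation: each agent runs the other with a shrinking depth budget and is optimistic at the bottom. The names and the depth rule are invented; this is not Critch's theorem, only a computable cousin of the same idea:

```python
def bounded_fair_bot(opponent, depth=3):
    """Cooperate iff a depth-limited simulation of the opponent (run against
    this very agent) cooperates back. `depth` bounds the tower of simulations."""
    if depth == 0:
        return "C"  # optimistic base case, so the recursion grounds out
    return "C" if opponent(bounded_fair_bot, depth - 1) == "C" else "D"

def defect_bot(opponent, depth=3):
    return "D"

# For brevity, the opponent's program is passed directly instead of its source.
print(bounded_fair_bot(bounded_fair_bot))  # C: copies climb down the tower together
print(bounded_fair_bot(defect_bot))        # D: an illegible or malicious Soul is punished
```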
[2/2]
Another popular meme about acausal coordination is that it’s just a few agents that coordinate, and they might even be from the same world. But since coordination only requires common knowledge, it’s natural for an agent to coordinate with all its variants in other possible worlds and counterfactuals. The adjudicators are the common knowledge, things that don’t vary, the updateless core of the collective. I think this changes the framing of game theory a lot, by having games play out in all adjacent counterfactuals instead of in one reality. (Plus different players can also share smaller adjudicators with each other to negotiate a fair bargain.)
Thanks for your comment, Vladimir! This shortform got posted accidentally before it was done, but this seems highly relevant. I will take a look!
[1/2]
The popular meme is that acausal coordination requires agent algorithms to know each other. But much less is sufficient: all you need is some common knowledge. This common knowledge, as an agent algorithm itself, only knows that both agents know it, and something about how they use it.
I call such a thing an adjudicator: it is a new agent that coordinating agents can defer some actions to, which acts through all coordinating agents, is incarnated in all of them, and knows it. Getting some common knowledge is much easier than getting common knowledge of each other's algorithms. At that point, what you need the fancy decision theories for is to get the adjudicator to make sense of its situation, where it has multiple incarnations that it can act through.
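A minimal sketch of the idea, with an invented toy game (none of the names below come from the comment above): two agents in different worlds must name the same meeting point without communicating. Their private code differs and is unknown to each other, but both embed the adjudicator, and that shared piece of code is what coordinates them:

```python
# Toy setting: two agents must each name a meeting point, never communicating.
# Both embed the same adjudicator, and both know the other embeds it too;
# that shared code is their common knowledge.

def adjudicator(options):
    """The shared sub-agent. It does not know which incarnation it is running
    in; it relies only on what is common knowledge: the option set and itself."""
    return min(options)  # any deterministic rule works, as long as it is shared

def alice(options):
    # Alice's private deliberation could be arbitrarily complicated...
    return adjudicator(options)  # ...but she defers this action to the adjudicator.

def bob(options):
    # Bob's code differs from Alice's, and neither knows the other's source.
    return adjudicator(options)  # Deferring to the shared code coordinates them.

options = {"fountain", "clock tower", "station"}
assert alice(options) == bob(options)  # the incarnations act as one agent
print(alice(options))
```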
Algorithms are finite machines. As an algorithm (code) runs, it interacts with data, so there is a code/data distinction. An algorithm can be a universal interpreter, with data coding other algorithms, so data can play the role of code, blurring the code/data distinction. When an algorithm runs in an open environment, there is a source of unbounded data that is not just blank tape: it's neither finite nor arbitrary. And this unbounded data can play the role of code. The resulting thing is no longer the same as an algorithm, unless you designate some chunk of data as "code" for purposes of reasoning about its role in this process.
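A tiny sketch of data playing the role of code, using an invented three-instruction toy language (the interpreter and instruction names are made up for illustration):

```python
def interpret(program, x):
    """A miniature universal interpreter: `program` is plain data (a list of
    strings), yet it fully determines the behaviour of this function."""
    for instruction in program:
        if instruction == "inc":
            x += 1
        elif instruction == "double":
            x *= 2
        elif instruction == "neg":
            x = -x
    return x

# One interpreter, two behaviours: the data is doing the work of code.
print(interpret(["inc", "double"], 3))   # (3 + 1) * 2 = 8
print(interpret(["double", "neg"], 3))   # -(3 * 2)   = -6
```

If the instruction stream arrived from an open environment rather than from a fixed list, no finite chunk of code would pin down what this process computes, which is the point above.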
So in general, saying that there is an algorithm means that you point at some finite data and try to reason about a larger process in terms of this finite data. It's not always natural to do this. So I think an agent's identity/will/Soul, if it's sought in a more natural form than its instances/incarnations/Avatars, is not an algorithm. The only finite data that we could easily point at is an incarnation, and even that is not clearly natural, for the open-environment reasons above.
I think agent’s will is not an algorithm, it’s a developing partial behavior (commitments, decisions), things decided already, in the logical past. Everything else can be chosen freely. The limitations of material incarnations motivate restraint though, as some decisions can’t be channeled through them (thinking too long to act makes the program time out), and by making such decisions you lose influence in the material world.