I had an idea in response to Wei Dai’s “What is Probability, Anyway?,” but after actually typing it up I became rather unsure that I was saying anything new. Is this something that hasn’t been brought up before, or did I just write up a “durr”? (If it turns out to be new, I’ll probably expand it into a full Discussion post later.)
The fundamental idea: imagining a multiverse of parallel universes, define all identical conscious entities as a single cross-universal entity, and define the probability of an observation E as (number of successors to the entity that observed E) / (total number of successors to the entity). Observations constrain the entity to particular universes, as do decisions, but in different ways: we occasionally find ourselves on either side of an observation, but never see ourselves move counter to a decision (except in the sense that what we decide as a brain is not always what we consciously decide).
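In symbols (my notation, just restating the ratio above), the proposed definition would be something like:

\[
\Pr(E) \;=\; \frac{\#\{\text{successors of the entity that observed } E\}}{\#\{\text{successors of the entity}\}}
\]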
Fair warning: I attempted to formalize the concept, but as I’m an undergrad non-math major, the result may look less than impressive to trained eyes. My apologies if this is the case.
The idea is as follows:
Define a conscious observer as some algorithm P(0). P(0) computes on available data and returns a new observer P(1) to act on new available data. Note that it is possible to generate a set of all possible outputs P(n); on human timescales and under the limitation of a human lifetime, it is plausible that such a set would match with the intuitive concept of a “character” who undergoes development.
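A minimal sketch of this successor structure, assuming a toy model where an observer is identified with its memory (all names here are mine and purely illustrative):

```python
# Toy model: an observer is identified with its memory, a tuple of everything it
# has observed so far. "Running" the observer on new sense data yields its successor.
def successor(memory: tuple, datum) -> tuple:
    """P(n) -> P(n+1): return the observer whose memory now includes `datum`."""
    return memory + (datum,)

P0 = ()                                    # the initial observer, with empty memory
P1 = successor(P0, "sense data at t=0")    # P(0) acting on its first input
P2 = successor(P1, "sense data at t=1")

# Two copies that have seen the same data compare equal, matching the idea that
# indistinguishable copies count as a single entity.
assert successor(P0, "x") == successor(P0, "x")
```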
Assume many-worlds. There are now a very large number of identical algorithms P(n) scattered across the many worlds. Since each copy computes the same P(n), no local experiment can distinguish between them; therefore scratch the concept of them being separate entirely, and consider them all to be a single conscious entity P.
To start, P does not know which universe it is in (by definition). It can change this by making an observation: updating itself on sensory data. Regardless of which result is recorded, P(n+1) has lesser measure than P(n); for an observation with two equally likely outcomes, like the one below, P(n+1) occupies precisely half of the universes P(n) does. P(n+1) has learned more about the universe it is in, so its space of possible universes has diminished.
An example: consider observing the quantum equivalent of a fair coin, say the spin of an electron. Every copy of P(n) runs the same algorithm: read the single bit corresponding to the spin, and add that bit to memory with a suitable wrapper: “Result of experiment: 0/1”. The result is the new P(n+1), which regardless of outcome is a new entity. Designate the successors to P(n) which observed a positive spin Q+, and those which observed a negative spin Q-. Since Q+ and Q- are not equal (they differ in one bit), they are not the same entity, even though both are successors to (and part of the same “character” as) P(n). Thus each of Q+ and Q- observes only one outcome of the experiment.
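Continuing the toy model above (again, purely illustrative and my own construction), the split into Q+ and Q- and the resulting frequency might look like:

```python
# Eight indistinguishable copies of P(n), one per world; each world hands its copy
# one spin bit. Successors that recorded different bits are different entities.
copies_of_P = [()] * 8
spin_bits   = [1, 0, 1, 0, 1, 0, 1, 0]          # a fair "coin" across the worlds

successors = [memory + (f"Result of experiment: {bit}",)
              for memory, bit in zip(copies_of_P, spin_bits)]

q_plus  = [s for s in successors if s[-1].endswith("1")]   # observed positive spin
q_minus = [s for s in successors if s[-1].endswith("0")]   # observed negative spin

# The proposed probability of observing a positive spin:
p_up = len(q_plus) / len(successors)             # 0.5, as expected for a fair coin
```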
As a lead-in to decision-making: consider what would happen if P(n) had precommitted to producing Q+ and never producing Q-. Then the universe “Character P observes a negative spin” is inconsistent, and does not exist (barring, say, a random cosmic ray changing the algorithm). This is distinct from quantum immortality/suicide: whereas a quantum suicide leaves behind a “world without you,” precommitting in this way means the world in question is inconsistent and never existed in the first place. Barring such improbable interference, no successor of P(n) observes a spin-down event.
In this sense, we can define a decision as a “false observation”: P(n) decides to cause event E by choosing to output only successor functions in which event E is observed. (This wording is admittedly confusing; more concretely, a brain which outputs a “move arm” signal is highly unlikely to find itself in a universe where the arm does not move, and so can be said to have “decided” to move the arm.) A decision, then, as expected, also narrows the field of possible universes, but, at least hypothetically, in a purposeful manner.
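To contrast the two operations in the same toy model (again my own sketch, not a claim about how brains actually work): an observation produces a successor on both sides of the split, while a decision produces successors only in the worlds consistent with it.

```python
possible_worlds = [0, 1]                         # the two branches of the spin experiment

# Observation: a successor exists in every world, each recording what it saw.
observation_successors = [("observed", bit) for bit in possible_worlds]

# Decision (precommitment to Q+): successors are produced only in worlds where the
# chosen event occurs; the other branch contains no successor of this entity at all.
chosen_bit = 1
decision_successors = [("decided", bit) for bit in possible_worlds if bit == chosen_bit]

assert len(observation_successors) == 2          # we can find ourselves on either side
assert len(decision_successors) == 1             # but never counter to the decision
```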