Those are excellent questions! Thank you for actually asking them, instead of simply stating something like “What you wrote is wrong because...”
Let me take a crack at them, without claiming that “I have solved decision theory, everyone can go home now!”
How do you construct “policy counterfactuals”, e.g. worlds where “I am the type of person who one-boxes” and “I am the type of person who two-boxes”? (This isn’t a problem if the environment is already specified as a function from the agent’s policy to outcome, but that often isn’t how things work in the real world)
“I am a one-boxer” and “I am a two-boxer” are both possible worlds, and by watching yourself work through the problem you learn in which world you live. Maybe I misunderstand what you are saying though.
How do you integrate this with logical uncertainty, such that you can e.g. construct “possible worlds” where the 1000th digit of pi is 2 (when in fact it isn’t)? If you don’t do this then you get wrong answers on versions of these problems that use logical pseudorandomness rather than physical randomness.
As of this moment, both are possible worlds for me. If I were to look up or calculate the 1000th digit of pi, I would learn a bit more about the world I am in (setting aside lower-probability worlds, like the one where I calculate the result wrongly, and so on). Or I might choose not to look it up, and both worlds would remain possible until and unless I gain, intentionally or accidentally (there is no difference; intentions and accidents are not physical things, but human abstractions at the level of the intentional stance), some knowledge about the burning question of the 1000th digit of pi.
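For what it’s worth, resolving that particular bit of uncertainty is cheap if one does choose to look. A minimal sketch, assuming the third-party mpmath library is available (any arbitrary-precision tool would do):

```python
from mpmath import mp

mp.dps = 1010                    # keep a bit more than 1000 decimal places
digits = str(mp.pi)              # "3.14159..." to the current working precision
d1000 = int(digits[2 + 999])     # the digit in the 1000th decimal place
print(d1000, "even" if d1000 % 2 == 0 else "odd")
```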
Can you give an example of a problem “that uses logical pseudorandomness” where simply enumerating worlds would give a wrong answer?
How does this behave in multi-agent problems, with other versions of itself that have different utility functions? Naively both agents would try to diagonalize against each other, and an infinite loop would result.
I am not sure in what way an agent that has a different utility function is at all yourself. An example would be good. My guess is that you might be referring to a Nash equilibrium that is a mixed strategy, but maybe I am wrong.
“I am a one-boxer” and “I am a two-boxer” are both possible worlds, and by watching yourself work through the problem you learn in which world you live. Maybe I misunderstand what you are saying though.
The interesting formal question here is: given a description of the world you are in (like the descriptions in this post), how do you enumerate the possible worlds? A solution to this problem would be very useful for decision theory.
If an agent knows its source code, then “I am a one-boxer” and “I am a two-boxer” could be taken to refer to currently-unknown logical facts about what its source code outputs. You could be proposing a decision theory whereby the agent uses some method for reasoning about logical uncertainty (such as enumerating logical worlds), and selects the action such that its expected utility is highest conditional on the event that its source code outputs this action. (I am not actually sure exactly what you are proposing, this is just a guess).
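If that guess is roughly right, a minimal sketch of the kind of procedure being described might look like the following. This is my own illustration, not MIRI’s LIEDT and not anything proposed in this thread; the world model, `sample_world`, and the Newcomb-style payoffs are placeholder assumptions.

```python
import random

def sample_world(rng):
    """A hypothetical world: a guess at what the agent's source code outputs,
    plus the payoff-relevant facts (here, a Newcomb-style predictor that
    mirrors that output)."""
    one_boxes = rng.random() < 0.5
    return {"agent_output": "one-box" if one_boxes else "two-box",
            "box_filled": one_boxes}

def utility(world, action):
    base = 1_000_000 if world["box_filled"] else 0
    return base + (1_000 if action == "two-box" else 0)

def conditional_expected_utility_choice(actions, n_samples=10_000, seed=0):
    rng = random.Random(seed)
    worlds = [sample_world(rng) for _ in range(n_samples)]
    best, best_eu = None, float("-inf")
    for a in actions:
        # condition on the event "my source code outputs a"
        matching = [w for w in worlds if w["agent_output"] == a]
        if not matching:
            continue  # conditioning on a (near-)impossible event is where the trouble starts
        eu = sum(utility(w, a) for w in matching) / len(matching)
        if eu > best_eu:
            best, best_eu = a, eu
    return best

print(conditional_expected_utility_choice(["one-box", "two-box"]))  # one-box
```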
If the logical uncertainty is represented by a logical inductor, then this decision theory is called “LIEDT” (logical inductor EDT) at MIRI, and it has a few problems, as explained in this post. First, logical inductors have undefined behavior when conditioning on very rare events (this is similar to the cosmic ray problem). Second, it isn’t updateless in the right way (see the reply to the next point for more on this problem).
I’m not claiming that it’s impossible to solve the problems by world-enumeration, just that formally specifying the world-enumeration procedure is an open problem.
Can you give an example of a problem “that uses logical pseudorandomness” where simply enumerating worlds would give a wrong answer?
Say you’re being counterfactually mugged based on the 1000th digit of pi. Omega, before knowing the 1000th digit of pi, predicts whether you would pay up if the 1000th digit of pi is odd (note: it’s actually even), and rewards you if the digit is even. You now know that the digit is odd and are considering paying up.
Since you know the 1000th digit, you know the world where the 1000th digit of pi is even is impossible. A dumber version of you could consider the 1000th digit of pi to be uncertain, but does this dumber version of you have enough computational ability to analyze the problem properly and come to the right answer? How does this dumber version reason correctly about the problem while never finding out the value of the 1000th digit of pi? Again, I’m not claiming this is impossible, just that it’s an open problem.
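For reference, the ex-ante arithmetic behind the standard counterfactual mugging looks like this; the $100 cost and $10,000 reward are the conventional illustrative amounts, not numbers fixed anywhere in this thread.

```python
# Expected value of the two policies, evaluated before anyone knows the digit.
P_EVEN = 0.5           # prior probability that the 1000th digit of pi is even
COST, REWARD = 100, 10_000

def expected_value(pays_when_odd: bool) -> float:
    ev_if_odd = -COST if pays_when_odd else 0       # Omega asks you to pay
    ev_if_even = REWARD if pays_when_odd else 0     # Omega rewards the paying policy
    return P_EVEN * ev_if_even + (1 - P_EVEN) * ev_if_odd

print(expected_value(True), expected_value(False))  # 4950.0 vs 0.0
```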
I am not sure in what way an agent that has a different utility function is at all yourself. An example would be good.
Consider the following normal-form game. Each of 2 players selects an action, 0 or 1. Call their actions x1 and x2. Now player 1 gets utility 9*x2-x1, and player 2 gets utility 10*x1-x2. (This is an asymmetric variant of prisoner’s dilemma; I’m making it asymmetric on purpose to avoid a trivial solution)
Call your decision theory WEDT (“world-enumeration decision theory”). What happens when two WEDT agents play this game with each other? They have different utility functions but the same decision theory. If both try to enumerate worlds, then they end up in an infinite loop (player 1 is thinking about what happens if they select action 0, which requires simulating player 2, but that causes player 2 to think about what happens if they select action 0, which requires simulating player 1, etc).
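As a toy illustration of that regress, here is a caricature in which each player computes a best response by simulating the other player’s actual choice. It is not WEDT or any serious decision theory, just the naive loop described above.

```python
def player1_action():
    # evaluate each action against player 2's (simulated) actual choice
    return max((0, 1), key=lambda x1: 9 * player2_action() - x1)

def player2_action():
    # ...which in turn simulates player 1, and so on forever
    return max((0, 1), key=lambda x2: 10 * player1_action() - x2)

try:
    player1_action()
except RecursionError:
    print("naive mutual simulation never bottoms out")
```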
Thank you for your patience explaining the current leading edge and answering my questions! Let me try to see if my understanding of what you are saying makes sense.
If an agent knows its source code, then “I am a one-boxer” and “I am a two-boxer” could be taken to refer to currently-unknown logical facts about what its source code outputs.
By “source code” I assume you mean the algorithm that completely determines the agent’s actions for a known set of inputs, though maybe calculating these actions is expensive, hence some of them could be “currently unknown” until the algorithm is either analyzed or simulated.
Let me try to address your points in the reverse order.
Consider the following normal-form game. Each of 2 players selects an action, 0 or 1. Call their actions x1 and x2. Now player 1 gets utility 9*x2-x1, and player 2 gets utility 10*x1-x2.
...
If both try to enumerate worlds, then they end up in an infinite loop
Enumerating does not require simulating. It is descriptive, not prescriptive. So there are 4 possible worlds, 00, 01, 10 and 11, with rewards for player 1 being 0, 9, −1, 8, and for player 2 being 0, −1, 10, 9. But to assign prior probabilities to these worlds, we need to discover more about the players. For pure-strategy players, one of these worlds will have probability 1 and the others 0. For mixed-strategy players things get slightly more interesting, since the worlds are parameterized by probability:
Let’s suppose that player 1 picks action 0 with probability p and action 1 with probability 1-p, and player 2 picks action 0 with probability q and action 1 with probability 1-q. Then the probabilities of the four worlds are pq, p(1-q), (1-p)q and (1-p)(1-q), and each world’s contribution to expected utility is, for player 1: 0, 9p(1-q), -(1-p)q, 8(1-p)(1-q), and for player 2: 0, -p(1-q), 10(1-p)q, 9(1-p)(1-q). Among the infinitely many possible worlds there will be one containing the Nash equilibrium, where each player is indifferent to which decision the other player ends up making. This is, again, purely descriptive. By learning more about what strategies the agents use, we can evaluate the expected utility for each one, and, after the game is played, whether once or repeatedly, learn more about the world the players live in. The question you posed
What happens when two WEDT agents play this game with each other?
is in tension with the whole idea of agents not being able to affect the world, only being able to learn about the world they live in. There is no such thing as a WEDT agent. If one of the players is the type that does the analysis and picks the mixed strategy with the Nash equilibrium, they maximize their expected utility regardless of what type of agent the other player is.
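A minimal sketch of the descriptive enumeration above, with p and q treated as facts to be learned about the players rather than as choices (the function names and structure here are mine):

```python
from itertools import product

# Payoffs from the game defined upthread: u1 = 9*x2 - x1, u2 = 10*x1 - x2.
def u1(x1, x2): return 9 * x2 - x1
def u2(x1, x2): return 10 * x1 - x2

def enumerate_worlds(p, q):
    """p = P(player 1 plays 0), q = P(player 2 plays 0)."""
    prob1 = {0: p, 1: 1 - p}
    prob2 = {0: q, 1: 1 - q}
    return [{"actions": (x1, x2),
             "probability": prob1[x1] * prob2[x2],
             "u1": u1(x1, x2),
             "u2": u2(x1, x2)}
            for x1, x2 in product((0, 1), repeat=2)]

worlds = enumerate_worlds(p=0.5, q=0.5)
eu1 = sum(w["probability"] * w["u1"] for w in worlds)
eu2 = sum(w["probability"] * w["u2"] for w in worlds)
print(eu1, eu2)   # 4.0 and 4.5 when p = q = 0.5
```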
About counterfactual mugging:
Say you’re being counterfactually mugged based on the 1000th digit of pi. Omega, before knowing the 1000th digit of pi, predicts whether you would pay up if the 1000th digit of pi is odd (note: it’s actually even), and rewards you if the digit is even. You now know that the digit is odd and are considering paying up.
Since you know the 1000th digit, you know the world where the 1000th digit of pi is even is impossible.
I am missing something… The whole setup is unclear to me. Counterfactual mugging is a trivial problem in terms of world enumeration: an agent who does not pay lives in the world where she has higher utility. It does not matter what Omega says or does, or what the 1000th digit of pi is.
You could be proposing a decision theory whereby the agent uses some method for reasoning about logical uncertainty (such as enumerating logical worlds), and selects the action such that its expected utility is highest conditional on the event that its source code outputs this action. (I am not actually sure exactly what you are proposing, this is just a guess).
Maybe this is where the inferential gap lies? I am not proposing a decision theory. Ability to make decisions requires freedom of choice, magically affecting the world through unphysical top-down causation. I am simply observing which of the many possible worlds has what utility for a given observer.
OK, I misinterpreted you as recommending a way of making decisions. It seems that we are interested in different problems (as I am trying to find algorithms for making decisions that have good performance in a variety of possible problems).
Re top down causation: I am curious what you think of a view where there are both high and low level descriptions that can be true at the same time, and have their own parallel causalities that are consistent with each other. Say that at the low level, the state type is L and the transition function is t_l : L → L. At the high level, the state type is H and the nondeterministic transition function is t_h : H → Set(H), i.e. at a high level sometimes you don’t know what state things will end up in. Say we have some function f : L → H for mapping low-level states to high-level states, so each low-level state corresponds to a single high-level state, but a single high-level state may correspond to multiple low-level states.
Given these definitions, we could say that the high and low level ontologies are compatible if, for each low-level state l, it is the case that f(t_l(l)) ∈ t_h(f(l)), i.e. the high-level ontology’s prediction for the next high-level state is consistent with the predicted next high-level state according to the low-level ontology and f.
Causation here is parallel and symmetrical rather than top-down: both the high level and the low level obey causal laws, and there is no causation from the high level to the low level. In cases where things can be made consistent like this, I’m pretty comfortable saying that the high-level states are “real” in an important sense, and that high-level states can have other high-level states as a cause.
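To make the compatibility condition concrete, here is a minimal sketch under toy assumptions of mine: low-level states are integers, the high-level state is just whether the integer is positive, and the high-level transition is deliberately coarse.

```python
def t_l(l: int) -> int:
    """Deterministic low-level dynamics: count down by one."""
    return l - 1

def f(l: int) -> str:
    """Map a low-level state to its high-level description."""
    return "positive" if l > 0 else "non-positive"

def t_h(h: str) -> set:
    """Nondeterministic high-level dynamics: a positive state may stay positive or not."""
    return {"positive", "non-positive"} if h == "positive" else {"non-positive"}

def compatible(low_states) -> bool:
    """Check f(t_l(l)) in t_h(f(l)) for every low-level state of interest."""
    return all(f(t_l(l)) in t_h(f(l)) for l in low_states)

print(compatible(range(-5, 6)))   # True: the two ontologies agree on these states
```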
EDIT: regarding more minor points: Thanks for the explanation of the multi-agent games; that makes sense although in this case the enumerated worlds are fairly low-fidelity, and making them higher-fidelity might lead to infinite loops. In counterfactual mugging, you have to be able to enumerate both the world where the 1000th digit of pi is even and where the 1000th digit of pi is odd, and if you are doing logical inference on each of these worlds then that might be hard; consider the difficulty of imagining a possible world where 1+1=3.
OK, I misinterpreted you as recommending a way of making decisions. It seems that we are interested in different problems (as I am trying to find algorithms for making decisions that have good performance in a variety of possible problems).
Right. I would also be interested in the algorithms for making decisions if I believed we were agents with free will, freedom of choice, ability to affect the world (in the model where the world is external reality) and so on.
what you think of a view where there are both high and low level descriptions that can be true at the same time, and have their own parallel causalities that are consistent with each other.
Absolutely, once you replace “true” with “useful” :) We can have multiple models at different levels that make accurate predictions of future observations. I assume that in your notation t_l : L → L is an endomorphism on a set of microstates L, and t_h : H → Set(H) is a map from a macrostate type H (what would be an example of this state type?) to a set of macrostates (like what?). I am guessing that this may match up with the standard definitions of microstates and macrostates in statistical mechanics, and possibly with some kind of statistical ensemble. Anyway, your statement is one of emergence: the evolution of microstates maps into an evolution of macrostates, sort of like the way the laws of statistical mechanics map into the laws of thermodynamics. In physics this is known as an effective theory. If so, I have no issue with that. Certainly one can describe, say, gas compression by an external force as a cause of the gas absorbing mechanical energy and heating up. In the same sense, one can talk about emergent laws of human behavior, where a decision by an agent is a cause of change in the world the agent inhabits. So a decision theory is an emergent effective theory where we don’t try to go down to the level of states L, be those single neurons, neuronal electrochemistry, ion channels opening and closing according to some quantum chemistry and atomic physics, or something even lower. This seems to be a flavor of compatibilism.
What I have an issue with is the apparent break of the L→H mapping when one postulates top-down causation, like free choice, i.e. multiple different H’s reachable from the same microstate.
in this case the enumerated worlds are fairly low-fidelity
I am confused about the low/high-fidelity. In what way is what I suggested low-fidelity? What is missing from the picture?
consider the difficulty of imagining a possible world where 1+1=3.
Why would it be difficult? A possible world is about the observer’s mental model, and most models do not map neatly into any L or H that matches known physical laws. Most magical thinking is like that (e.g. faith, OCD, free will).
I would also be interested in the algorithms for making decisions if I believed we were agents with free will, freedom of choice, ability to affect the world (in the model where the world is external reality) and so on.
My guess is that you, in practice, actually are interested in finding decision-relevant information and relevant advice, in everyday decisions that you make. I could be wrong but that seems really unlikely.
Re microstates/macrostates: it seems like we mostly agree about microstates/macrostates. I do think that any particular microstate can only lead to one macrostate.
I am confused about the low/high-fidelity.
By “low-fidelity” I mean the description of each possible world doesn’t contain a complete description of the possible worlds that the other agent enumerates. (This actually has to be the case in single-person problems too, otherwise each possible world would have to contain a description of every other possible world)
Why would it be difficult?
An issue with imagining a possible world where 1+1=3 is that it’s not clear in what order to make logical inferences. If you make a certain sequence of logical inferences with the axiom 1+1=3, then you get 2=1+1=3; if you make a different sequence of inferences, then you get 2=1+1=(1+1-1)+(1+1-1)=(3-1)+(3-1)=4. (It seems pretty likely to me that, for this reason, logic is not the right setting in which to formalize logically impossible counterfactuals, and taking counterfactuals on logical statements is confused in one way or another)
If we fix a particular mental model of this world, then we can answer questions about this model; part of the decision theory problem is deciding what the mental model of this world should be, and that is pretty unclear.
My guess is that you, in practice, actually are interested in finding decision-relevant information and relevant advice, in everyday decisions that you make. I could be wrong but that seems really unlikely.
Yes, of course I do; I cannot help it. But just because we do something doesn’t mean we have the free will to either do or not do it.
I do think that any particular microstate can only lead to one macrostate.
Right, I cannot imagine it being otherwise, and that is where my beef with “agents have freedom of choice” is.
An issue with imagining a possible world where 1+1=3 is that it’s not clear in what order to make logical inferences. If you make a certain sequence of logical inferences with the axiom 1+1=3, then you get 2=1+1=3; if you make a different sequence of inferences, then you get 2=1+1=(1+1-1)+(1+1-1)=(3-1)+(3-1)=4.
Since possible worlds are in the observer’s mind (obviously, since math is a mental construction to begin with, no matter how much people keep arguing over whether mathematical laws are invented or discovered), different people may make a suboptimal inference in different places. We call those “mistakes”. Most of the time people don’t explicitly use axioms, though sometimes they do. Some axioms are more useful than others, of course. Starting with 1+1=3 in addition to the usual remaining set, we can prove that all numbers are equal. Or maybe we end up with a mathematical model where adding odd numbers only leads to odd numbers. In that sense, not knowing more about the world, we are indeed in a “low-fidelity” situation, with many possible (micro-)worlds where 1+1=3 is an axiom. Some of these worlds might even give a useful description of observations (imagine, for example, one where each couple requires a chaperone, so that 1+1 is literally 3).
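For concreteness, here is one route to the “all numbers are equal” conclusion, assuming the usual Peano axioms together with the extra axiom 1+1=3:

```latex
\begin{align*}
2 &= 1+1 = 3 && \text{(the theorem } 1+1=2 \text{ plus the new axiom)}\\
S(S(0)) &= S(S(S(0))) \;\Rightarrow\; 0 = S(0) = 1 && \text{(injectivity of the successor, applied twice)}\\
n &= n \cdot 1 = n \cdot 0 = 0 && \text{(substituting } 1 = 0 \text{, for every } n\text{)}\\
m &= 0 = n && \text{(so any two numbers are equal).}
\end{align*}
```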
If we fix a particular mental model of this world, then we can answer questions about this model; part of the decision theory problem is deciding what the mental model of this world should be, and that is pretty unclear.
In other words, usefulness (which DT to use) depends on truth (which world model to use).
What I have an issue with is the apparent break of the L→H mapping when one postulates top-down causation, like free choice, i.e. multiple different H’s reachable from the same microstate.
If there is indeterminism at the micro level, there is not the slightest doubt that it can be amplified to the macro level, because quantum mechanics as an experimental science depends on the ability to make macroscopic records of events involving single particles.
Amplifying microscopic indeterminism is definitely a thing. It doesn’t help the free choice argument though, since the observer is not the one making the choice, the underlying quantum mechanics does.
Macroscopic indeterminism is sufficient to establish real, not merely logical, counterfactuals.
Besides that, it would be helpful to separate the ideas of dualism, agency, and free choice. If the person making the decision is not some ghost in the machine, then the only thing they can be is the machine, as a total system. In that case, the question becomes whether the system as a whole can choose, could have chosen otherwise, etc.
But you’re in good company: Sam Harris is similarly confused.
But you’re in good company: Sam Harris is similarly confused.
Not condescending in the least :P
There are no “real” counterfactuals, only the models in the observer’s mind, some eventually proven to reflect observations better than others.
It would be helpful to separate the ideas of dualism, agency, and free choice. If the person making the decision is not some ghost in the machine, then the only thing they can be is the machine, as a total system. In that case, the question becomes whether the system as a whole can choose, could have chosen otherwise, etc.
It would be helpful, yes, if they were separable. Free choice as anything other than illusionism is tantamount to dualism.
There are no “real” counterfactuals, only the models in the observer’s mind, some eventually proven to reflect observations better than others.
You need to argue for that claim, not just state it. The contrary claim is supported by a simple argument: if an event is indeterministic, it need not have happened, or need not have happened that way. Therefore, there is a real possibility that it did not happen, or happened differently, and that is a real counterfactual.
It would be helpful, yes, if they were separable. Free choice as anything other than illusionism is tantamount to dualism.
You need to argue for that claim as well.
if an event is indeterministic, it need not have happened, or need not have happened that way
There is no such thing as “need” in Physics. There are physical laws, deterministic or probabilistic, and that’s it. “Need” is a human concept that has no physical counterpart. Your “simple argument” is an emotional reaction.
Your comment has no relevance, because probabilistic laws automatically imply counterfactuals as well. In fact it’s just another way of saying the same thing. I could have shown it in modal logic, too.
Well, we have reached an impasse. Goodbye.