OK, I misinterpreted you as recommending a way of making decisions. It seems that we are interested in different problems (as I am trying to find algorithms for making decisions that have good performance in a variety of possible problems).
Re top down causation: I am curious what you think of a view where there are both high and low level descriptions that can be true at the same time, and have their own parallel causalities that are consistent with each other. Say that at the low level, the state type is L and the transition function is t_l : L → L. At the high level, the state type is H and the nondeterministic transition function is t_h : H → Set(H), i.e. at a high level sometimes you don’t know what state things will end up in. Say we have some function f : L → H for mapping low-level states to high-level states, so each low-level state corresponds to a single high-level state, but a single high-level state may correspond to multiple low-level states.
Given these definitions, we could say that the high and low level ontologies are compatible if, for each low-level state l, it is the case that f(t_l(l)) ∈ t_h(f(l)), i.e. the high-level ontology’s prediction for the next high-level state is consistent with the predicted next high-level state according to the low-level ontology and f.
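For concreteness, here is a minimal sketch of that compatibility check. The toy system (a temperature in tenths of a degree relaxing toward a set point, coarse-grained into "cold"/"ok"/"hot") is invented purely for illustration; only the condition f(t_l(l)) ∈ t_h(f(l)) is taken from the definitions above.

```python
# Toy check of the compatibility condition f(t_l(l)) ∈ t_h(f(l)).
# The concrete system is hypothetical: a temperature (in tenths of a degree)
# that relaxes toward a set point, coarse-grained into "cold"/"ok"/"hot".

L_STATES = range(150, 251)  # low-level states L

def t_l(l: int) -> int:
    """Deterministic low-level transition: drift one step toward 200."""
    if l < 200:
        return l + 1
    if l > 200:
        return l - 1
    return l

def f(l: int) -> str:
    """Map each low-level state to exactly one high-level state."""
    if l < 180:
        return "cold"
    if l > 220:
        return "hot"
    return "ok"

def t_h(h: str) -> set:
    """Nondeterministic high-level transition: the set of possible next high-level states."""
    return {
        "cold": {"cold", "ok"},  # a cold room warms up, possibly into the ok band
        "ok": {"ok"},            # the ok band is absorbing in this toy model
        "hot": {"hot", "ok"},    # a hot room cools down, possibly into the ok band
    }[h]

# Compatibility: for every low-level state l, f(t_l(l)) must be an element of t_h(f(l)).
print(all(f(t_l(l)) in t_h(f(l)) for l in L_STATES))  # prints True
```

If t_h were made too narrow (say t_h("ok") returned {"cold"}), the check would fail, which is the sense in which a high-level ontology can simply be wrong about the low level.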
Causation here is parallel and symmetrical rather than top-down: both the high level and the low level obey causal laws, and there is no causation from the high level to the low level. In cases where things can be made consistent like this, I’m pretty comfortable saying that the high-level states are “real” in an important sense, and that high-level states can have other high-level states as a cause.
EDIT: regarding more minor points: Thanks for the explanation of the multi-agent games; that makes sense although in this case the enumerated worlds are fairly low-fidelity, and making them higher-fidelity might lead to infinite loops. In counterfactual mugging, you have to be able to enumerate both the world where the 1000th digit of pi is even and where the 1000th digit of pi is odd, and if you are doing logical inference on each of these worlds then that might be hard; consider the difficulty of imagining a possible world where 1+1=3.
OK, I misinterpreted you as recommending a way of making decisions. It seems that we are interested in different problems (as I am trying to find algorithms for making decisions that have good performance in a variety of possible problems).
Right. I would also be interested in the algorithms for making decisions if I believed we were agents with free will, freedom of choice, ability to affect the world (in the model where the world is external reality) and so on.
what you think of a view where there are both high and low level descriptions that can be true at the same time, and have their own parallel causalities that are consistent with each other.
Absolutely, once you replace “true” with “useful” :) We can have multiple models at different levels that make accurate predictions of future observations. I assume that in your notation t_l : L → L is an endomorphism within a set of microstates L, and t_h : H → Set(H) is a map from a macrostate type H (what would be an example of this state type?) to a set of possible macrostates (like what?). I am guessing that this may match up with the standard definitions of microstates and macrostates in statistical mechanics, and possibly some kind of statistical ensemble?

Anyway, your statement is one of emergence: the evolution of microstates maps into an evolution of macrostates, sort of like how the laws of statistical mechanics map into the laws of thermodynamics. In physics this is known as an effective theory. If so, I have no issue with that. Certainly one can describe, say, gas compression by an external force as a cause of the gas absorbing mechanical energy and heating up. In the same sense, one can talk about emergent laws of human behavior, where a decision by an agent is a cause of change in the world the agent inhabits. So a decision theory is an emergent effective theory where we don’t try to go down to the level of states L, be those at the level of single neurons, neuronal electrochemistry, ion channels opening and closing according to some quantum chemistry and atomic physics, or even lower. This seems to be a flavor of compatibilism.
What I have an issue with is the apparent break of the L→H mapping when one postulates top-down causation, like free choice, i.e. multiple different H’s reachable from the same microstate.
in this case the enumerated worlds are fairly low-fidelity
I am confused about the low/high-fidelity. In what way is what I suggested low-fidelity? What is missing from the picture?
consider the difficulty of imagining a possible world where 1+1=3.
Why would it be difficult? A possible world is about the observer’s mental model, and most models do not map neatly into any L or H that matches known physical laws. Most magical thinking is like that (e.g. faith, OCD, free will).
I would also be interested in the algorithms for making decisions if I believed we were agents with free will, freedom of choice, ability to affect the world (in the model where the world is external reality) and so on.
My guess is that, in practice, you actually are interested in finding decision-relevant information and advice for the everyday decisions you make. I could be wrong, but that seems really unlikely.
Re microstates/macrostates: it seems like we mostly agree about microstates/macrostates. I do think that any particular microstate can only lead to one macrostate.
I am confused about the low/high-fidelity.
By “low-fidelity” I mean the description of each possible world doesn’t contain a complete description of the possible worlds that the other agent enumerates. (This actually has to be the case in single-person problems too, otherwise each possible world would have to contain a description of every other possible world)
Why would it be difficult?
An issue with imagining a possible world where 1+1=3 is that it’s not clear in what order to make logical inferences. If you make a certain sequence of logical inferences with the axiom 1+1=3, then you get 2=1+1=3; if you make a different sequence of inferences, then you get 2=1+1=(1+1-1)+(1+1-1)=(3-1)+(3-1)=4. (It seems pretty likely to me that, for this reason, logic is not the right setting in which to formalize logically impossible counterfactuals, and taking counterfactuals on logical statements is confused in one way or another)
If we fix a particular mental model of this world, then we can answer questions about this model; part of the decision theory problem is deciding what the mental model of this world should be, and that is pretty unclear.
My guess is that, in practice, you actually are interested in finding decision-relevant information and advice for the everyday decisions you make. I could be wrong, but that seems really unlikely.
Yes, of course I do, I cannot help it. But just because we do something doesn’t mean we have the free will to either do or not do it.
I do think that any particular microstate can only lead to one macrostate.
Right, I cannot imagine it being otherwise, and that is where my beef with “agents have freedom of choice” is.
An issue with imagining a possible world where 1+1=3 is that it’s not clear in what order to make logical inferences. If you make a certain sequence of logical inferences with the axiom 1+1=3, then you get 2=1+1=3; if you make a different sequence of inferences, then you get 2=1+1=(1+1-1)+(1+1-1)=(3-1)+(3-1)=4.
Since possible worlds are in the observer’s mind (obviously, since math is a mental construction to begin with, no matter how much people keep arguing whether mathematical laws are invented or discovered), different people may make a suboptimal inference in different places. We call those “mistakes”. Most times people don’t explicitly use axioms, though sometimes they do. Some axioms are more useful than others, of course. Starting with 1+1=3 in addition to the usual remaining axioms, we can prove that all numbers are equal. Or maybe we end up with a mathematical model where adding odd numbers only leads to odd numbers. In that sense, not knowing more about the world, we are indeed in a “low-fidelity” situation, with many possible (micro-)worlds where 1+1=3 is an axiom. Some of these worlds might even have a useful description of observations (imagine, for example, one where each couple requires a chaperone, so that 1+1 is literally 3).
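To spell out the “all numbers are equal” claim, here is a sketch of one way the collapse goes, assuming the ordinary facts 1+1=2 and n+0=n alongside the new axiom:

2 = 1+1 = 3 (ordinary arithmetic, then the new axiom)
0 = 1 (subtract 2 from both sides)
n = n+0 = n+1 (for every n)

Chaining n = n+1 = n+2 = … then makes any two numbers equal.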
If we fix a particular mental model of this world, then we can answer questions about this model; part of the decision theory problem is deciding what the mental model of this world should be, and that is pretty unclear.
In other words, usefulness (which DT to use) depends on truth (which world model to use).
What I have an issue with is the apparent break of the L→H mapping when one postulates top-down causation, like free choice, i.e. multiple different H’s reachable from the same microstate.
If there is indeterminism at the micro level, there is not the slightest doubt that it can be amplified to the macro level, because quantum mechanics as an experimental science depends on the ability to make macroscopic records of events involving single particles.
Amplifying microscopic indeterminism is definitely a thing. It doesn’t help the free choice argument though, since the observer is not the one making the choice, the underlying quantum mechanics does.
Macroscopic indeterminism is sufficient to establish real, not merely logical, counterfactuals.
Besides that, it would be helpful to separate the ideas of dualism, agency and free choice. If the person making the decision is not some ghost in the machine, then the only thing they can be is the machine, as a total system. In that case, the question becomes whether the system as a whole can choose, could have chosen otherwise, etc.
But you’re in good company: Sam Harris is similarly confused.
But you’re in good company: Sam Harris is similarly confused.
Not condescending in the least :P
There are no “real” counterfactuals, only the models in the observer’s mind, some eventually proven to reflect observations better than others.
It would be helpful to separate the ideas of dualism, agency and free choice. If the person making the decision is not some ghost in the machine, then the only thing they can be is the machine, as a total system. In that case, the question becomes whether the system as a whole can choose, could have chosen otherwise, etc.
It would be helpful, yes, if they were separable. Free choice as anything other than illusionism is tantamount to dualism.
There are no “real” counterfactuals, only the models in the observer’s mind, some eventually proven to reflect observations better than others.
You need to argue for that claim, not just state it. The contrary claim is supported by a simple argument: if an event is indeterministic, it need not have happened, or need not have happened that way. Therefore, there is a real possibility that it did not happen, or happened differently, and that is a real counterfactual.
It would be helpful, yes, if they were separable. Free choice as anything other than illusionism is tantamount to dualism.
You need to argue for that claim as well.
if an event is indeterministic, it need not have happened, or need not have happened that way
There is no such thing as “need” in Physics. There are physical laws, deterministic or probabilistic, and that’s it. “Need” is a human concept that has no physical counterpart. Your “simple argument” is an emotional reaction.
Your comment has no relevance, because probabilistic laws automatically imply counterfactuals as well. In fact it’s just another way of saying the same thing. I could have shown it in modal logic, too.
Well, we have reached an impasse. Goodbye.