I would also be interested in the algorithms for making decisions if I believed we were agents with free will, freedom of choice, ability to affect the world (in the model where the world is external reality) and so on.
My guess is that you, in practice, actually are interested in finding decision-relevant information and relevant advice, in everyday decisions that you make. I could be wrong but that seems really unlikely.
Re microstates/macrostates: it seems like we mostly agree about microstates/macrostates. I do think that any particular microstate can only lead to one macrostate.
I am confused about the low/high-fidelity distinction.
By “low-fidelity” I mean the description of each possible world doesn’t contain a complete description of the possible worlds that the other agent enumerates. (This actually has to be the case in single-person problems too, otherwise each possible world would have to contain a description of every other possible world)
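To make the “low-fidelity” point concrete, here is a toy sketch (all the names are made up for illustration, not anything from the earlier comments): each possible world stores only short summaries of the other worlds, because storing complete copies would force every world to contain every other world, which in turn contains every other world, and so on.

```python
from dataclasses import dataclass, field

@dataclass
class PossibleWorld:
    label: str
    # Low-fidelity: only labels/summaries of the other worlds, not full copies of them.
    summaries_of_others: list[str] = field(default_factory=list)

def enumerate_worlds(labels: list[str]) -> list[PossibleWorld]:
    """Build a set of worlds in which each world only summarizes the others."""
    worlds = [PossibleWorld(label) for label in labels]
    for w in worlds:
        w.summaries_of_others = [o.label for o in worlds if o is not w]
    return worlds

# A "complete" description would instead need each world to embed a full copy of every
# other world, each of which embeds full copies of every other world, without end.
worlds = enumerate_worlds(["A", "B", "C"])
print(worlds[0].summaries_of_others)  # ['B', 'C']
```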
Why would it be difficult?
An issue with imagining a possible world where 1+1=3 is that it’s not clear in what order to make logical inferences. If you make a certain sequence of logical inferences with the axiom 1+1=3, then you get 2=1+1=3; if you make a different sequence of inferences, then you get 2=1+1=(1+1-1)+(1+1-1)=(3-1)+(3-1)=4. (It seems pretty likely to me that, for this reason, logic is not the right setting in which to formalize logically impossible counterfactuals, and taking counterfactuals on logical statements is confused in one way or another)
If we fix a particular mental model of this world, then we can answer questions about this model; part of the decision theory problem is deciding what the mental model of this world should be, and that is pretty unclear.
My guess is that you, in practice, actually are interested in finding decision-relevant information and relevant advice, in everyday decisions that you make. I could be wrong but that seems really unlikely.
Yes, of course I do; I cannot help it. But just because we do something doesn’t mean we have the free will to either do or not do it.
I do think that any particular microstate can only lead to one macrostate.
Right, I cannot imagine it being otherwise, and that is where my beef with “agents have freedom of choice” lies.
An issue with imagining a possible world where 1+1=3 is that it’s not clear in what order to make logical inferences. If you make a certain sequence of logical inferences with the axiom 1+1=3, then you get 2=1+1=3; if you make a different sequence of inferences, then you get 2=1+1=(1+1-1)+(1+1-1)=(3-1)+(3-1)=4.
Since possible worlds are in the observer’s mind (obviously, since math is a mental construction to begin with, no matter how much people keep arguing whether mathematical laws are invented or discovered), different people may make a suboptimal inference in different places. We call those “mistakes”. Most times people don’t explicitly use axioms, though sometimes they do. Some axioms are more useful than others, of course. Starting with 1+1=3 in addition to the usual remaining set, we can prove that all numbers are equal. Or maybe we end up with a mathematical model where adding odd numbers only leads to odd numbers. In that sense, not knowing more about the world, we are indeed in a “low-fidelity” situation, with many possible (micro-)worlds where 1+1=3 is an axiom. Some of these worlds might even have a useful description of observations (imagine, for example, one where each couple requires a chaperone, so there 1+1 is literally 3).
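To spell out one way the “all numbers are equal” derivation can go (just a sketch, keeping the ordinary arithmetic rules alongside the new axiom): from 1+1=2 and 1+1=3 we get 2=3, hence 0=1, and then for any n, n = n·1 = n·0 = 0, so every number equals 0 and all numbers are therefore equal.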
If we fix a particular mental model of this world, then we can answer questions about this model; part of the decision theory problem is deciding what the mental model of this world should be, and that is pretty unclear.
In other words, usefulness (which DT to use) depends on truth (which world model to use).