That works if we have the counterfactuals/stories, but how do we determine what these should be? Assuming we reject modal realism, they don’t directly correspond to anything real, so what should they be?
In the case that the agent generates only one story, it's not really a decision point but rather a reflexive action. We could design a bad agent that, when faced with a genuine decision point, would just reflex through it with some predecided action. So in my mind this is turning into a question of when it is proper to drop out of reflexive action and go through multiple stories, i.e. how we know we are at a decision point.
If legitimate (Bayesian) prediction would place significant probability mass on future outcomes that are widely apart in welfare, then choices could matter. If the uncertainty is due to non-self, the agent should maybe be anxious but should not start to decide. If the uncertainty is due to the state of the agent's actuators, then deciding should start. "Actuator" can be taken in a wide sense where everything that is influenceable by the agent is an actuator. Now there is a danger that modality is just retreating to influenceability. However, I think that close correlation between the core self and a (potential) actuator can make this issue live in the past or present rather than the future. Maybe if the nerves to your arm have just now been cut, you would mistakenly take your hand to be your actuator. But if the hand has until this point obeyed your will, it is prudent to make this assumption, although the agent can't actually know whether the decision will in fact be causally linked with the arm when the decision is carried out.
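The trigger condition above can be put in a minimal sketch (all names and thresholds are hypothetical, not anything from the discussion): deliberation starts only when predicted outcomes are both spread far apart in welfare and that spread hinges on the agent's own actuators.

```python
def welfare_spread(outcomes):
    """Outcomes are (probability, welfare, due_to_actuator) triples."""
    welfares = [w for _, w, _ in outcomes]
    return max(welfares) - min(welfares)

def should_deliberate(outcomes, spread_threshold=10.0, mass_threshold=0.1):
    # If the outcomes are close together in welfare, reflexive action is fine.
    if welfare_spread(outcomes) < spread_threshold:
        return False
    # The uncertainty must be about the agent's own actuators; if the spread
    # comes only from non-self sources, the agent may be anxious but has
    # nothing to decide.
    actuator_mass = sum(p for p, _, due in outcomes if due)
    return actuator_mass >= mass_threshold

# Spread caused entirely by the weather: no decision point, just (maybe) anxiety.
weather = [(0.5, 0.0, False), (0.5, 100.0, False)]
# Same spread, but hinging on what the agent's hand does: start deliberating.
hand = [(0.5, 0.0, True), (0.5, 100.0, True)]

print(should_deliberate(weather))  # False
print(should_deliberate(hand))     # True
```

Note that the sketch can still be fooled exactly as described: if the nerves to the arm were just cut, the `due_to_actuator` flags would be wrong, and the prudent-but-mistaken assumption goes through unchanged.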
What your brain is causally coupled to is subject to correct and incorrect beliefs, and this forms the basis of what options you have or don't have. I guess some of the decision theories could have different rationales for how and why it would be prudent to produce which stories (a brain that ignores its causal linkages and just picks stories in lockstep with its copies could be more steerable by those that create the copies, but I guess even then perceiving the state of the world is a kind of causality bind).
I guess some of the decision theories could have different rationales for how and why it would be prudent to produce which stories
Yeah, my approach is to note that the stories are the result of intuitions and/or instincts that are evolved, and so they aren't purely arbitrary.
Well, it helps that the concept is quite well defined, but I think I was focusing on another aspect.
It seems to me that in a Trivial Decision Theory Problem, the list of stories is generated but then one of the stories hogs all the pros with everything else getting all the cons.
Whereas I was thinking of situations that the agent doesn't perceive as having options, i.e. that there is only one thinkable course of action (and whether this is due to impossibility or lack of imagination is not important).
Even in the "trivial analysis" of the examples of trivial decision problems there is a sense that two things are possible, and a "screw this option" kind of thought is produced that refers to a course of events/action that is not undertaken. This makes it a "decision point". Things that are not decisions do not involve referring to such representations. I have a little trouble coming up with a non-hardware example, but the closest I have got is darkness making humans sleepy (not asleep, but having night-time hormone levels). The human doesn't decide to be sleepy; it's a mechanism in them. Another candidate would be visual field processing. It's more like visual qualia popping into consciousness rather than there being decisions on "how should I see this?"; you don't have to decide to see.
Anyway, the point was that even if they are crappy stories, them being stories saves us from the ontological troubles. Another attempt at laying out the apparent contradiction that needs explaining:
T1: “I could take candy at T20”
T2: “I could take cake at T20”
T3: “I can’t take both candy and cake at the same time”
T4: “If I take candy my teeth will hurt at T25”
T5: “If I take cake my stomach will hurt at T25”
...
T15: “Okay so take the candy”
...
T20: “Soon my teeth will hurt”
T21: “Whoops, I was supposed to pick up the candy”
The “equal modalities” view would say that it's a problem that two mutually exclusive events are designated for T20. But them being stories emphasises that each of those thought-steps is what is supposed to be singularly determined. T20 taking place is a different thing than having a thought with a reference to T20 in it.
If you had a scheme like
T17: “I am hungry I need to eat”
T20: pick up candy
T20: pick up cake
T25: Feel pain in teeth
T25: Feel pain in stomach
And if it actually takes place, then we would in fact be in trouble with determinism unless we are picking up a superposition of candy+cake to eat. And it can seem like the previous planning is about such a world. But that isn't how the classical world is at all. Exercising choice doesn't branch you in that way. But it's not like one of the stories is more privileged than the rest; rather, the whole scheme is inapplicable.
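The distinction can be rendered as a toy sketch (the data structures are illustrative, nothing more): thoughts that *refer* to T20 are themselves events at T1, T2, ..., so they coexist without conflict, while the actual world-trace admits only one event per instant.

```python
# Thought-steps from the first scheme: each has its own unique timestamp,
# so no two thoughts compete for the same instant. Determinism is untouched
# even though two of them designate mutually exclusive events for T20.
thoughts = {
    1: 'I could take candy at T20',
    2: 'I could take cake at T20',
    15: 'Okay so take the candy',
}

def record(trace, t, event):
    # The actual course of events: trying to place both 'pick up candy' and
    # 'pick up cake' at T20 is rejected, because the classical world does not
    # branch into a candy+cake superposition.
    if t in trace:
        raise ValueError(f'two events designated for T{t}')
    trace[t] = event

trace = {}
record(trace, 20, 'pick up candy')
try:
    record(trace, 20, 'pick up cake')
except ValueError as e:
    print(e)  # two events designated for T20
```

The second scheme from the discussion is exactly an attempt to write both lines into `trace`, which is why it is inapplicable rather than one entry being privileged.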
It seems to me that in a Trivial Decision Theory Problem, the list of stories is generated but then one of the stories hogs all the pros with everything else getting all the cons.
That wasn’t how I defined it. I defined it as a decision theory problem with literally one option.
The “screw this” option is available when we don’t insist that an agent is actually in a situation, just that a situation be simulated.
I feel like I am having reading comprehension difficulties.
So the Triviality Perspective claims that you should one-box, but also that this is an incredibly boring claim that doesn’t provide much insight into decision theory.
This passage seems a lot like applying the concept to get an answer out of a two-option scenario.
If you accept the premise of a perfect predictor, then seeing $1 million in the transparent box implies that you were predicted to one-box which implies that you will one-box.
This seems to me to be that one option is deemed possible and the other impossible. Deeming an option impossible is a form of "screw this". So this approach forms opinions about two counterfactuals.
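The point can be made concrete with a toy rendering of the transparent-Newcomb reasoning quoted above (the function and labels are made up for illustration): even the "trivial" analysis first generates both counterfactuals and only then forms an opinion, where "impossible" is one such opinion, about each.

```python
def transparent_newcomb(saw_million):
    stories = ['one-box', 'two-box']   # both counterfactuals get generated
    verdicts = {}
    for story in stories:
        if saw_million and story == 'two-box':
            # A perfect predictor filled the box only if you one-box, so
            # this story is deemed impossible: a 'screw this' verdict.
            verdicts[story] = 'impossible'
        else:
            verdicts[story] = 'possible'
    return verdicts

print(transparent_newcomb(saw_million=True))
# {'one-box': 'possible', 'two-box': 'impossible'}
```

The structure shows why this is still a two-counterfactual analysis rather than a genuinely trivial one-option problem: the "impossible" verdict is attached to a story that had to be represented first.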
A true situation that is so trivial it's not a decision would be something like: "You come across a box. You can take it." If we complicate it even a little bit with "You come across a box. You can take it. You can leave it be.", it's a non-trivial decision problem.
It might be baked into the paradigm of providing a decision theory that it should process and opine about all the affordances available. In a situation given by hypothetical, the affordances are magically fixed. But when in a real situation, part of the cognition is responsible for turning the situation into a decision moment if that is warranted.
If you come across a fork in the road, one agent might process it as a decision problem, "Do I go left or right?", and another might ask "Do I go north or south?". The chopping of the situation into affordances might also be perspective-relative: "Do I go left or right or turn back?" is a way to see three affordances in the same situation where another perspective would see two. An agent that just walks without pondering does not engage in deciding. The "question" of "how many affordances should I see in this situation?" can be answered in a more functional manner or a less functional manner (your navigation might be greatly hampered if you can't turn on roads).
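The perspective-relativity of that chopping can be sketched as follows (the frames and option lists are invented for the example): the same fork yields different affordance sets, including the empty set for the agent that just walks.

```python
def affordances(agent_frame):
    # One and the same fork in the road, chopped up differently depending
    # on the agent's way of framing it.
    frames = {
        'left/right': ['go left', 'go right'],
        'compass': ['go north', 'go south'],
        'with retreat': ['go left', 'go right', 'turn back'],
        'non-pondering': [],   # just walks; no deciding is engaged at all
    }
    return frames[agent_frame]

print(len(affordances('left/right')))     # 2
print(len(affordances('with retreat')))   # 3
print(len(affordances('non-pondering')))  # 0
```

The functional/less-functional contrast then shows up as which frames serve navigation: an agent whose frame never includes "turn back" or side roads is hampered regardless of how well it decides among the options it does see.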
The question of counterfactuals is settled before the problem is formulated, not after it.
I use the term Trivial Decision Theory Problem to refer to circumstances when an agent can only make one decision.