Time is also crucial for thinking about agency. My best short-phrase definition of agency is that agency is time travel. An agent is a mechanism through which the future is able to affect the past.
Yes... except that there is no need to take that literally: agents build models of future states and act on them, even though those models are approximate. If agents could act on the actual future, no one would be killed in an accident or lose on the stock market.
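The point about approximate models can be made concrete with a toy sketch (all names and numbers here are hypothetical, not from the discussion): the agent chooses actions using a noisy internal model of the next state rather than the actual next state, so some fraction of its decisions are wrong even when the true dynamics always favour one action.

```python
import random

# Hypothetical toy: the "world" drifts upward by 3 each step, so "buy"
# is always the right call -- but the agent only sees its noisy model.

def actual_future(price):
    # The real next price (the agent never gets to act on this directly).
    return price + 3

def modelled_future(price, rng):
    # The agent's approximate model: true dynamics plus Gaussian noise.
    return actual_future(price) + rng.gauss(0, 5)

def decide(price, rng):
    # Act on the model, not on the actual future.
    return "buy" if modelled_future(price, rng) > price else "hold"

rng = random.Random(0)
decisions = [decide(100, rng) for _ in range(1000)]
mistakes = decisions.count("hold")  # every "hold" forgoes a sure gain
print(f"{mistakes} of 1000 decisions were wrong")
```

With the noise level chosen above, a noticeable fraction of decisions diverge from what the actual future would justify; an agent that could act on the actual future would never "hold".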
The primary confusing thing about Newcomb’s problem is that we want to think of our decision as coming “before” the filling of the boxes, in spite of the fact that it physically comes after.
Newcomb’s problem isn’t a fact; it’s not an empirical problem to be solved. You should not be inferring “how time works” from it.
This is hinting that maybe we want to understand some other “logical” time in addition to the time of physics
Ok...
Maybe the solution here is to think of there being many different types of “before” and “after,” “cause” and “effect,” etc. For example, we could say that X is before Y from an agent-first perspective, but Y is before X from a physics-first perspective
If an agent is complex enough to build multiple models, or simulations, it can run virtual time forwards or backwards within the simulation. The important point is that a realistic agent loses information every time it goes up another level of virtuality.
Newcomb’s problem isn’t a fact; it’s not an empirical problem to be solved. You should not be inferring “how time works” from it.
Yes, it’s not empirical (currently). It’s a thought experiment, which is mentioned a lot because it’s counter-intuitive. Arguably, being counter-intuitive is an indicator that it is unlikely: obtaining enough information to create such a simulation, paying the energy cost to run it, and finishing the computation before the simulated person dies are all hard.
If an agent is complex enough to build multiple models, or simulations, it can run virtual time forwards or backwards within the simulation. The important point is that a realistic agent loses information every time it goes up another level of virtuality.

What is a level of virtuality?

A simulation is level 1, a simulation of a simulation is level 2, etc.
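One way to picture the information loss per level (a hedged sketch: the rounding rule below is just an illustrative stand-in for whatever coarse-graining a real simulator does) is that each level of virtuality re-describes the level below it with less precision, so the level-0 state cannot be recovered from level 2.

```python
# Illustrative only: model "going up a level of virtuality" as rounding
# the state to fewer decimal places. Any coarse-graining would do; the
# point is that each level discards information about the level below.

def simulate(state, digits):
    # A simulator can only carry a coarser description of its subject.
    return round(state, digits)

world = 3.14159265            # level 0: the actual state
level1 = simulate(world, 4)   # level 1: a simulation of the world
level2 = simulate(level1, 2)  # level 2: a simulation of the simulation

print(world, level1, level2)  # 3.14159265 3.1416 3.14
# Many distinct level-0 states map to the same level-2 description,
# so the climb up levels is not invertible.
```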
obtaining enough information to create such a simulation, paying the energy cost to run it, and finishing the computation before the simulated person dies are all hard
For once, computational complexity isn’t the main problem. The main problem is that to mechanise Newcomb you still need to make assumptions about time and causality, so a mechanisation of Newcomb is not going to tell you anything new about time and causality; it will only echo the assumptions it’s based on.
But time and causality are worth explaining because we have evidence of them.