The property you talk about the universe having is an interesting one, but I don’t think causality is the right word for it. You’ve smuggled an extra component into the definition: each node having small fan-in (for some definition of “small”). Call this “locality”. Lack of locality makes causal reasoning harder, sometimes astronomically harder, but it does not break causal inference algorithms; it only makes them slower.
The time-turner implementation where you enumerate all possible universes and select one that passes the self-consistency test can be represented by a DAG; it’s causal. It’s just that the moment at which the time-traveler lands depends on the whole space of later universes. That doesn’t make the graph cyclic; it’s just a large fan-in. If the underlying physics is discrete and the range of time-turners is limited to six hours, it’s not even infinite fan-in. And if you blur out irrelevant details, as we usually do when reasoning about physical processes, you can even construct manageable causal graphs of events involving time-turner usage, and use them to predict experimental outcomes!
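To make the enumerate-and-select picture concrete, here’s a minimal toy sketch. Everything in it (the integer state space, the update rule, the names) is my own invention for illustration, not anything from the post: a timeline is a sequence of states, a time-turner delivers a value sent back from a fixed number of steps in the future, and we keep only the candidate timelines where what arrived equals what later departs.

```python
# Toy sketch (invented "physics" for illustration): enumerate every candidate
# arrival schedule, run the deterministic forward dynamics under it, and keep
# only the schedules that are self-consistent. Each surviving timeline, drawn
# as a graph over (timestep, arrival) nodes, is still acyclic: the arrivals
# are just extra parents of early nodes, i.e. large fan-in, not cycles.
from itertools import product

N = 5        # size of the toy state space
STEPS = 3    # timeline length
DELAY = 2    # a time-turner sends a message this many steps into the past

def evolve(start, arrivals):
    """Run the toy physics forward under a fixed schedule of arrivals."""
    states = [start]
    for t in range(STEPS):
        states.append((states[-1] + arrivals[t]) % N)
    return states

def consistent_timelines(start):
    """Keep schedules where the value that 'arrived' at step t equals the
    state that actually 'departs' at step t + DELAY; arrivals with no
    in-range departure must be 0 (nobody arrives without departing)."""
    results = []
    for arrivals in product(range(N), repeat=STEPS):
        states = evolve(start, arrivals)
        ok = all(
            arrivals[t] == states[t + DELAY] if t + DELAY <= STEPS
            else arrivals[t] == 0
            for t in range(STEPS)
        )
        if ok:
            results.append(states)
    return results

print(consistent_timelines(0))  # with these toy parameters: [[0, 0, 0, 0]]
```

The point of the sketch is just structural: the selection step reads the whole space of candidate futures, which is enormous fan-in, but every dependency still points in one direction.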
You can imagine universes which violate the small-fan-in criterion in other ways. For example, imagine a game like Conway’s Life on an infinite plane, with a special tile type that copies a randomly-selected other cell at each timestep, where each cell’s probability of being selected falls off with distance. Such cells would also have infinite fan-in, but there would still be a DAG representing the causal structure of that universe. It used to be believed that gravity behaved this way: Newtonian gravitation acts instantaneously at arbitrary distance.
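A finite version of that copy-cell rule can be sketched in a few lines (the grid size, the weighting, and all the names here are my own choices for illustration; the plane is a small torus rather than infinite so the toy stays runnable):

```python
# Toy sketch (invented details): a tiny grid world where one special cell
# copies a randomly chosen other cell each step, with selection probability
# falling off with distance. The special cell's next state depends on the
# ENTIRE previous grid -- huge fan-in -- but the dependency still points
# strictly backward in time, so the causal graph remains a DAG.
import random

SIZE = 8
COPY_CELL = (0, 0)

def distance(a, b):
    """Wrap-around (torus) Manhattan distance, to keep the toy finite."""
    dx = min(abs(a[0] - b[0]), SIZE - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), SIZE - abs(a[1] - b[1]))
    return dx + dy

def step(grid, rng):
    """One timestep: ordinary cells keep their value (a stand-in for any
    local rule); the copy cell samples a source cell with weight
    1 / (1 + distance)^2 and copies its previous value."""
    cells = [c for c in grid if c != COPY_CELL]
    weights = [1.0 / (1 + distance(COPY_CELL, c)) ** 2 for c in cells]
    source = rng.choices(cells, weights=weights, k=1)[0]
    new = dict(grid)               # locality holds for every other cell...
    new[COPY_CELL] = grid[source]  # ...but this one edge has global fan-in
    return new

rng = random.Random(0)
grid = {(x, y): (x + y) % 2 for x in range(SIZE) for y in range(SIZE)}
grid = step(grid, rng)
```

Every node at time t+1 has parents only at time t, so the graph is acyclic no matter how wide the copy cell’s parent set is.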
I think there’s a subtle but important difference between saying that time travel can be represented by a DAG, and saying that you can compute legal time travel timelines using a DAG.
There’s one possible story you can tell about time-turners where the future “actually” affects the past, which is conceptually simple but non-causal.
There’s also a second possible story you can tell about time-turners where some process implementing the universe “imagines” a bunch of possible futures and then prunes the ones that aren’t consistent with the time-turner rules. This computation is causal, and from the inside it’s indistinguishable from the first story.
But if reality is like the second story, it seems very strange to me that the rules used for imagining and pruning just happen to implement the first story. Why does it keep only the possible futures that look like time travel, if no actual time travel is occurring?
The first story is parsimonious in a way that the second story is not, because it supposes that the rules governing which timelines are allowed to exist are a result of how the timelines are implemented, rather than being an arbitrary restriction applied to a vastly-more-powerful architecture that could in principle have much more permissive rules.
So I think the first story can be criticized for being non-causal, and the second can be criticized for being non-parsimonious, and it’s important to keep them in separate mental buckets so that you don’t accidentally commit an equivocation fallacy, using the second story to defend against the first criticism and the first story to defend against the second.
Aside from the amount of fan-in, another difference that seems important to me is that a “normal” simulation is guaranteed to have exactly one continuation. If you do the thing where you simulate a bunch of possible futures and then prune the contradictory ones then there’s no intrinsic reason you couldn’t end up with multiple self-consistent futures—or with zero!
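To see how the pruning picture can leave zero or several survivors, here’s a minimal sketch (the one-bit “physics” and the function names are invented for illustration): the timeline is a single bit sent back in time, and a candidate is self-consistent exactly when the bit that departs, as a function of the bit that arrived, equals the bit that arrived.

```python
# Toy sketch (invented for illustration): a candidate timeline is just the
# bit that "arrived" from the future; it survives pruning when the bit that
# later departs, f(arrived), matches it -- i.e. when it is a fixed point of f.

def self_consistent(f):
    """Return every arrival value that survives the pruning step."""
    return [b for b in (0, 1) if f(b) == b]

# Physics 1: you send back exactly what you received.
print(self_consistent(lambda b: b))      # two consistent timelines: [0, 1]

# Physics 2: you send back the opposite (a grandfather-paradox rule).
print(self_consistent(lambda b: 1 - b))  # no consistent timeline: []
```

Under the first rule the simulator must somehow pick among two equally legal futures; under the second it has nothing left to output at all. A “normal” step-forward simulation never faces either situation.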