It’s possible that reality is even worse than this post suggests, from the perspective of someone keen on using models with an intuitive treatment of time. I’m thinking of things like “relaxed-memory concurrency” (or “weak memory models”), where there is no sequentially consistent ordering of events. The classic example is two programs running in parallel, with X and Y both initially holding 0: [write 1 to X; read Y into R1] || [write 1 to Y; read X into R2]. Under any interleaving of those four operations at least one of the reads must see a 1, yet on real hardware both programs can finish with R1 and R2 both containing 0. What’s going on is that the level of abstraction matters: writes and reads of shared memory are not the atomic operations they appear to be, and if you model them as atomic you’ll be confused when the outcome isn’t consistent with any sequential order.
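For concreteness, here’s a minimal sketch of that litmus test (the “store buffering” pattern) using Rust’s relaxed atomics as a stand-in for raw hardware behavior; the thread structure and names are mine, not from the post. The all-zero outcome is permitted by the memory model, though in practice you’d have to run it in a loop many times to actually observe it:

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::thread;

// Shared locations X and Y, both initially 0.
static X: AtomicU32 = AtomicU32::new(0);
static Y: AtomicU32 = AtomicU32::new(0);

fn main() {
    // Program 1: write 1 to X, then read Y into r1.
    let t1 = thread::spawn(|| {
        X.store(1, Ordering::Relaxed);
        Y.load(Ordering::Relaxed)
    });
    // Program 2: write 1 to Y, then read X into r2.
    let t2 = thread::spawn(|| {
        Y.store(1, Ordering::Relaxed);
        X.load(Ordering::Relaxed)
    });

    let (r1, r2) = (t1.join().unwrap(), t2.join().unwrap());
    // Under sequential consistency at least one of r1, r2 must be 1,
    // but with relaxed orderings r1 == 0 && r2 == 0 is a permitted outcome.
    println!("r1 = {r1}, r2 = {r2}");
}
```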
Total ordering: there’s only one possible ordering of all operations, and everyone knows it. (Or there’s just one agent in a cybernetic interaction loop.)
Sequential consistency: everyone knows the order of their own operations, but not how they are interleaved with others’ operations (as in this post).
Weak memory: everyone knows the order of their own operations, but others’ operations may affect shared resources in ways that aren’t compatible with any interleaving of everyone’s operations.
See e.g. https://www.cl.cam.ac.uk/~pes20/papers/topics.html#relaxed, or this blog post for more: https://preshing.com/20120930/weak-vs-strong-memory-models/.
(Edited a lot from when originally posted)
(For more info on consistency see the diagram here: https://jepsen.io/consistency)
I think the prompt to think about partially ordered time naturally leads one to think about consistency levels, but when it comes to agency it makes more sense to just think about DAGs of events, not reads and writes. Low-level reality doesn’t really have anything that looks like key-value memory. (Although maybe brains do?) And I think there’s no maintaining of invariants in low-level reality, just cause and effect.
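As a toy illustration of that framing (names and structure are my own, purely illustrative), an event can just record its causal parents, with no shared keys being read or written:

```rust
// A minimal event DAG: each event records only which earlier events it
// causally depends on. No reads or writes of shared state, just cause and effect.
#[derive(Debug)]
struct Event {
    id: usize,
    label: &'static str,
    parents: Vec<usize>, // ids of the events this one causally depends on
}

fn main() {
    // Two concurrent causes (ids 0 and 1) with a common downstream effect (id 2).
    let events = vec![
        Event { id: 0, label: "agent acts", parents: vec![] },
        Event { id: 1, label: "environment acts", parents: vec![] },
        Event { id: 2, label: "observation", parents: vec![0, 1] },
    ];
    for e in &events {
        println!("{} ({}) depends on {:?}", e.id, e.label, e.parents);
    }
}
```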
Maintaining invariants under eventual (or causal?) consistency might be an interesting way to think about minds. In particular, I think making minds and alignment strategies work under “causal consistency” (which is the strongest consistency level that can be maintained under latency / partitions between replicas) is an important thing to do. It might happen naturally though, if an agent is trained in a distributed environment.
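One standard piece of machinery behind the causal-consistency picture is a vector clock, which tracks the happened-before partial order between replicas’ events. A minimal sketch (my own illustration, not anything from the post):

```rust
use std::collections::HashMap;

/// A minimal vector clock: one counter per replica, merged by pointwise max.
#[derive(Clone, Debug, Default, PartialEq)]
struct VectorClock(HashMap<String, u64>);

impl VectorClock {
    /// Record a local event at `replica`.
    fn tick(&mut self, replica: &str) {
        *self.0.entry(replica.to_string()).or_insert(0) += 1;
    }

    /// Incorporate a clock received from another replica.
    fn merge(&mut self, other: &VectorClock) {
        for (r, &t) in &other.0 {
            let e = self.0.entry(r.clone()).or_insert(0);
            *e = (*e).max(t);
        }
    }

    /// True if `self` causally precedes `other` (pointwise <=, and not equal).
    fn happened_before(&self, other: &VectorClock) -> bool {
        self != other
            && self.0.iter().all(|(r, &t)| t <= *other.0.get(r).unwrap_or(&0))
    }
}

fn main() {
    let mut a = VectorClock::default();
    let mut b = VectorClock::default();
    a.tick("A");   // an event at replica A
    b.merge(&a);
    b.tick("B");   // B saw A's event first, so B's event is causally after it
    assert!(a.happened_before(&b));

    let mut c = VectorClock::default();
    c.tick("C");   // an event at C that never saw A: concurrent with A's event
    assert!(!a.happened_before(&c) && !c.happened_before(&a));
}
```

A causally consistent store only applies an update after everything that happened-before it has been applied, so this is the sense in which an agent’s invariants would have to survive seeing only a causal prefix of the world’s events.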
So I think “strong eventual consistency” (CRDTs) and causal consistency are probably more interesting consistency levels to think about in this context than the really weak ones.
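For the CRDT flavor, here’s a sketch of the simplest state-based CRDT, a grow-only counter (again my own minimal illustration). The point is that merge is commutative, associative, and idempotent, so replicas that have seen the same updates converge regardless of message order or duplication, which is what “strong eventual consistency” buys you:

```rust
use std::collections::HashMap;

/// A grow-only counter (G-Counter): each replica increments only its own slot,
/// and merging takes the pointwise max of the slots.
#[derive(Clone, Debug, Default)]
struct GCounter(HashMap<String, u64>);

impl GCounter {
    fn increment(&mut self, replica: &str) {
        *self.0.entry(replica.to_string()).or_insert(0) += 1;
    }

    /// Merge: commutative, associative, idempotent, so delivery order doesn't matter.
    fn merge(&mut self, other: &GCounter) {
        for (r, &n) in &other.0 {
            let e = self.0.entry(r.clone()).or_insert(0);
            *e = (*e).max(n);
        }
    }

    fn value(&self) -> u64 {
        self.0.values().sum()
    }
}

fn main() {
    let mut a = GCounter::default();
    let mut b = GCounter::default();
    a.increment("A");
    a.increment("A");
    b.increment("B");

    // Exchange state in either order; both replicas converge to the same value.
    let mut a_merged = a.clone();
    a_merged.merge(&b);
    let mut b_merged = b.clone();
    b_merged.merge(&a);
    assert_eq!(a_merged.value(), 3);
    assert_eq!(b_merged.value(), 3);
}
```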