An additional distinction between contemporary and future alignment challenges is that the latter concerns the control of physically deployed, self-aware systems.
Alex Altair has previously highlighted that such systems will (microscopically) obey time-reversal symmetry[1], unlike the information processing of a classical computer program. A recent paper published in Entropy[2] touches on the idea that a physical learning machine (the "brain" of a causal agent) is an "open irreversible dynamical system" (pp. 12-13).
[1] Altair A. "Consider using reversible automata for alignment research." 2022.
[2] Milburn GJ, Shrapnel S, Evans PW. "Physical Grounds for Causal Perspectivalism." Entropy. 2023; 25(8):1190. https://doi.org/10.3390/e25081190
The purpose of reversible automata is simply to model the fact that our universe is reversible, is it not? I don't see how that bears on the question at hand.
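For concreteness, here is a toy illustration of what "reversible automaton" means (my own sketch, not taken from either post): a second-order cellular automaton built via Fredkin's construction, which is reversible by design because `next = step(current) XOR previous` can always be inverted as `previous = step(current) XOR next`, regardless of whether `step` itself is invertible.

```python
def step(cells):
    # Any local update rule, even an irreversible one; here, XOR of the two
    # neighbors on a ring. Reversibility comes from the construction below,
    # not from this rule.
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def forward(prev, curr):
    # Fredkin construction: next state = step(curr) XOR prev.
    return curr, [s ^ p for s, p in zip(step(curr), prev)]

def backward(curr, nxt):
    # Exact inverse: prev state = step(curr) XOR next.
    return [s ^ x for s, x in zip(step(curr), nxt)], curr

prev, curr = [0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0, 0]
p, c = prev, curr
for _ in range(10):          # run the dynamics forward 10 steps
    p, c = forward(p, c)
for _ in range(10):          # then run them backward 10 steps
    p, c = backward(p, c)
assert (p, c) == (prev, curr)  # the full history is recovered: no information is lost
```

The point of such a model is exactly that, as in microscopic physics, no information is destroyed as the system evolves; irreversibility (and entropy increase) only appears at a coarse-grained level of description.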