Reading your posts gives me the impression that we are both loosely pointing at the same object, but with fairly large differences in terminology and formalism.
While computing exact counterfactuals runs into problems with chaos, I don’t think this poses a problem for my earlier proposal. It isn’t necessary that the AGI be able to compute the counterfactual entropy production exactly, only that it make a reasonably accurate approximation of it.[1]
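To make the “reasonably accurate approximation” point concrete, here is a minimal sketch (the toy dynamics, function names, and entropy proxy are all my own illustrative assumptions, not part of the proposal): instead of tracking the exact counterfactual trajectory, which chaos makes infeasible, the agent Monte-Carlo-estimates the expected difference in entropy production between acting and a do-nothing baseline.

```python
import numpy as np

def entropy_production(trajectory):
    """Hypothetical proxy: entropy produced along one simulated
    trajectory (here, total absolute state change as a stand-in
    for dissipation)."""
    return float(np.sum(np.abs(np.diff(trajectory))))

def rollout(policy, rng, steps=100):
    """Toy chaotic/stochastic 1-D dynamics: the policy nudges the
    state each step, plus noise."""
    state, traj = 0.0, [0.0]
    for _ in range(steps):
        state += policy(state) + rng.normal(scale=0.1)
        traj.append(state)
    return np.array(traj)

def approx_counterfactual_entropy(policy, baseline, n_samples=1000, seed=0):
    """Monte Carlo estimate of the *difference* in expected entropy
    production between acting (policy) and not acting (baseline).
    Individual trajectories diverge chaotically, but the averaged
    difference can still be estimated to useful accuracy."""
    rng = np.random.default_rng(seed)
    diffs = [
        entropy_production(rollout(policy, rng))
        - entropy_production(rollout(baseline, rng))
        for _ in range(n_samples)
    ]
    return np.mean(diffs), np.std(diffs) / np.sqrt(n_samples)

# Example: an agent that damps the state vs. a counterfactual idle agent.
mean_diff, stderr = approx_counterfactual_entropy(
    policy=lambda s: -0.1 * s,   # the agent acts on the world
    baseline=lambda s: 0.0,      # counterfactual: agent does nothing
)
```

The point of the sketch is that the chaotic divergence of individual trajectories washes out in the averaged difference, and that expectation is the only quantity the proposal needs.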
I think I’m in agreement with your premise that the “constitutionalist form of agency” is flawed. The absence of entropy (or indeed any internal physical resource management) from the canonical LessWrong agent-foundations model is clearly a major issue. My loose thinking on this is that Bayesian networks are not a natural description of the physical world at all, although they are an appropriate tool for how certain very special types of open systems (“agentic optimizers”) model the world.
I have had thoughts similar to those motivating your post on the “causal backbone”. I believe “the heterogenous fluctuations will sometimes lead to massive shifts in how the resources are distributed” is something we would see in a programmable, unbounded optimizer[2]. But I’m not sure that modelling this as there being a “causal backbone” is the description that will cut reality at the joints, due to difficulties with the physicality of causality itself (see work by Jenann Ismael).
[1] You can construct pathological environments in which the error in the AGI’s computation (with limited physical resources) of the counterfactual entropy production is arbitrarily large (and the resulting behaviour arbitrarily bad). I don’t see this as a flaw in the proposal, as the possibility of constructing pathological environments exists for any safe AGI proposal.
[2] Ctrl-F “Goal like correlations” here.