Maybe a shot in the dark, but there might be some connection with that paper from a few years back, Causal Entropic Forces (more accessible summary). They define “causal path entropy” as, roughly, the number of different paths you can go down starting from a given state, which might be related to, or the same as, what you call “power”. And they calculate some examples of what happens if you maximize it (in a few different contexts, all continuous rather than discrete), and get fun things like (what they generously call) “tool use”. I’m not sure that paper adds anything important conceptually that you don’t already know, but I just wanted to point it out; PM me if you want help decoding their physics jargon. :-)
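If it helps make the analogy concrete, here is a minimal sketch of the quantity they maximize. This is my own toy discrete construction, not the paper’s continuous path-integral formulation: in a (hypothetical) deterministic gridworld where every in-bounds action sequence is a distinct path, causal path entropy at a state reduces to the log of the number of feasible trajectories of some horizon tau starting there.

```python
import math

# Hypothetical toy setup: a 5x5 gridworld with four movement actions.
GRID = 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left

def num_paths(state, tau):
    """Count distinct feasible trajectories of length tau from `state`."""
    if tau == 0:
        return 1
    total = 0
    for dx, dy in ACTIONS:
        nxt = (state[0] + dx, state[1] + dy)
        if 0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID:  # stay in bounds
            total += num_paths(nxt, tau - 1)
    return total

def causal_path_entropy(state, tau):
    """Discrete analogue: log of the number of open futures."""
    return math.log(num_paths(state, tau))

# The center keeps more futures open than a corner, so it scores higher;
# that is why this quantity smells like "power".
print(causal_path_entropy((2, 2), 4))  # center
print(causal_path_entropy((0, 0), 4))  # corner
```

Maximizing this pushes the system toward states that keep the most options open, which is the behavior the paper dresses up as “intelligence”.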
Yeah, this is a great connection which I learned about earlier in the summer. I think this theory explains what’s going on when they say
> They argue that simple mechanical systems that are postulated to follow this rule show features of “intelligence,” hinting at a connection between this most-human attribute and fundamental physical laws.
Basically, since near-optimal agents tend to move toward states of high power, and near-optimal agents are generally intelligent ones, observing an agent moving toward a high-power state is Bayesian evidence that it is intelligent. However, as I understand it, they have the causation backwards: it isn’t physical laws → power-seeking and intelligence; rather, intelligent goal-directed behavior tends to produce power-seeking.
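To make the evidential point concrete, here’s a quick Bayes calculation with made-up numbers (purely illustrative; none of these values come from the post or the paper):

```python
# Illustrative priors and likelihoods; the exact numbers are invented.
p_intelligent = 0.1            # prior that the agent is intelligent
p_power_if_intelligent = 0.9   # near-optimal agents tend to seek power
p_power_if_not = 0.3           # base rate of power-seeking otherwise

# P(moves toward high power) by the law of total probability
p_power = (p_power_if_intelligent * p_intelligent
           + p_power_if_not * (1 - p_intelligent))

# Posterior after observing a move toward a high-power state
posterior = p_power_if_intelligent * p_intelligent / p_power
print(round(posterior, 2))  # 0.25, up from the 0.1 prior
```

The observation raises the posterior without saying anything about which way the causation runs, which is exactly the distinction above.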
This is great work, nice job!
I agree 100% with everything you said.