In a universe without causal structure, I would expect an intelligent agent that uses an internal causal model of the universe to never work.
Of course you can’t really have an intelligent agent with an internal causal model in a universe with no causal structure, so this might seem like a vacuous claim. But it still has the consequence that
P(intelligence is possible | causal universe) > P(intelligence is possible | acausal universe).
If intelligence is seen as optimization power, in the sense of agents that constrain possible futures to their benefit, then it seems clear that the returns to intelligence are zero or negative in acausal universes, and so intelligent agents should be less likely to arise there than in universes where the returns are positive.
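The inequality above implies, by Bayes' rule, that observing intelligence should shift credence toward living in a causal universe. A minimal sketch of that update, with purely illustrative made-up probabilities:

```python
# Illustrative Bayesian update: treating the existence of intelligence
# as evidence for a causal universe. All numbers here are hypothetical.

def posterior_causal(prior_causal, p_int_given_causal, p_int_given_acausal):
    """Return P(causal | intelligence) via Bayes' rule."""
    prior_acausal = 1 - prior_causal
    # Total probability of observing intelligence across both hypotheses.
    evidence = (p_int_given_causal * prior_causal
                + p_int_given_acausal * prior_acausal)
    return p_int_given_causal * prior_causal / evidence

# If intelligence is likelier given causal structure, the posterior on
# "causal universe" exceeds the prior.
prior = 0.5
post = posterior_causal(prior, p_int_given_causal=0.1, p_int_given_acausal=0.01)
print(post > prior)  # True
```

The exact numbers don't matter; any choice with P(intelligence | causal) > P(intelligence | acausal) makes intelligence evidence for causal structure.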
That seems plausible but I must admit that I don’t know enough details about possible “acausal universes” to be particularly confident.