If intelligence is seen as optimization power (in the sense of agents constraining possible futures to their benefit), then it seems clear that the rewards to intelligence are zero or negative in acausal universes, and so intelligent agents should be less likely to arise there than in universes where intelligence yields positive rewards.
That seems plausible, but I must admit that I don’t know enough about possible “acausal universes” to be particularly confident.