If intelligence is understood as optimization power, in the sense of agents that constrain possible futures to their benefit, then it seems clear that the rewards to intelligence are zero or negative in acausal universes, and so intelligent agents should be less likely to arise there than in universes where intelligence yields positive rewards.