More likely hypotheses suggest themselves, such as: doing things that would be good in (potentially unlikely) worlds where value is more easily influenced, amassing resources to better understand whether value can be influenced, or having behavior controlled in apparently random (but quite likely extremely destructive) ways that give a tiny probabilistic edge.
An important point that I think doesn't have a post highlighting it: an AI that only cares about moving one dust speck by one micrometer on some planet in a distant galaxy, if that planet satisfies a very unlikely condition (and thus most likely isn't present anywhere in the universe), will still take over the universe on the off-chance that such a planet is there.
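A toy expected-utility sketch of why (all numbers are made-up assumptions, purely for illustration): even if the probability that the special planet exists is astronomically small, an AI whose utility depends only on that event prefers whichever policy maximizes its chance of moving the speck conditional on the planet existing, and taking over the universe does exactly that.

```python
# Toy expected-utility comparison. All probabilities below are illustrative
# assumptions, not claims about any real system.
p_planet_exists = 1e-40      # chance the very unlikely condition holds anywhere
p_move_if_takeover = 0.99    # chance of moving the speck given control of the universe
p_move_if_passive = 1e-6     # chance of moving it while staying passive

# The AI's utility is 1 if the speck gets moved, 0 otherwise; nothing else counts.
eu_takeover = p_planet_exists * p_move_if_takeover
eu_passive = p_planet_exists * p_move_if_passive

# Both expected utilities are astronomically small, but takeover still strictly
# dominates, so an expected-utility maximizer with this utility function takes over.
print(eu_takeover > eu_passive)  # True
```

The point is that the tiny probability scales both options equally, so it never changes which policy wins; only the conditional payoff does.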