Can you come up with problem scenarios that don’t involve interactions with other intelligent agents that have a significant speed advantage or disadvantage?
Existential risks come to mind—even if you ignore the issue of astronomical waste—as setting a lower bound on how stupid lifeforms like us can afford to be.
(If we were some sort of interstellar gas cloud or something which could only be killed by a nearby supernova or collapse of the vacuum or other really rare phenomena, then maybe it wouldn’t be so bad to take billions of years to develop in the absence of other optimizers.)