I didn’t like this article at all. Loads of references and mathematics, all founded on an absurd premise: that unspecified AGIs and AGI-supported humanity would prefer not to harvest the future light cone just because they can think really fast. Most possible mind designs simply don’t care.
Facing the future, it appears that looking outwards into space is looking into the past; the future lies in inner space, not outer space.
If there is just one agent that disagrees, all the navel-gazing AIs in the world become irrelevant.
That unspecified AGIs and AGI-supported humanity would prefer not to harvest the future light cone just because they can think really fast
See my other replies: the argument is based on economic rate of return (risk-adjusted doubling time, i.e. exponential growth of your population/intelligence/GDP). Interstellar expansion has a terrible growth rate compared to riding Moore’s Law. The case for expansion also assumes that space is empty.
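The rate-of-return comparison is easy to make concrete. Here is a minimal sketch (the ~2-year doubling time and 0.5c expansion speed are illustrative assumptions, not figures from the article): compute riding Moore’s Law compounds exponentially, while expansion at a fixed fraction of c only acquires resources in proportion to the swept volume of space, roughly t^3, so its effective doubling time keeps stretching out as time passes.

```python
# Sketch of the growth-rate comparison (illustrative parameters, not from
# the article): Moore's Law compounds exponentially, while expansion at a
# fixed fraction of c sweeps resources proportional to volume, i.e. ~t^3.

MOORE_DOUBLING_YEARS = 2.0  # assumed Moore's Law doubling time
EXPANSION_SPEED = 0.5       # assumed expansion speed, as a fraction of c

def moore_multiplier(t_years: float) -> float:
    """Compute growth after t years of exponential doubling."""
    return 2.0 ** (t_years / MOORE_DOUBLING_YEARS)

def expansion_multiplier(t_years: float) -> float:
    """Resources swept by an expanding sphere of radius v*t (arbitrary
    units): proportional to t**3, so growth is polynomial, not exponential."""
    return (EXPANSION_SPEED * t_years) ** 3

for t in (10.0, 50.0, 100.0):
    # Effective doubling time of a t^3 process at time t is t*(2^(1/3) - 1):
    # solving (t + d)^3 = 2*t^3 gives d ≈ 0.26*t, which keeps lengthening.
    d = t * (2 ** (1 / 3) - 1)
    print(f"t={t:5.0f} yr | Moore x{moore_multiplier(t):.3g} | "
          f"expansion x{expansion_multiplier(t):.3g} | "
          f"expansion doubling time ≈ {d:.1f} yr")
```

Under these illustrative numbers, by year 100 the exponential term is ahead by roughly ten orders of magnitude, which is the sense in which expansion has a “terrible growth rate.”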