It seems to me that the two scariest risks we face are nanotechnology and AI, and that compared to them, nothing else matters at all. So our best strategy would probably be to try to understand those risks as thoroughly as possible, and to throw vast amounts of money at research to see whether there’s any way to deal with them safely.
So if that’s our best plan, and I say it into the magic idea-translating time phone, maybe something good will come out at Archimedes’ end, like ‘try to understand the Roman Republic and how you might counter it before it shreds you’. And who knows? If the Greeks and the Carthaginians had understood the nature of the threat, they might have been able to do something about it.
But it’s not clear that preventing the rise of the Roman Empire would have optimized the modern world.