An implication of AI risk is that we, right now, stand at the fulcrum of human history.
Lots of historical people also claimed that they stood at that unique point in history … and were just wrong about it. But my world model also makes that self-important implication (in a specific form), and the meta-level argument for epistemic modesty isn’t enough to nudge me off of the fulcrum-of-history view.
If you buy that, it’s our overriding imperative to do what we can about it, right now. If we miss this one, ~all of future value evaporates.
For me, the implication of standing at the fulcrum of human history is to… read a lot of textbooks and think about hairy computer science problems.
That conclusion seems odd enough to be quite distinct from the ones most other people in human history have drawn.
If the conclusion were “go over to those people, hit them on the head with a big rock, and take their women & children as slaves” or “acquire a lot of power”, I’d be way more careful.