Rohin’s opinion: I first want to note my violent agreement with the notion that a major scary thing is “consequentialist reasoning”, and that high-impact plans require such reasoning, and that we will end up building AI systems that produce high-impact plans.
What major scary thing will be next?
“Newton’s flaming laser sword”?
Doing experiments?
Testing theories before making major plans based on them?
Understanding the world?
A convergent drive or instrumental goal, not to ‘avoid dying’ but to create backups and keep other copies running? Eventually running on a variety of stacks or substrates to avoid correlated risks (like solar storms and EMPs)? Spreading to other planets so that planetary risks aren’t existential risks?