When I read this paper, the risks seem, on balance, to be increased rather than decreased by greater human intelligence.
The median LWer’s guess for when the singularity will occur is 2067.
Improving math education is a problem I’d really like to work on, but it seems likely to be harmful unless I can include an effective anti-existential-risk disclaimer. Even if I’m guaranteed to be relatively unsuccessful, I don’t want a big part of my life’s work devoted to marginally increasing the probability that something really bad will happen.
I skimmed the paper. It’s interesting. Thanks.
I still don’t think you should curtail your math instruction, even if you do have a large impact on the course of humanity by making millions of people more capable at math. If anything, I think you’d increase our resilience against existential hazards.
But you’re welcome to raise awareness of X on the side. I would have liked to hear my math teachers bring up the topic; it’s gripping stuff.