Sorry, I sort of asked this question in a thread here, but I’m interested enough in answers that I’m going to ask it again.
Does it seem like a good idea for the long-term future of humanity for me to become a math teacher or producer of educational math software? Will having a generation of better math and science people be good or bad for humanity on net?
If I included a bit about existential risks in my lecturing/math software, would that cause people to take them more seriously or less seriously?
In the unlikely event that you end up significantly improving the amount of mathematical expertise in humanity, you should be very pleased with yourself.
It’s definitely not a bad cause. You should do it if it’s something that would engage and satisfy you. If you turn out not to be suited for it, no harm done; find something else you’re good at.
So you’re not much afraid that people will develop artificial general intelligence before figuring out how to make it friendly?
It’s fine to include some low-probability catastrophe risk management in your overall planning. But are you considering all the possible catastrophes, or just one particular route to unfriendly AI (one unlocked by your marginal recruitment of mathematically capable tinkerers)?
Wouldn’t furthering our mathematical and technological prowess as soon as possible mitigate many catastrophes? See the movie Armageddon, for instance :)
Maybe general AI is inevitable even at current computing power, so long as a small, persistent cult keeps at it for a few hundred years. If so, I think having more mathematical facility gives a better chance of managing the result.
Real, all-of-a-sudden, self-optimizing-at-increasing-speed general AI, with limits way above human, is 99.999% not going to be implemented in the next 10 years, at least. The only reason I limit such a confident prediction to 10 years (and don't feel comfortable predicting, say, 50 years forward) is the possibility of apparent limits in computing hardware being demolished by some unforeseen breakthrough.
When I read this paper, the risks seem to me to be increased rather than decreased, on balance, by greater human intelligence.
The median LWer’s guess for when the singularity will occur is 2067.
Improving math education is a problem I’d really like to work on but it seems likely to be harmful unless I can include an effective anti-existential-risk disclaimer. Even if I’m guaranteed to be relatively unsuccessful, I don’t want a big part of my life’s work to be devoted to marginally increasing the probability that something really bad will happen.
I skimmed the paper. It’s interesting. Thanks.
I still don’t think you should curtail your math instruction, even if you do have a large impact on the course of humanity in the sense that millions of people end up more capable at math. If anything, I think you’d increase our resilience against existential hazards.
But you’re welcome to evangelize awareness of existential risk on the side. I would have liked to hear my math teachers raise the topic—it’s gripping stuff.