Being 18, I’m in a pretty similar situation to yours, and I’ve looked into the whole problem extensively.
The things you mention as talents are actually personal qualities; programming, on the other hand, counts as a skill (though if you’re particularly good at it, it counts as a talent).
What you need to look for is your Element (to use Sir Ken Robinson’s concept). While it is all well and noble to pick what you do on the basis of mitigating existential risk, your best bet is to find something you’re both good at and passionate about (your talent), and then work out how to use that to reduce existential risk. Not only will you end up happier for it, you’ll also be more effective at saving the world.
Of course, that doesn’t mean other areas of skill aren’t worth developing; so long as they’re useful, they’re good. But when it comes to your career, go with what you’re most passionate about. If you’re serious about learning maths, take a look at http://www.khanacademy.org/ ; the videos there cover a great deal, from the very basics up to calculus, matrices, and the like.
If I were you, I wouldn’t worry too much about which existential risk is greatest, but rather about which one you are best suited to negate. In fact, if you tried to counter a potential horror that requires a certain area of expertise while being only average at it, you could end up making things even worse. Eliezer’s posts about how he nearly destroyed the world are a good example of that.
noble to pick what you want to do on the basis of mitigating existential risk your best bet is to find something you’re both good at and passionate about (your talent).
I agree. If there were an existential risk (ExR) from asteroid impacts, yet we couldn’t do much about it, I wouldn’t bother trying to solve it. So I’m already of the mind that I should use my comparative advantage to reduce net ExR as much as possible. You could use an equation, I guess: X (how much attention a risk deserves) = Y (probability of the risk occurring) × Z (probability of my actions reducing Y).
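As a minimal sketch of that X = Y × Z rule (the risk names and every probability below are invented placeholders, not real estimates):

```python
# Toy sketch of the prioritization rule X = Y * Z described above.
# Y = probability the risk occurs; Z = probability my actions reduce Y.
# All numbers here are made-up placeholders for illustration only.

def attention(p_risk: float, p_my_impact: float) -> float:
    """X: how much attention a risk deserves, given Y and Z."""
    return p_risk * p_my_impact

risks = {
    "asteroid impact": attention(0.01, 0.001),      # real risk, little personal leverage
    "engineered pandemic": attention(0.02, 0.01),
    "unfriendly AGI": attention(0.05, 0.02),
}

# Work on whichever risk scores highest for *you* -- the ranking depends
# on Z (your comparative advantage), not just on Y (the raw risk).
best = max(risks, key=risks.get)
print(best)
```

The point of the sketch is that a large Y with a tiny Z (asteroids, in this toy example) can still score below a risk where you have real leverage.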
As for happiness, I think that if I didn’t choose a career that let me reduce ExR effectively, I would feel guilty; I’d be happy to go to work if I knew it was for the good of humanity. I also think other things, like meditation, Stoicism, and cognitive enhancement, can provide happiness if you have a boring day job (like high school).
Well, I’m pretty sure I’m not going to be an FAI programmer. If one assumes that the biggest bottleneck to AGI is large insights from geniuses, then I think the best way to help AGI development would not be to try to become a genius (if you aren’t one), but to work on education reform so that more geniuses end up working on FAI, or to advertise the problem to geniuses.
If one assumes that the biggest bottleneck to AGI is large insight from geniuses,
Don’t be too sure about that. You might want to take a look at Little Bets: How Breakthrough Ideas Emerge from Small Discoveries. My own experience when dealing with difficult problems has been that an incremental approach can be much more productive than sitting at my desk thinking real hard about the parts I don’t know how to do. I just go ahead and do the parts that I can do, and let the harder problems percolate in my mind. By the time I’ve got the manageable parts done, I’ve learned enough about the problem, turned vague abstractions into concrete realizations, and reduced the uncertainty enough that the hard parts often fall right into place. Consider a probability distribution over variables x1, …, xn, with the variables representing solutions to parts of a problem. P(xn) may be quite diffuse and spread out, but P(xn | x1, x2, …, xk) may be much more concentrated.
I don’t expect the problem of FAI to fall easily, but any incremental advance can help set the stage for the definitive advances.
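The conditional-concentration point above can be illustrated with a toy simulation. The additive model below (the hard part x_n equals the sum of k tractable parts plus independent noise) is an assumption chosen purely for illustration, not a claim about how research problems actually decompose:

```python
import random
import statistics

random.seed(0)
K = 5  # number of tractable subproblems (arbitrary choice)

def draw():
    """One 'problem instance': K easy parts plus an independent noise term."""
    parts = [random.gauss(0, 1) for _ in range(K)]
    x_n = sum(parts) + random.gauss(0, 1)  # hard part depends on the easy ones
    return parts, x_n

samples = [draw() for _ in range(20_000)]

# Marginal P(x_n): variance is about K + 1 = 6 -- diffuse.
marginal_var = statistics.pvariance([x for _, x in samples])

# Conditional P(x_n | x_1, ..., x_k): only the noise remains, variance about 1.
conditional_var = statistics.pvariance([x - sum(p) for p, x in samples])

print(round(marginal_var, 1), round(conditional_var, 1))
```

Solving the k manageable parts collapses most of the spread in x_n, which is the "hard parts fall into place" effect in miniature.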
Good point! I also appreciate that a lot of your ideas are referenced to books. Assuming you’re right (I think I’d have to know much more about AGI to evaluate the claim with any certainty), the next obvious question is: what’s limiting progress? Are there not enough people making small discoveries (a good reason to go into the field)? Are they not sharing their discoveries (I don’t know how you’d fix that one)? Is there not enough funding for their work (a good reason to stay out, unless you plan on outcompeting everybody because you feel that’s what you can do best to help)?