Name something that you do not do but should be told you ought...
Nada, I am already told enough, more than I can handle.
Name something that you do not do but wish you would be told you ought...
I wish more people would just tell me to relax and have some fun, and back it up with arguments in favor of doing so.
Name something that you do not do but should do...
Existential risk mitigation. I don’t do it because of uncertainty, and because of the psychological distress caused by the fear of having no more time to do what (based on naive introspection) I would like to do. And I am too selfish to give away large amounts of money that I fear I might need for myself at some point.
Name something that you do not do but wish you did...
Be more focused on improving my education. I am too distracted by all the shiny and interesting things out there, by problems I can’t solve, and by real-life needs.
Name something that you do not do but are told you ought...
There are many irrational examples here. I am told not to care so much about my health and to actually have some fun by drinking lots of alcohol. I don’t drink alcohol, partly because of health concerns but also because I simply don’t like it.
Would you be happy if someone told you to do something fun in a way which, in your eyes, is likely to reduce existential risk?
Yes. That would probably mean that I could either learn something by reducing existential risks, or that it wouldn’t eat up too many resources.
Try starting a German meetup group.
You can learn things with them, and do other fun things.
If you get people into the group who haven’t been exposed to these ideas but would be interested in existential risk mitigation, then you have a non-zero impact. If you work with them to try to make money and donate some of it, then you will donate without eating up too many resources.
(That being said, I do think that your sending letters to AI researchers is probably providing helpful information.)
After talking to jsalvatier offline for a bit about the issue of wanting to do AI research but being afraid to fail, I’ve come to the realization that I haven’t fully analyzed why I am afraid of it. I’ll need to take more time to break it down, but my first guess is simply lost time. But it’s very likely that if I try to do the research, I’ll find out quickly whether I am at all capable of doing it, so the time wasted would be a year or two, which is not too bad IMO.
You said you are interested in existential-risk reduction. I don’t know if you mean specifically AI research by it, but if you do, we should try to solve this problem together. I am sure there are other people in a similar situation (and there will be even more in the future), so it makes sense to create some kind of standard way of answering the question “am I cut out for this?”
I have the same problem: I know I should work on existential risk mitigation, but I don’t. (Not directly, at least. Ideally I would do FAI-related research.)
My fear is that I won’t be good at it and/or I won’t like it. It’ll end up as a waste of my time and money, and I won’t contribute very much, or, maybe even worse, I’ll waste other people’s time and resources.
My impression—and admittedly this is just an impression—is that there are few enough x-risk workers that someone doesn’t have to be especially good at doing x-risk work to be able to do more good on the margin as an x-risk worker compared to doing other things. This seems to be especially true for people who are good at motivating themselves (so that they don’t need a lot of managerial support) and who are willing to do things that aren’t particularly glamorous (but it’s probably true even when neither of those is the case, so don’t use that as an excuse).
“I might not like it” sounds like a fully general argument to me, and there are cheap tests you can do on the other issues. I suggest sending a resume to SIAI or someplace similar; if they think you wouldn’t be useful there, you’re at least no worse off than you are now.
Thanks, that’s a point I’ve also been considering, but I don’t know how true it is. I am going to (and have already started to) talk to people at SIAI and see if they could use my help for anything.
Is not working on it directly preventing you from doing anything else to reduce existential risk (donations, etc.)?
No, I still donate and I am doing my best to raise the sanity waterline. But I think AdeleneDawner brought up a good point: there might not be enough x-risk workers to justify not doing it if I can.
You should relax and have some fun. It could perhaps be justified by increased productivity and motivation on other tasks, but the real reason is so that you can enjoy yourself. Recursive functions need base cases; maximizing the value of action needs to be balanced by actually acting (a minimal sketch of that analogy follows below). You have needs, and one (subgroup) is the need to relax and have some fun.
If you would like more specifics, please let me know.
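To make the recursion analogy concrete, here is a minimal Python sketch. Everything in it (the function name, the numbers, the "one fun day a week" rule) is a purely illustrative assumption, not anything anyone in this thread proposed:

    def days_well_spent(days_left, fun_every=7):
        # Base case: the horizon is exhausted; stop recursing and return.
        # Without this branch the recursion never terminates -- just as
        # optimizing future action without ever acting never yields any
        # enjoyed value.
        if days_left <= 0:
            return 0
        # Recursive case: one day in every `fun_every` is spent on fun.
        fun_today = 1 if days_left % fun_every == 0 else 0
        return fun_today + days_well_spent(days_left - 1, fun_every)

    print(days_well_spent(30))  # -> 4 fun days in a 30-day horizon

The point is only structural: the `days_left <= 0` branch is the base case, and deleting it turns a value-returning computation into one that never pays out.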