That part really shouldn’t be necessary (even if it may be rational, conditional on some assumptions). In the event that you do decide to devote your time to helping, whether for dignity or whatever else, you should be able to get funding to cover most reasonable forms of upskilling and/or seeing-if-you-can-help trial period.
That said, I think step one would be to figure out where your comparative advantage lies (80,000 Hours folk may have thoughts, among others). Certainly some people should be upskilling in ML/CS/Math (though an advanced degree may not be the most efficient route), but there are other ways to help.
I realize this doesn’t address the deciding-what’s-true aspect. I’d note there that I don’t think much detailed ML knowledge is necessary to follow Eliezer’s arguments on this. Most of the ML-dependent parts can be summarized as: [we don’t know how to do X], [we don’t have any clear plan that we expect will tell us how to do X] (and similarly for Y and Z), and [either X, Y, or Z is necessary for safe AGI].
Beyond that, I think you only need a low prior on our bumping into a good solution while fumbling in the dark and a low prior on sufficient coordination, and things look quite gloomy. Probably you also need to throw in some pessimism on getting safe AI systems to fundamentally improve our alignment research.
Hi Joe! I wonder if you have any pointers as to how to get help? I would like to try to help while being able to pay for rent and food. I think right now I may not be articulate enough to write grant proposals and get funding, so I think I could also use somebody to talk to, to figure out what’s the most high-impact thing I could do.
I wonder if you’d be willing to chat / know anybody who is?
Something like the 80,000 Hours career advice seems like a good place to start, or finding anyone who has a good understanding of the range of possibilities (mine is a bit too narrowly slanted towards technical AIS).
If you’ve decided on the AIS direction, then AI Safety Support is worth a look—they do personal calls for advice, and have many helpful links.
That said, I wouldn’t let the idea of “grant proposals” put you off. The forms you’d need to fill out for the LTFF are not particularly complicated, and they do give grants for e.g. upskilling—you don’t necessarily need a highly specific/detailed plan.
If you don’t have a clear idea where you might fit in, then the advice links above should help. If/when you do have a clear idea, don’t worry about whether you can articulate it persuasively. If it makes sense, then people will be glad to hear it—and to give you pointers (e.g. fund managers).
E.g. there’s this from Evan Hubinger (who helps with the LTFF):
if you have any idea of any way in which you think you could use money to help the long-term future, but aren’t currently planning on applying for a grant from any grant-making organization, I want to hear about it. Feel free to send me a private message on the EA Forum or LessWrong. I promise I’m not that intimidating :)
Also worth bearing in mind as a general principle that if almost everything you try succeeds, you’re not trying enough challenging things. Just make sure to take negative outcomes as useful information (often you can ask for specific feedback too). There’s a psychological balance to be struck here, but trying at least a little more than you’re comfortable with will generally expand your comfort zone and widen your options.
Thank you so much! I didn’t know 80k does advising! In terms of people with knowledge on the possibilities… I have a background and a career path that don’t end up giving me a lot of access to people who know, so I’ll definitely try to get help from 80k.
This was very encouraging! Thank you.