This seems great. Would it be ok to join once or twice to see if your group and method, and whatever topic you decide to zoom into, are a good fit for me?
I’ve started to collect various AI Safety initiatives here, so that we are aware of each other and hopefully can support each other. Let me know if you want to be listed there too.
Also, people who are interested in joining elriggs' group might also be interested in the AI Safety discussion days that JJ and I are organising. Same topic, different format.
FLI has made a map of all AI Safety research (or all that they could find at the time). Would this be a useful resource for you? I’m not linking it directly, because you might want to think for yourself first, before becoming too biased by others’ ideas. But it seems like it would at least be a useful tool at the literature review stage.
Thanks for reaching out. I’ve sent you the links in a DM.
I would like to be added to the list of various AI Safety initiatives.
I’m looking forward to this month’s AI Safety discussion day (I saw yours and Vanessa’s post about it in Diffractor’s Discord).
I’ll start reading others’ maps of Alignment in a couple of days, so I would appreciate the link from FLI; thank you. Gyrodiot’s post has several links related to “mapping AI”, including one from FLI (Benefits and Risks of AI), but it seems like a different link than the one you meant.
The FLI map probably refers to The Landscape of AI Safety and Beneficence Research, also in my list but credited to its main author, Richard Mallah.
Yes, that one.