Thanks for reaching out. I’ve sent you the links in a DM.
I would like to be included in the list of various AI Safety initiatives.
I’m looking forward to this month’s AI Safety discussion day (I saw your and Vanessa’s post about it in Diffractor’s Discord).
I’ll start reading others’ maps of Alignment in a couple of days, so I would appreciate the link from FLI; thank you. Gyrodiot’s post has several links related to “mapping AI”, including one from FLI (Benefits and Risks of AI), but it seems to be a different link than the one you meant.
The FLI map probably refers to The Landscape of AI Safety and Beneficence Research, which is also in my list but credited to its main author, Richard Mallah.
Yes, that one.