I would argue that people in AI Governance (the corporate “Responsible AI” kind) should also make an effort to learn more about AI Safety. I know because I am one of them, and I do not know many others who have AI Safety as a key research topic on their agenda.
I am currently working on resources to improve AI Safety literacy amongst policy people, tech lawyers, compliance teams, etc.
Stress-Testing Reality Limited | Katalina Hernández | Substack
My question to you is: any advice for the rare few in AI Governance who are here? I sometimes post in the hope of getting technical insights from AI Safety researchers. Do you think it’s worth the effort?
I don’t have the technical AI Safety skillset myself. My guess: show up with specific questions when you need a technical answer; try to make a couple of specific contacts you can run big plans past, or reach out to if you unexpectedly get traction; and use your LessWrong presence to establish a pointer to you and your work, so people looking for what you’re doing can find you. That seems worthwhile. After that, maybe crosspost when it’s easy? Zvi might be a good example, since it’s relatively easy for him to crosspost between LessWrong and Substack, though he’s closer to keeping up with incoming news than to building resource posts for the long term.
If I type “lawyer AI safety” into LessWrong’s search, your post comes up, which I assume is something you want.
Thank you very much for your advice! It actually helps, and thanks for running that search too :).