I’ve been working on a response to the NTIA request for comments on AI Accountability over the last few months. It’s likely that I’ll also submit something to the OSTP request.
I’ve learned a few useful things from talking to AI governance and policy folks. Some of it is fairly intuitive but still worth highlighting (e.g., avoid jargon, remember that the reader doesn’t share many of the assumptions that people in AI safety take for granted, and remember that people have many different priorities). Some of it is less intuitive (e.g., what actually happens to the responses, how long your response should be, how important it is to say something novel, and what policymakers are actually looking for).
If anyone is looking for advice, feel free to DM me.