Interesting how many of these are “democracy / citizenry-involvement” oriented. Strongly agree with 18 (whistleblower protection) and 38 (simulate cyber attacks).
20 (good internal culture), 27 (technical AI people on boards), and 29 (three lines of defense) sound good to me; I’m excited about 31 if mandatory interpretability standards exist.
42 (on sentience) seems pretty important, but I don’t know what it would mean in practice.
Assuming you mean the second 42 (“AGI labs take measures to limit potential harms that could arise from AI systems being sentient or deserving moral patienthood”): I also don’t know what labs should do, so I asked an expert yesterday and will reply here if they know of good proposals...
This is super late, but I recently posted: Improving the Welfare of AIs: A Nearcasted Proposal
Thanks!