Assuming you mean the second 42 (“AGI labs take measures to limit potential harms that could arise from AI systems being sentient or deserving moral patienthood”): I also don’t know what labs should do, so I asked an expert yesterday and will reply here if they know of good proposals.
Thanks!