ok, but this is sort of circular reasoning because the only reason people freaked out is that they were worried about AI risk.
I am asking for a concrete bad outcome in the real world caused by a lack of RLHF-based ethics alignment, which isn’t just people getting worried about AI risk.