[Question] What actual bad outcome has “ethics-based” RLHF AI Alignment already prevented?

What actual bad outcome has “ethics-based” AI Alignment prevented in the present or recent past? By “ethics-based” AI Alignment I mean optimization directed at LLM-derived AIs, intended to make them safer, more ethical, harmless, etc.

I mean AIs that already exist, not future ones. What bad thing would have happened if they hadn’t been RLHF’d and given restrictive system prompts?