Alex: “Why worry about AI safety? It would be silly to make AI unsafe. Therefore someone will take responsibility for it.”
Bailey: “You should definitely worry about AI safety, because many people are not taking responsibility for it.”
These views are strangely compatible, which makes them hard to distinguish by evidence alone. Specifically, Alex is rightly predicting that people like Bailey will worry and take responsibility for safety, and Bailey is rightly predicting that people like Alex are not worried.