Very good point. Safety in engineering is often summarized as “nothing bad happens” — without anthropomorphic nuance and without “intent”: an engineered system can simply go wrong. “AI Safety” seems to gloss over or ignore such facets. Is that because “AI Safety” is cast as the pursuit of creating a “reasonable” A(G)I?