The context here is a human dealing with a human. A useful heuristic, then, is to ask yourself “will what I write/say cause someone to lose social status?” and judge potential offensiveness from the answer your brain returns; this may prove more accurate than trying to judge offense directly.
Naturally, if you were actually trying to develop an artificial intelligence that needed to refrain from offending people, it probably wouldn’t be as easy as just ‘calculating the objective status change’ and basing the response on that.
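As a toy illustration of why the naive approach falls short, here is a minimal sketch of “calculating the status change and basing the response on that.” Everything in it is hypothetical: the function names, the cue list, and the idea that status change can be scored by keyword counting are all invented for illustration, and the brittleness of such a scorer is precisely the point the paragraph above makes.

```python
# A deliberately naive sketch of the "calculate the status change" approach.
# The cue list and scoring rule are hypothetical stand-ins, not a real model
# of social status; sarcasm, context, and tone are all invisible to it.

STATUS_LOWERING_CUES = {"stupid", "lazy", "incompetent", "liar"}


def predicted_status_change(message: str) -> int:
    """Crude proxy: count words that tend to lower the listener's status."""
    words = (word.strip(".,!?") for word in message.lower().split())
    return -sum(word in STATUS_LOWERING_CUES for word in words)


def might_offend(message: str) -> bool:
    """Flag the message if the predicted status change is negative."""
    return predicted_status_change(message) < 0


if __name__ == "__main__":
    print(might_offend("That was a stupid mistake."))        # True
    print(might_offend("Thanks for the thorough review!"))   # False
    # A plainly status-lowering sentence with no cue words slips through,
    # which is why a keyword-level status calculation can't be the whole story:
    print(might_offend("Everyone saw you fail, as usual."))  # False
```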