I agree that “offense is all about status” is probably too simple and that a more complex, refined theory can have greater explanatory/predictive value. On the other hand, the simplicity has a benefit: it’s easier to apply when you’re addressing an audience. It’s probably easier to ask “will what I write/say cause someone to lose social status?” (with a broad view of what constitutes status) than to try to maintain more detailed models of the audience’s minds (ETA: except in situations where your social brain works well and does the latter for you automatically).
If you disagree, can you try to distill your theory into some practical advice for writers?
The context here is a human dealing with a human. So it’s a useful heuristic to ask “will what I write/say cause someone to lose social status?” and, based on the answer your brain returns, judge whether it could be considered offensive (this may be a more accurate way of judging offense than trying to judge it directly).
Naturally, if you were actually trying to develop an artificial intelligence that needed to refrain from offending people, it probably wouldn’t be as easy as just ‘calculating the objective status change’ and basing the response on that.
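To make that caveat concrete, here is a toy sketch of what mechanizing the heuristic might look like. Everything in it is invented for illustration (the function names, the audience model, the threshold); the point is that the whole procedure reduces to one stub that nobody knows how to fill in:

```python
# Toy sketch of the "status-change" heuristic. Purely hypothetical: real
# audiences, statements, and status effects are far messier than a single
# scalar, which is exactly why this wouldn't be easy for an AI.

def predicted_status_change(statement: str, audience_member: str) -> float:
    """Stub for the hard part: estimating how a statement shifts one
    person's perceived social status. A human's 'social brain' does this
    implicitly; no simple calculation is known to reproduce it."""
    raise NotImplementedError("this is the part that isn't easy")

def likely_offensive(statement: str, audience: list[str],
                     threshold: float = -0.1) -> bool:
    """Flag the statement if any audience member is predicted to lose
    more than `threshold` worth of status (broadly construed)."""
    return any(predicted_status_change(statement, person) < threshold
               for person in audience)
```

For a human writer the sketch still works as a prompt: run the loop in your head, let your social brain play the role of `predicted_status_change`, and treat any strongly negative answer as a warning sign.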