Perils of optimizing in social contexts
As a special case of the advice not to over-optimize things, I want to point out that an optimizing approach can have a bunch of subtle negative effects in social contexts. The slogan is something like “optimizers make bad friends”: people don’t like being treated as a means rather than an end, and if they get the vibe from social interactions that you’re trying to steer them into something, then they may react badly.
Here I’m referring to things like:
“How do I optimize this conversation to get X to say yes?”
“How do I persuade Y of this particular fact?”
“How can I seem particularly cool/impressive to these people?”,
… where it’s not clear that the other people are willing collaborators in your social objectives.
So I don’t mean to include optimizing for things like:
“How can I give the clearest explanation of this thing, so people aren’t confused about what I’m saying?”
“How can I help Z to feel comfortable and supported?”
I think that people (especially smart people) are often pretty good at picking up the vibe that someone is trying to steer them into something (even from relatively little data). When people do notice, it’s often correct for them to treat you as adversarial and penalize you for it. This is for two reasons:
A Bayesian/epistemic reason: people who are optimizing for particular outcomes selectively share information that pushes towards those outcomes. So if you think someone is doing [a lot of] this optimization, your best estimate of the true strength of the position they’re pushing should be [much] lower than if you think they’re optimizing less (given otherwise the same observations).
Toy example: if Alice and Bob are students each taking 12 subjects, and you randomly find out Alice’s chemistry grade and it’s a B+, and you hear Bob bragging that his history grade is an A-, you might guess that Alice is overall the stronger student, since it’s decently likely that Bob chose his best grade to brag about. (A small simulation of this selection effect appears after this list.)
An incentives reason: we want to shape the social incentive landscape so that people aren’t rewarded for trying to manipulate us. If we only make the Bayesian correction, it will still often be worth it for people to invest some effort in manipulation (in principle, they know better than we do how much information to reveal in order to leave us with the best impression after Bayesian updating).
I don’t think the extra penalty for incentive shaping needs to be very big, but I do think it should be nonzero.
Actually, their update should be larger than the pure Bayesian one, to compensate for the times when they don’t notice what is happening; this is essentially the same argument as given in Sections III and IV of the (excellent) Integrity for Consequentialists.
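To make the selection effect in the Bayesian point concrete, here is a minimal Python sketch of the Alice/Bob example. Everything in it is an illustrative assumption rather than anything from this post: the normal model of a latent “ability” plus per-subject noise, the particular spreads, and the common GPA mapping of B+ ≈ 3.3 and A- ≈ 3.7. The point it illustrates is just that a grade you know was selected as someone’s best of twelve should move your estimate of them much less than a grade revealed at random.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200_000  # number of simulated students
K = 12       # subjects per student

# Assumed toy model (not from the post): each student has a latent "ability"
# on a GPA-like scale, and each subject grade is ability plus independent noise.
ability = rng.normal(3.0, 0.4, size=N)
grades = ability[:, None] + rng.normal(0.0, 0.4, size=(N, K))

# Alice-style evidence: one subject's grade revealed at random.
random_grade = grades[np.arange(N), rng.integers(0, K, size=N)]
# Bob-style evidence: the grade he chose to brag about, i.e. his best one.
best_grade = grades.max(axis=1)

def mean_ability_given(revealed, value, tol=0.05):
    """Average true ability among students whose revealed grade is near `value`."""
    mask = np.abs(revealed - value) < tol
    return ability[mask].mean()

# Using a common GPA mapping (B+ ~ 3.3, A- ~ 3.7):
print("ability | random grade is a B+ :", round(mean_ability_given(random_grade, 3.3), 2))
print("ability | random grade is an A-:", round(mean_ability_given(random_grade, 3.7), 2))
print("ability | best of 12 is an A-  :", round(mean_ability_given(best_grade, 3.7), 2))
```

With these assumed spreads the third number comes out below the first: once you discount the bragged A- for having been selected, the randomly revealed B+ is, if anything, better evidence of a strong student overall. The exact ordering depends on the variances you assume, but the direction of the discount doesn’t.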
Correspondingly, I think that we should be especially cautious of optimizing in social contexts to try to get particular responses out of people who we hope will be our friends/allies. (This is importantly relevant for community-building work.)
I tend to over-optimize in social contexts, and I’m lucky to have friends who understand my vagaries. They do, though, find it somewhat disturbing that I seem to be actively trying to maximize the total benefit we both get out of every conversation, as if it were a kind of job, but I have a hard time stopping myself from thinking that way.