I think there are at least two levels at which you want change to happen: on an individual level, you want people to stop doing a thing that hurts you, and on a societal level, you want society to be structured so that you and others don’t keep having the same or similar experiences.
The second thing is going to be hard, and likely impossible to do completely. But the first thing… responding to this:
It wouldn’t be so bad, if I only heard it fifty times a month. It wouldn’t be so bad, if I didn’t hear it from friends, family, teachers, colleagues. It wouldn’t be so bad, if there were breaks sometimes.
I think it would be healthy and good, and would enable you to be more effective at creating the change you want in society, if you could arrange for there to be some breaks sometimes. I see in the comments that you don’t want to solve things completely on your individual level yet, because there’s a societal problem to solve and you don’t want to lose your motivation, and I get that. (EDIT: I realize that I’m projecting/guessing here a bit, which is dangerous if I guess wrong and you feel erased as a result… so I’m going to flag this as a guess and not something I know. But my guess is that the something precious you would lose by caring less about these papercuts has to do with a motivation to fix the underlying problem for a broader group of people.) But if you are suffering emotional hurt to an extent that is beyond your ability to cope, and you’re responding to people in ways you don’t like or retrospectively endorse, then taking some action to dial the papercut/poke-the-wound frequency back a bit among the people you interact with the most is probably called for.
With that said, it seems to me that while it may be hard to fix society, the few trusted (and, I assume, mostly fairly smart) people you interact with most frequently can be guided to avoid this error: by learning the things about you that don’t fit into their models of “everyone”, and by learning that it would really help if they said “almost all” rather than “all”. People in general may have to rely on models and heuristics into which you don’t fit, but your close friends and family can learn who you are and how to stop poking your sore spots. This gives you a core group of people you can go be with when you want a break from society in general, and some time to recharge so you can better reengage with changing that society.
As for fixing society, I said above that it may be impossible to do completely, but if I were trying for the most good for the greatest number, my angle of attack would be to make a list of the instances where people are typical-minding you, and order that list by how uncommon the attribute they’re assuming away actually is. Some aspects of your cognition or personality may be genuinely and literally unique, while others that get elided may be shared by 30% of the population, just not by anyone in the social bubble of the person you happen to be speaking to. The least uncommon things are going to be both the easiest to build a constituency around and get society to adjust to, and the ones where the most people benefit from the change when it happens.
I have a question related to the “Not the same person” part, the answer to which is a crux for me.
Let’s suppose you are imagining a character who is experiencing some feeling. Can that character be feeling what it feels, while you feel something different? Can you be sad while your character is happy, or vice versa?
I find that I can’t. If I imagine someone happy, I feel what I imagine they are feeling; this is the appeal of daydreams. If I imagine someone angry during an argument, I myself feel that feeling. There is no other person in my mind having a separate feeling. I don’t think I have the hardware to feel two people’s worth of feelings at once. I think what’s happening is that my neural hardware is being hijacked to run a simulation of a character, and while this is happening I enter into the mental state of that character, and in important respects my other thoughts and feelings on my own behalf stop.
So for me, I think my mental powers are not sufficient to create a moral patient separate from myself. I can set my mind to simulating what someone different from real-me would be like, and have the thoughts and feelings of that character follow different paths than my own thoughts would, but I understand “having a conversation between myself and an imagined character”, which you treat as evidence that there are two people involved, as a kind of task-switching, processor-sharing arrangement: there are bottlenecks in my brain that prevent me from running two people at once, and the closest I can come is thinking as one conversation partner, then the next, then back to the first. I can’t, for example, have one conversation partner say something while the other isn’t paying attention (because they’re busy thinking of what to say next), catches only half of what was said, and so responds inappropriately, which I hear is not uncommon in real conversations between two people. And if the imagined conversation involves a pause which, in a conversation between two real people, would involve two internal mental monologues, I can’t have those two monologues at once. I fully inhabit each simulated/imagined character as it is speaking, and only one at a time as it is thinking.
If this is true for you as well, then in a morally relevant respect I would say that you and whatever characters you create are only one person. If you create a character who is suffering, and inhabit that character mentally such that you are suffering, that’s bad because you are suffering, but it’s not 2x bad because you and your character are both suffering—in that moment of suffering, you and your character are one person, not two.
I can imagine a future AI with the ability to create and run multiple independent human-level simulations of minds, watch them interact and learn from that interaction, and perhaps go off and do something in the world while those simulations persist without it being aware of their experiences any more. For such an AI, I would say it ought not to create entities that have bad lives. And if you can honestly say that your brain is different from mine in such a way that you can imagine a character and have the mental bandwidth to run it fully independently from yourself, with its own feelings that you know about somehow other than by having it hijack the feeling-bits of your brain and use them to generate feelings which you feel while what you were feeling before is temporarily on pause (which is how I experience the feelings of characters I imagine), and because of this separation you could wander off and do other things with your life while that character suffers horribly, with no ill effects to you except the feeling that you’d done something wrong… then yeah, don’t do that. If you could do it for more than one imagined character at a time, that’s worse; definitely don’t.
But if you’re like me, I think “you imagined a character and that character suffered” is functionally/morally equivalent to “you imagined a character and one person (call it you or your character, it doesn’t matter) suffered”. In principle that’s bad unless there’s some greater good to be had from it, but it’s not worse than you suffering for some other reason.