Not sure if this is relevant, but I was thinking why exactly I hate the idea of others reading my mind...
First, my mind being opaque provides me some slack. There are probably many bugs in my thinking, but if you cannot see them, you cannot exploit them. Of course, others may gradually notice them by observing my behavior, but by then there is also a chance that I will have noticed them and fixed them myself. It’s not perfect, but it’s better than being completely transparent. Basically, there is an arms race between bad actors trying to exploit my bugs and me trying to fix them; mind-reading gives the bad actors an advantage. (Though there are situations, for example a session with an aligned therapist or a trusted friend, where temporary transparency is useful.)
It’s not just bugs, but also the ability to predict the success of blackmail. Right now, an incorrect guess is costly for the blackmailer, which deters attempts; if people could run simulations of me and determine in advance which attempts to blackmail me would succeed and which would fail, that risk would disappear, so I would expect more attempts. Even worse if this could somehow be automated (like, you invent a new method of blackmailing people, then ask ChatGPT to scan thousands of people and provide a sorted list of those who would react most desirably from your perspective).
My second objection is that even if mind-reading were only available under certain circumstances, this would create an incentive for people in power to push others into those circumstances. For example, if we imagine that some people are inherently transparent and others are inherently opaque, the transparent ones would be trusted more. So if there are, e.g., two candidates for a job, one transparent and one opaque, many employers would choose to hire the transparent one. (Except for situations where you need the employee to be opaque to a third party, e.g. a salesman. But even then, if such a thing is possible, you would prefer someone who is transparent to you and opaque to others.) The mentally opaque people would find themselves discriminated against.
On the other hand, suppose that whether your mind can be read depends on what you do. (Like, there are rituals to make your mind more transparent, and rituals to make your mind more opaque.) In that case I would expect the people in power to pressure me into making myself more transparent, and to punish me if I refuse. And if it is a mixed situation, where transparency is partially heritable and partially trained, the result would be the same.
This is a reason why it is good to object to mind-reading in general. Being transparent is bad for you, but being opaque in a situation where most people are transparent could also be bad for you, if that fact is known.
I think your comment is mostly relevant and lays out, mechanistically, how speculating about what someone else is thinking can lead to trying to control them (a sovereignty violation); i.e., from exfiltration to infiltration.
Also—
I updated the post to add two more examples of exfiltration: one pertaining to BATNAs, and one pertaining to energy/heat loss.
And I added a visualization of agents as blobs.