Thanks for separating evaluation of content from evaluation of form. That makes it easy for me to respond to your criticism of my form without worrying so much that it’s a move to suppress imperfectly expressed criticism.
The true causal answer is that when I perceive someone as appealing to a moralistic framework, I have a tendency to criticize their perspective from inside a moralistic frame, even though I don’t independently endorse moralizing. While this probably isn’t the best thing I could do if I were perfectly poised, I don’t think this is totally pointless either. Attempts to scapegoat someone via moralizing rely on the impression that symmetric moral reasoning is being done, so they can be disrupted by insistent opposition from inside that frame.
You might think of it as standing in territory I think someone else has unjustly claimed, and drawing attention to that fact. One might get punched sometimes in such circumstances, but that’s not so terrible; definitely not as bad as being controlled by fear, and it helps establish where recourse/justice is available and where it isn’t, which is important information to have! Occasionally bright young people with a moral compass get in touch with me because they can see that I’m conspicuously behaving in a not-ethically-backwards way in proximity to something interesting but sketchy that they were considering getting involved with. Having clear examples to point to is helpful, and confrontation produces clear examples.
A contributing factor is that I (and I think Jessica too) felt time pressure here because it seems to me like there is an attempt to build social momentum against a specific target, which transforms complaints from complementary contributions to a shared map, into competing calls for action. I was seriously worried that if I didn’t interrupt that process, some important discourse opportunities would be permanently destroyed. I endorse that concern.
> The true causal answer is that when I perceive someone as appealing to a moralistic framework, I have a tendency to criticize their perspective from inside a moralistic frame, even though I don’t independently endorse moralizing.
oh hmmm, well I gotta chew on that more but
Aella seems like a counter-productive person to stand your ground against. I sense her as mainly being an ‘advocate’ for Zoe. She claims to want to attack the bad people, but compared with other commenters, I sense less ‘mob violence’ energy from her and … maybe more fear that an important issue will be dropped / ignored. (I am not particularly afraid of this; the evidence against Leverage is striking and damning enough that it doesn’t seem like it will readily be dropped, even if the internet stops talking about it. In fact I hope to see the internet talking about it a bit less, as more real convos happen in private.)
I’m a bit worried about the way Scott’s original take may have pulled us towards a shared map too quickly. There’s also a general anti-jessicata vibe I’m getting from ‘the room’ but it’s non-specific and has a lot to do with karma vote patterns. Naming these here for the sake of group awareness and to note I am with you in spirit, not an attempt to add more politics or fighting.
> I was seriously worried that if I didn’t interrupt that process, some important discourse opportunities would be permanently destroyed. I endorse that concern.
Hmmmm I feel like advocating for a slightly different mental stance. Instead of taking it upon yourself to interrupt a process in order to gain a particular outcome, what if you did a thing in a way that inspires people to follow because you’re being a good role model? If you’re standing for what’s right, it can inspire people into also doing the right thing. And if no one follows you, you accept that as the outcome; rather than trying to ‘make sure’ something happens?
Attachment to an outcome (like urgently trying to avoid ‘opportunities being permanently destroyed’) seems like it subtly disempowers people and perpetuates more of the pattern that I think we both want less of in the world? Checking to see where a disagreement might be found…
I think it seems hard to find a disagreement because we don’t disagree about much here.
> Aella seems like a counter-productive person to stand your ground against. I sense her as mainly being an ‘advocate’ for Zoe. She claims to want to attack the bad people, but compared with other commenters, I sense less ‘mob violence’ energy from her
Aella was being basically cooperative in revealing some details about her motives, as was Logan. But that behavior is only effectively cooperative if people can use that information to build shared maps. I tried to do that in my replies, albeit imperfectly & in a way that picked a bit more of a fight than I ideally would have.
> I feel like advocating for a slightly different mental stance. Instead of taking it upon yourself to interrupt a process in order to gain a particular outcome, what if you did a thing in a way that inspires people to follow because you’re being a good role model?
At leisure, I do this. I’m working on a blog post trying to explain some of the structural factors that cause orgs like Leverage to go wrong in the way Zoe described. I’ve written extensively about both scapegoating and mind control outside the context of particular local conflicts, and when people seem like they’re in a helpable state of confusion I try to help them. I spent half an hour today using a massage gun on my belly muscles, which improved my reading comprehension of your comment and let me respond to it more intelligently.
But I’m in an adversarial situation. There are optimizing processes trying to destroy what I’m trying to build, trying to threaten people into abandoning their perspectives and capitulating to violence.
It seems like you’re recommending that I build new capacities instead of defending old ones. If I’m deciding between those, neither answer should win every time. Instead, for any process damaging me, I should compare these two quantities:
(A) The cost of replacement—how much would it cost me to repair the damage or build an equivalent amount of capacity elsewhere?
(B) The cost of preventing the damage.
I should work on prevention when B<A, and on replacement when A<B.
Since I expect my adversaries to make use of resources they seize to destroy more of what I care about, I need to count that towards the total expected damage caused (and therefore the cost of replacement).
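A minimal sketch of that comparison, with the adversary effect folded in as a multiplier on the replacement cost; the parameter names and the numbers are illustrative assumptions, not quantities from the discussion above:

```python
def should_prevent(replacement_cost, prevention_cost, adversary_multiplier=1.0):
    """Compare (A) the cost of replacing damaged capacity against
    (B) the cost of preventing the damage.

    adversary_multiplier is a hypothetical knob: it scales A upward when
    resources an adversary seizes would be used to destroy more of what
    I care about, so the effective replacement cost exceeds the direct one.
    """
    effective_replacement_cost = replacement_cost * adversary_multiplier  # A, adjusted
    return prevention_cost < effective_replacement_cost  # prevent when B < A

# Illustrative numbers only: direct replacement costs 4, prevention costs 5,
# but seized resources would roughly double the total damage done.
print(should_prevent(replacement_cost=4, prevention_cost=5, adversary_multiplier=2.0))  # True
```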
If I’d been able to costlessly pause the world for several hours to relax and think about the problem, I would almost certainly have been able to write a better reply to Aella, one that would score better on the metric you’re proposing, while perhaps still accomplishing my “defense” goals.
I’m taking Tai Chi lessons in large part because I think ability to respond to fights without getting triggered is a core bottleneck for me, so I’m putting many hours of my time into being able to perform better on that metric. But I’m not better yet, and I’ve got to respond to the situations I’m in now with the abilities I’ve got now.
Well I feel somewhat more relaxed now, seeing that you’re engaging in a pretty open and upfront manner. I like Tai Chi :)
The main disagreement I see is that you are thinking strategically and in a results-oriented fashion about actions you should take; you’re thinking about things in terms of resource management and cost-benefit analysis. I do not advocate for that. Although I get that my position is maybe weird?
I claim that kind of thinking turns a lot of situations into finite games. Which I believe then contributes to life-ending / world-ending patterns.
…
But maybe a more salient thing: I don’t think this situation is quite as adversarial as you’re maybe making it out to be? Or like, you seem to be adding a lot to an adversarial atmosphere, which might be doing a fair amount of driving towards more adversarial dynamics in the group in general.
I think you and I are not far apart in terms of values, and so … I kind of want to help you? But also … if you’re attached to certain outcomes being guaranteed, that’s gonna make it hard…
I don’t understand where guarantees came into this. I don’t understand how I could answer a question of the form “why did you do X rather than Y” without making some kind of comparison of the likely outcomes of X and Y.
I do know that in many cases people falsely claim to be comparing costs and benefits honestly, or falsely claim that some resource is scarce, as part of a strategy of coercion. I have no reason to do this to myself but I see many people doing it and maybe that’s part of what turned you off from the idea.
On the other hand, there’s a common political strategy where a dominant coalition establishes a narrative that something should be provided universally without rationing, or that something should be absolutely prevented without acknowledging taboo tradeoffs. Since this policy can’t be implemented as stated, it empowers people in the position to decide which exceptions to make, and benefits the kinds of people who can get exceptions made, at the expense of less centrally connected people.
It seems to me like thinking about tradeoffs is the low-conflict alternative to insisting on guaranteed outcomes.
Generalizing from your objection to thinking about things in terms of resource management and cost-benefit analysis and your reaction to Eli’s summary of Michael and Spencer’s podcast, it seems like you’re experiencing a strong aversion (though not an infinitely strong one, since you said you might try listening to the podcast) to assimilating information about conflict or resource constraints, which will make it hard for you to understand behaviors determined by conflicts or resource constraints, which is a LOT of behavior.*
If you can point out specific mistakes I’m making, or at least try to narrow down your sense that I’m falsely assuming adversariality, we can try to discuss it.
*But not all. Sexual selection seems like a third thing, though it might only be common because it helps evolution find solutions to the other two—it would be surprising to see a lot of sexual selection across many species on a mature planet if it didn’t pay rent somehow.
Uhhh sorry, the thing about ‘guarantees’ was probably a mis-speak.
For reference, I used to be a competitive gamer. This meant I used to use resource management and cost-benefit analysis a lot in my thinking. I also ported those framings into broader life, including how to win social games. I am comfortable thinking in terms of resource constraints, and lived many years of my life in that mode. (I was very skilled at games like MTG, board games, and Werewolf/Mafia.)
I have since updated to realize how that way of thinking was flawed and dissociated from reality.
> I don’t understand how I could answer a question of the form “why did you do X rather than Y” without making some kind of comparison of the likely outcomes of X and Y.
I wrote a whole response to this part, but … maybe I’m missing you.
Thinking strategically seems fine to the extent that one is aligned with love / ethics / integrity and not acting out of fear, hate, or selfishness. The way you put your predicament caused me to feel like you were endorsing a fear-aligned POV.
> “Since I expect my adversaries to make use of resources they seize to destroy more of what I care about,”

> “But I’m in an adversarial situation. There are optimizing processes trying to destroy what I’m trying to build, trying to threaten people into abandoning their perspectives and capitulating to violence.”
The thing I should have said… was not about the strategy subplot, sorry, … rather, I have an objection to the seeming endorsement of acting from a fear-aligned place. Maybe I was acting out of fear myself… and failed to name the true objection.
…
Those above quotes are the strongest evidence I have that you’re assuming adversarial-ness in the situation, and I do not currently know why you believe those quoted statements. Like the phrase about ‘adversaries’ sounds like you’re talking about theoretical ghosts to me. But maybe you have real people in mind. I’m curious if you want to elaborate.
> the phrase about ‘adversaries’ sounds like you’re talking about theoretical ghosts to me. But maybe you have real people in mind.
I’m talking about optimizing processes coordinating with copies of themselves, distributed over many people. My blog post “Civil Law and Political Drama” is a technically precise description of this, though “Towards optimal play as Villager in a mixed game” adds some color that might be helpful. I don’t think my interests are opposed to the autonomous agency of almost anyone. I do think that some common trigger/trauma behavior patterns are coordinating against autonomous human agency.
The gaming detail helps me understand where you’re coming from here. I don’t think the right way to manage my resource constraints looks very much like playing a game of MTG. I am in a much higher-dimensional environment where most of my time should be spent playing/exploring, or resolving tension patterns that impede me from playing/exploring. My endorsed behavior pattern looks a little more like the process of becoming a good MTG player, or discovering that MTG is the sort of thing I want to get good at. (Though empirically that’s not a game it made sense to me to invest in becoming good at—I chose Tai Chi instead for reasons!)
> rather, I have an objection to the seeming endorsement of acting from a fear-aligned place.
I endorse using the capacities I already have, even when those capacities are imperfect.
When responding to social conflict, it would almost always be more efficient and effective for me to try to clarify things out of a sense of open opportunity, than from a fear-based motive. This can be true even when a proper decision-theoretic model of the situation would describe it as an adversarial one with time pressure; I might still protect my interests better by thinking in a free and relaxed way about the problem, than tensing up like a monkey facing a physical threat.
But a relaxed attitude is not always immediately available to me, and I don’t think I want to endorse always taking the time to detrigger before responding to something in the social domain.
Part of loving and accepting human beings as they are, without giving up on intention to make things better, is appreciating and working with the benefits people produce out of mixed motives. There’s probably some irrational fear-based motivation in Elon Musk’s and Jeff Bezos’s work ethic, and maybe they’d have found more efficient and effective ways to help the world if their mental health were better, but I’m really, really glad I get to use Amazon, and that Tesla and SpaceX and Starlink exist, and it’s not clear to me that I’d want to advise younger versions of them to spend a lot of time working on themselves first. That seems like making purity the enemy of the good.
> optimizing processes coordinating with copies of themselves, distributed over many people
Question about balance: how do you not end up reifying these in your mind, creating a paranoid sense of ‘there be ghosts lurking in shadows’?
This question seems central to me because the poison I detect in Vassar-esque-speak is:
a) Memetically more contagious stories seem to include lurking ghosts / demons / shadows because adding a sense of danger or creating paranoia is sticky and salient. Vassar seems to like inserting a sense of ‘hidden danger’ or ‘large demonic forces’ into his theories and way of speaking about things. I’m worried this is done for memetic intrigue, viability, and stickiness, not necessarily because it’s more true. It makes people want to listen to him for long periods of time, but I don’t sense it being an openly curious kind of listening but a more addicted / hungry type of listening. (I can detect this in myself.)
I guess I’m claiming Vassar has an imbalance between the wisdom/truth of his words and the power/memetic viability of his words. With too much on the side of power.
b) Reifying these “optimizing processes coordinating” together, maybe “against autonomous human agency” or whatever… seems toxic and harmful for a human mind that takes these very seriously. Unless it comes with ample antidote in the form of (in my world anyway) a deep spiritual compassion / faith and a wisdom-oriented understanding of everyone’s true nature, among other things in this vein. But I don’t detect Vassar is offering this antidote, so it just feels like poison to me. One might call this poison a deep cynicism, lack of faith / trust, a flavor of nihilism, or “giving into the dark side.”
I do believe Vassar might, in an important sense, have a lot of faith in humanity… but nonetheless, his way of expressing gives off a big stench of everything being somehow tainted and bad. And the faith is not immediately detectable from listening to him, nor do I sense his love.
I kind of suspect that there’s some kind of (adversarial) optimization process operating through his expression, and he seems to have submitted to this willingly? And I am curious about what’s up with that / whether I’m wrong about this.
> Question about balance: how do you not end up reifying these in your mind, creating a paranoid sense of ‘there be ghosts lurking in shadows’?
Mostly just by trying to think about this stuff carefully, and check whether my responses to it add up & seem constructive. I seem to have been brought up somehow with a deep implicit faith that any internal problem I have, I can solve by thinking about—i.e. that I don’t have any internal infohazards. So, once I consciously notice the opportunity, it feels safe to be curious about my own fear, aggression, etc. It seems like many other people don’t have this faith, which would make it harder for them to solve this class of problem; they seem to think that knowing about conflicts they’re engaged in would get them hurt by making them blameworthy; that looking the thing in the face would mark them for destruction.
My impression is that insofar as I’m paranoid, this is part of the adversarial process I described, which seems to believe in something like ontologically fundamental threats that can’t be reduced to specific mechanisms by which I might be harmed, and have to be submitted to absolutely. This model doesn’t stand up to a serious examination, so examining it honestly tends to dissolve it.
I’ve found psychedelics helpful here. Psilocybin seems to increase the conscious salience of fear responses, which allows me to analyze them. In one of my most productive shrooms trips, I noticed that I was spending most of my time pretending to be a reasonable person, under the impression that an abstract dominator wouldn’t allow me to connect with other people unless I passed as a simulacrum of a rational agent. I noticed that it didn’t feel available to just go to the other room and ask my friends for cuddles because I wanted to, and I considered maybe just huddling under the blankets scared in my bedroom until the trip ended and I became a simulacrum again. Then I decided I had no real incentive to do this, and plenty of incentive to go try to interact with my friends without pretending to be a person, so I did that and it worked.
THC seems to make paranoid thoughts more conscious, which allows me to consciously work through their implications and decide whether I believe them.
I agree that stories with a dramatic villain seem more memetically fit and less helpful, and I avoid them when I notice the option to.
Thanks for your level-headed responses. At this point, I have nothing further to talk about on the object-level conversation (but open to anything else you want to discuss).
For information value, I do want to flag that…
I’m noticing an odd effect from talking with you. It feels like being under a weighted blanket or a ‘numbing’ effect. It’s neither pleasant nor unpleasant.
My sketchpad sense of it is: Leaning on the support of Reason. Something wants me to be soothed, to be reassured, that there is Reasonableness and Order, and it can handle things. That most things can be Solved with … correct thinking or conceptualization or model-building or something.
So, it’s a projection and all, but I don’t trust this “thing” whatever it is, much. It also seems to have many advantages. And it may make it pretty hard for me to have a fully alive and embodied conversation with you.
Curious if any of this resonates with you or with anyone else’s sense of you, or if I’m off the mark. But um also this can be ignored or taken offline as well, since it’s not adding to the overall conversation and is just an interpersonal thing.
I did feel inhibited from having as much fun as I’d have liked to in this exchange because it seemed like while you were on the whole trying to make a good thing happen, you were somewhat scared in a triggered and triggerable way. This might have caused the distortion you’re describing. Helpful and encouraging to hear that you picked up on that and it bothered you enough to mention.
Your response here is really perplexing to me and didn’t go in the direction I expected at all. I am guessing there’s some weird communication breakdown happening. ¯\_(ツ)_/¯ I guess all I have left is: I care about you, I like you, and I wish well for you. <3
It seems like you’re having difficulty imagining that I’m responding to my situation as I understand it, and I don’t know what else you might think I’m doing.
I read the comment you’re responding to as suggesting something like “your impression of Unreal’s internal state was so different from her own experience of her internal state that she’s very confused”.
I was relying on her self-reports, like https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe#g9vLjj7rpGDH99adj