Hmm. Under your model, are there ways that parts gain/lose (steam/mindshare/something)?
Does it feel to you as though your epistemic habits / self-trust / intellectual freedom and autonomy / self-honesty take a hit here?
Fair point; I was assuming you had the capacity to lie/omit/deceive, and you’re right that we often don’t, at least not fully.
I still prefer my policy to the OP’s, but I accept your argument that mine isn’t a simple Pareto improvement.
Still: I really don’t like letting social forces put “don’t think about X” flinches into my or my friends’ heads; and the OP’s policy seems to me like an instance of that.
Much less importantly: as an intelligent/self-reflective adult, you may be better at hiding info if you know what you’re hiding, compared to if you have guesses you’re not letting yourself see, that your friends might still notice. (The “don’t look into dragons” path often still involves hiding info, since often your brain takes a guess anyhow, and that’s part of how you know not to look into this one. If you acknowledge the whole situation, you can manage your relationships consciously, including taking conscious steps to buy openness-offsets, and to stay freely and transparently friends where you can work out how.)
I don’t see an advantage to remaining agnostic, compared to:
1) Acquire all the private truth one can.
Plus:
2) Tell all the public truth one is willing to incur the costs of, with priority for telling public truths about what one would and wouldn’t share (e.g., prioritizing not posing as more truth-telling than one is).
--The reason I prefer this policy to the OP’s “don’t seek truth on low-import highly-politicized matters” is that I fear not-seeking-truth begets bad habits. Also I fear I may misunderstand how important things are if I allow politics to influence which topics-that-interest-my-brain I do/don’t pursue, compared to my current policy of having some attentional budget for “anything that interests me, whether or not it seems useful/virtuous.”
Yes, this is a good point; it relates to why I claimed at the top that this is an oversimplified model. I appreciate your using logic from my stated premises; it helps things be falsifiable.
It seems to me:
Somehow people who are in good physical health wake up each day with a certain amount of restored willpower. (This is inconsistent with the toy model in the OP, but is still my real / more-complicated model.)
Noticing spontaneously-interesting things can be done without willpower; but carefully noticing superficially-boring details and taking notes in hopes of later payoff indeed requires willpower, on my model. (Though, for me, less than e.g. going jogging requires.)
If you’ve just been defeated by a force you weren’t tracking, that force often becomes spontaneously-interesting. Thus people who are burnt out can sometimes take a spontaneous interest in how willpower/burnout/visceral motivation works, and can enjoy “learning humbly” from these things.
There’s a way burnout can help cut through ~dumb/dissociated/overconfident ideological frameworks (e.g. “only AI risk is interesting/relevant to anything”), and make space for other information to have attention again, and make it possible to learn things not in one’s model. Sort of like removing a monopoly business from a given sector, so that other thingies have a shot again.
I wish the above were more coherent/model-y.
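(As one stab at making it more model-y: below is a minimal toy simulation of the dynamics above. All constants and topic names are made-up illustrative assumptions, not anything claimed in the comments; the point is just that willpower drains when effortful attention outpaces the daily restore, and that at burnout, curiosity re-routes toward the force that caused the defeat.)

```python
# Toy sketch of the willpower/burnout dynamics above.
# All constants and topic names are made up for illustration.

DAILY_RESTORE = 3    # willpower restored by a night's sleep (toy units)
EFFORTFUL_COST = 5   # cost of carefully noticing superficially-boring details

willpower = 10
spontaneously_interesting = {"ideology"}  # where curiosity currently lands

for day in range(1, 31):
    willpower += DAILY_RESTORE

    # Noticing spontaneously-interesting things is free.
    studied_today = set(spontaneously_interesting)

    # Careful note-taking in hopes of later payoff costs willpower.
    if willpower >= EFFORTFUL_COST:
        willpower -= EFFORTFUL_COST
        studied_today.add("boring-but-promising details")
    else:
        # Burnout: the untracked force that just defeated you becomes
        # spontaneously interesting, and the old monopoly loses its grip.
        spontaneously_interesting.discard("ideology")
        spontaneously_interesting.add("willpower mechanics")

    print(f"day {day}: willpower={willpower}, studied={sorted(studied_today)}")
```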
Thanks for asking. The toy model of “living money”, and the one about willpower/burnout, are meant to appeal to people who don’t necessarily put stock in Rand; I’m trying to have the models speak for themselves; so you probably *are* in my target audience. (I only mentioned Rand because it’s good to credit models’ originators when using their work.)
Re: what the payout is:
This model suggests what kind of thing an “ego with willpower” is — where it comes from, how it keeps itself in existence:
By way of analogy: a squirrel is a being who turns acorns into poop, in such a way as to be able to do more and more acorn-harvesting (via using the first acorns’ energy to accumulate fat reserves and knowledge of where acorns are located).
An “ego with willpower”, on this model, is a ~being who turns “reputation with one’s visceral processes” into actions, in such a way as to be able to garner more and more “reputation with one’s visceral processes” over time. (Via learning how to nourish viscera, and making many good predictions.)
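(A minimal sketch of that loop, under made-up assumptions; the costs, payoffs, and hit rates below are illustrative numbers, not part of the model:)

```python
import random

# Toy sketch of an "ego with willpower": it spends reputation-with-the-viscera
# on actions; actions that viscerally pay off repay more than they cost, so a
# well-calibrated ego compounds its reputation, like the squirrel with acorns.

def run_ego(hit_rate: float, days: int = 100) -> float:
    """hit_rate: fraction of actions that actually nourish the viscera."""
    reputation = 10.0
    for _ in range(days):
        if reputation < 1.0:   # the viscera stop extending credit: burnout
            break
        reputation -= 1.0      # each action draws on visceral trust
        if random.random() < hit_rate:
            reputation += 1.5  # a payoff the viscera can actually feel
    return reputation

random.seed(0)
print(run_ego(hit_rate=0.8))  # good predictor of payoffs: reputation compounds
print(run_ego(hit_rate=0.3))  # poor predictor: goes "broke", i.e. burns out
```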
I find this a useful model.
One way it’s useful:
IME, many people think they get willpower by magic (unrelated to their choices, surroundings, etc., although maybe related to sleep/food/physiology), and that they should use their willpower for whatever some abstract system tells them is virtuous.
I think this is a bad model (makes inaccurate predictions in areas that matter; leads people to have low capacity unnecessarily).
The model in the OP, by contrast, suggests that it’s good to take an interest in which actions produce something you can viscerally perceive as meaningful/rewarding/good, if you want to be able to motivate yourself to take actions.
(IME this model works better than does trying to think in terms of physiology solely, and is non-obvious to some set of people who come to me wondering what part of their machine is broken-or-something such that they are burnt out.)
(Though FWIW, IME physiology and other basic aspects of well-being also have important impacts, and food/sleep/exercise/sunlight/friends are also worth attending to.)
Ayn Rand’s model of “living money”; and an upside of burnout
I mean, I see why a party would want their members to perceive the other party’s candidate as having a blind spot. But I don’t see why they’d be typically able to do this, given that the other party’s candidate would rather not be perceived this way, the other party would rather their candidate not be perceived this way, and, naively, one might expect voters to wish not to be deluded. It isn’t enough to know there’s an incentive in one direction; there’s gotta be more like a net incentive across capacity-weighted players, or else an easier time creating appearance-of-blindspots vs creating visible-lack-of-blindspots, or something. So, I’m somehow still not hearing a model that gives me this prediction.
You raise a good point that Susan’s relationship to Tusan and Vusan is part of what keeps her opinions stuck/stable.
But I’m hopeful that if Susan tries to “put primary focal attention on where the scissors comes from, and how it is working to trick Susan and Robert at once”, this’ll help with her stuckness re: Tusan and Vusan. Like, it’ll still be hard, but it’ll be less hard than “what if Robert is right” would be.
Reasons I’m hopeful:
I’m partly working from a toy model in which (Susan and Tusan and Vusan) and (Robert and Sobert and Tobert) all used to be members of a common moral community, before it got scissored. And the norms and memories of that community haven’t faded all the way.
Also, in my model, Susan’s fear of Tusan’s and Vusan’s punishment isn’t mostly fear of e.g. losing her income or other material-world costs. It is mostly fear of not having a moral community she can be part of. Like, of there being nobody who upholds norms that make sense to her and sees her as a member-in-good-standing of that group of people-with-sensible-norms.
Contemplating the scissoring process… does risk her fellowship with Tusan and Vusan, and that is scary and costly for Susan.
But:
a) Tusan and Vusan are not *as* threatened by it as if Susan had e.g. been considering more directly whether Candidate X was good. I think.
b) Susan is at least partially compensated for her partial risk of losing Tusan and Vusan by the hope/memory of the previous society that (Susan and Tusan and Vusan) and (Robert and Sobert and Tobert) all shared, which she has some hope of reaccessing here.
b2) Tusan and Vusan are maybe also a bit tempted by this, which on their simpler models (since they’re engaging with Susan’s thoughts only very loosely / from a distance, as they complain about Susan) renders as “maybe she can change some of the candidate X supporters, since she’s discussing how they got tricked.”
c) There are maybe some remnant-norms within the larger (pre-scissored) community that can appreciate/welcome Susan and her efforts.
I’m not sure I’m thinking about this well, or explicating it well. But I feel there should be some unscissoring process?
I don’t follow this model yet. I see why, under this model, a party would want the opponent’s candidate to enrage people / have a big blind spot (and how this would keep the extremes on their side engaged), but I don’t see why this model would predict that they would want their own candidate to enrage people / have a big blind spot.
Thanks; I love this description of the primordial thing; I had not noticed it this clearly/articulately before, and it is helpful.
Re: why I’m hopeful about the available levers here:
I’m hoping that, instead of Susan putting primary focal attention on Robert (“how can he vote this way, what is he thinking?”), Susan might be able to put primary focal attention on the process generating the scissors statements: “how is this thing trying to trick me and Robert, how does it work?”
A bit like how a person watching a commercial for sugary snacks, instead of putting primary focal attention on the smiling person on the screen who seems to desire the snacks, might instead put primary focal attention on “this is trying to trick me.”
(My hope is that this can become more feasible if we can provide accurate patterns for how the scissors-generating-process is trying to trick Susan(/Robert). And that if Susan is trying to figure out how she and Robert were tricked, by modeling the tricking process, this can somehow help undo the trick, without needing to empathize at any point with “what if candidate X is great.”)
Or: by seeing themselves, and a voter for the other side, as co-victims of an optical illusion, designed to trick each of them into being unable to find one another’s areas of true seeing. And by working together to figure out how the illusion works, while seeing it as a common enemy.
But my specific hypothesis here is that the illusion works by misconstruing the other voter’s “Robert can see a problem with candidate Y” as “Robert can’t see the problem with candidate X”, and that if you focus on trying to decode the former, the illusion won’t kick in as much.
By parsing the other voter as “against X” rather than “for Y”, and then inquiring into how they see X as worth being against, and why, while trying really hard to play taboo and avoid ontological buckets.
Huh. Is your model that surpluses are all inevitably dissipated in some sort of waste/signaling cascade? This seems wrong to me but also like it’s onto something.
I like your conjecture about Susan’s concern about giving Robert steam.
I am hoping that if we decode the meme structure better, Susan could give herself and Robert steam re: “maybe I, Susan, am blind to some thing, B, that matters” without giving steam to “maybe A doesn’t matter, maybe Robert doesn’t have a blind spot there.” Like, maybe we can make a more specific “try having empathy right at this part” request that doesn’t confuse things the same way. Or maybe we can make a world where people who don’t bother to try that look like schmucks who aren’t memetically savvy, or something. I think there might be room for something like this?
If we can get good enough models of however the scissors-statements actually work, we might be able to help more people be more in touch with the common humanity of both halves of the country, and more able to heal blind spots.
E.g., if the above model is right, maybe we could tell at least some people “try exploring the hypothesis that Y-voters are not so much in favor of Y, as against X—and that you’re right about the problems with Y, but they might be able to see something about X that you and almost everyone you talk to are systematically blinded to.”
We can build a useful genre-savviness about common/destructive meme patterns and how to counter them, maybe. LessWrong is sort of well-positioned to be a leader there: we have analytic strength, and aren’t too politically mindkilled.
Scissors Statements for President?
I don’t know the answer, but it would be fun to have a Twitter comment with a zillion likes asking Sam Altman this question. Maybe someone should make one?
I’ve bookmarked this; thank you; I expect to get use from this list.
“Global evaluation” isn’t exactly what I’m trying to posit; more like a “things bottom-out in X currency” thing.
Like, in the toy model about $ from Atlas Shrugged, an heir who spends money foolishly eventually goes broke, and can no longer get others to follow their directions. This isn’t because the whole economy gets together to evaluate their projects. It’s because they spend their currency locally on things again and again, and the things they bet on do not pay off, do not give them new currency.
I think the analog happens in me/others: I’ll get excited about some topic, pursue it for a while, get back nothing, and decide the generator of that excitement was boring after all.
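(A minimal sketch of that dynamic, with made-up costs and payoffs standing in for whatever the real visceral bookkeeping is: each pursuit is a local bet, bets that pay off replenish the excitement budget, and a generator that never pays off goes “broke” without any global evaluation ever happening.)

```python
# Toy sketch of the "living money" dynamic for excitement-about-topics.
# Costs and payoffs are made-up numbers, for illustration only.

def pursue(payoff_per_round: float, budget: float = 5.0, cost: float = 1.0) -> int:
    """Return how many pursuits happen before the excitement goes broke."""
    rounds = 0
    while budget >= cost and rounds < 1000:
        budget += payoff_per_round - cost  # local bet: spend, maybe get repaid
        rounds += 1
    return rounds

print(pursue(payoff_per_round=1.5))  # bets keep paying off: interest sustains itself
print(pursue(payoff_per_round=0.2))  # bets return little: "boring after all"
```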