Just want to echo: thanks for doing this. This is awesome.
Your post got me thinking about some stuff I’ve been dealing with, and I think it helped me make some progress on it almost instantly. I don’t think the mechanisms are quite the same, but thinking about your experience induced me to have useful realizations about myself. I’ll share in case it’s useful to someone else:
It sounds like your self-concept issue was rooted in “having a negative model of yourself deeply ingrained”, which induced neuroses in your thoughts/behaviors that attempted to compensate for it / search around for ways to convince yourself it wasn’t true. And that the ‘fix’, sorta, was revisiting where that model came from and studying it and realizing that it wasn’t fair or accurate and that the memories in which it was rooted could be reinterpreted or argued against.
I thought about this for a while, and couldn’t quite fit my own issues into the model. So instead I zoomed out a bit and tried this approach: it seems like the sensation of shame, especially when no one else is around, must be rooted in something else, and when I feel shame I ought to look closer and figure out why, as it’s a huge >>LOOK RIGHT HERE<< sign pointing to a destructive loop of thoughts (destructive because, well, if I’m feeling shame about the same thing for years on end, and yet it’s not changing, clearly it’s not helping me in any way to feel that way—so I ought, for my health, to either fix it or stop feeling it).
[Aside: in my experience, the hallmark thought pattern of depression is loops: spirals of thoughts that are negative and cause anxiety / self-loathing, but don’t provide a fix and have no mechanism for ‘going away’, so they just continue to spiral, negatively affecting mood and self-esteem, and probably causing destructive behaviors that give you more to feel anxious / hateful about. And I’ve observed that, for me at least, it’s very hard to reason rationally about yourself in private, so talking to people (therapists, friends, strangers, whatever) and being forced to vocalize and answer questions about your thought spirals can help you see logical ways to ‘step out of them’ that never seem clear when you’re just thinking by yourself. Or whatever—I could probably write about how this works for hours.]
So I looked closer at where my shame came from, and found that it wasn’t that I had a negative self-concept on its own (something like “I am X”, where X is negative), but rather that I was constantly seeing in my world reminders of someone I felt like I should have been, in a sense. I felt like I had been an extremely smart, high-potential kid growing up, but at some point, video game addiction + sleep deprivation + irresponsibility + depression had diverted me off that path, and ever since I have been constantly reminded of that fact and feel shame for not being that person. So I guess I had (have) a self-concept of ‘being a failed version of who I could have been’, or ‘having never reached my potential’.
For some concrete examples:
When I saw my reflection in things, I would criticize myself for seeming not-normal, goofy, or not.. like.. masculine enough? for a mid-20s male. Not that I wanted to be, like, buff, but I wanted to be a person who wouldn’t strike others as goofy-looking. Instead I always see my bad posture from computer use and my haircut that I’m never happy with, and get stuck in loops looking at myself in the mirror, trying to figure out what I need to work on to fix it (work out this or that muscle, do yoga, figure out how to maintain a beard, whatever).
A lot of times when I read really brilliant essays, on LW or other blogs, about subjects I’m into, especially by autodidact/polymath types, I’d feel really bad because I felt like I could have been one of those people but had failed to materialize. So I’d be reminded that I need to study more math, and write more, and read more books, and all these other things, in order to get there.
These are thoughts I have been having dozens of times a day.
The second big realization: motivation born of shame is almost completely useless. Seeing your flaws and wanting to change them causes negative emotions in the moment, but it doesn’t really lead to action, ever. A person who feels bad about being lanky doesn’t often go to the gym, because that impulse isn’t coming from a positive place and the whole action is closely coupled to negativity and self-loathing. And a person feeling bad for not being a clever polymath doesn’t.. become one.. from negativity; that takes years of obsession and other behaviors that you can’t cultivate through self-loathing.
(Well, it’s possible that shame can induce motivation for immediate fixes, but I’m sure it doesn’t cause long-term changes. I suspect that requires a desire to change that comes from a positive, empowered mindset.)
I’m not entirely sure what the ‘permanent’ fix for this is—it doesn’t seem to be as simple as redefining my self-concept to not want to be these people. But realizing this was going on in this way seemed like a huge eye-opening realization and almost immediately changed how I was looking at my neurotic behaviors / shames, and I think it’s going to lead to progress. The next step, for now, I think, is focusing on mindfulness in an effort to become more able to control and ignore these neurotic shame feelings, now that I’ve convinced myself that I understand where they’re coming from, and that they’re unfair and irrational.
TLDR
feelings of shame / neurotic spirals = places to look closely at in your psyche. They’re probably directly related to self-concept issues.
it’s possible for negativity to come not directly from your self-concept, but from your concept of who you ‘should have been’ or ‘could have been’.
shame-induced motivation is essentially useless. For me, at least. I’ve been trying to channel it into lifestyle changes for years with essentially zero results.
Yeah, that’s the exact same conclusion I’m pushing here. That and “you should feel equipped to come to this conclusion even if you’re not an expert.” I know.. several people, and have seen more online (including in this comment section) who seem okay with “yeah, it’s negative one twelfth, isn’t that crazy?” and I think that’s really not ok.
My friend who’s in a physics grad program promised me that it does eventually show up in QFT, and apparently also in nonlinear dynamics. Good enough for me, for now.
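For reference, the precise statement lurking behind all of this (a standard fact, not anything from my friend) is about analytic continuation of the Riemann zeta function:

\[
\zeta(s) = \sum_{n=1}^{\infty} n^{-s} \quad (\operatorname{Re} s > 1), \qquad \zeta(-1) = -\tfrac{1}{12},
\]

where the value at \(s = -1\) comes from the analytic continuation, not from literally summing the series. The physics applications (e.g. Casimir energy) use this continued value.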
The assumed opinions I’m talking about are not the substance of your argument; they’re things like “I think that most of these reactions are not only stupid, but they also show that American liberals inhabit a parallel universe”, and what is implied in the use of phrases like ‘completely hysterical’, ‘ridiculous’, ‘nonsensical’, ‘preposterous’, ‘deranged’, ‘which any moron could have done’, ‘basically a religion’, ‘disconnected from reality’, ‘save the pillar of their faith’, etc. You’re clearly not interested in discussion of your condemnation of liberals, and certainly not rational discussion. You take it as an obvious fact that they are all stupid, deranged morons.
So when you write “I’m also under no delusion that my post is going to have any effect on most of those who weren’t already convinced”, I think you are confused. People who don’t already agree with you won’t be convinced because you obviously disdain them and are writing with obviously massive bias against them. Not because their opinions are “basically a religion, which no amount of evidence can possibly undermine.”
I think your post would be much stronger if you just removed all your liberal-bashing entirely, quoted an example of someone saying hate crimes had gone up since Trump’s election, and then did the research. I’m totally opposed to polemics because I think they have no good results, especially the kind that is entirely pandering to one side and giving the finger to the other. (I also think you’re wildly incorrect in your understanding of liberals, as revealed by some of your weird stereotypes, but this is not the place to try to convince you otherwise.) But I guess if that’s the way people write in a certain community and you’re writing for that community, you may as well join in. I prefer to categorically avoid communities that communicate like that—I’ve never found anything like rational discussion in one.
I also think such obvious bias makes your writing weaker even for people on your side. It’s hard to take writing seriously that is clearly motivated by such an agenda and is clearly trying to get you to rally with it in your contempt for a common enemy.
It’s true that politics is generally discouraged around here. But, also—I’m the person who commented negatively on your post, and I want to point out that it wasn’t going to be well-received even if politics were okay here. You wrote in a style that assumed, without justification, that your readers hold a lot of opinions, which tends to alienate anyone who disagrees with you. Moreover, you write about those opinions as if they are not just true but obviously true, which tends to additionally infuriate anyone who disagrees with you. So I think your post’s style was a specific example of the kind of ‘mind-killing’ that should be avoided.
I appreciate exhaustive research of any kind, and the body of your post was good for that. But the style of the frame around it made it clear that you had extremely low opinions of a large group of people and wanted to show it, and.. well, I personally don’t think you should write that way ever, but especially not for this forum.
With an opening like
The idea that liberal elites are disconnected from reality has been a major theme of post-election reflections. Nowhere is this more obvious than in academia, where Trump’s victory resulted in completely hysterical reactions.
it’s clear that this is written for people who already believe these things. The rest, unsurprisingly, confirms that. I thought LW tried to avoid politics? And, especially, pointless politically-motivated negativity. “Liberal-bashing” isn’t very interesting, and I don’t think there’s a point in linking it on this site. Unfortunately downvoting is still disabled here.
“I would like to remind people of some basic facts, which hopefully will bring them back to reality, although it probably won’t.”
Either the author is writing for people who agree with them, in which case, petty jabs are just signaling, or they’re trying to convince people to agree with them, in which case petty jabs make them less convincing, not more.
Also, the author should probably give a source for the claim that a “wave of hate crimes” was unleashed. I personally haven’t heard that said anywhere, in real life or online. “Almost everyone on Facebook was apparently convinced that buckets of mostly unverified anecdotes” is useless. Sure, it’s okay to write about a phenomenon one personally observes but can’t put numbers on—but when the argument is “look how stupid these people are for a thing they did”, it’s important that everyone agrees they actually did it. Otherwise we can just invent actions for groups we don’t like and then start taking shots at them.
It doesn’t count in discussions of graph coloring, such as the four color map theorem, and that’s the kind of math this is most similar to. So you really need to specify.
Are you just wondering what ‘pushing’ means in this context? Or speculating about the existence of anti-gravity?
I’m pretty sure that this is just interpreting a region of low density as ‘pushing’ because it ‘pulls less’ than a region of average density would.
This is similar to how electron ‘holes’ in a metal’s atomic lattice can be treated as positive particles.
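To spell that out a little (my sketch of the standard Newtonian-perturbation picture, not something from the parent comments): split the density into a mean plus a perturbation, and only the perturbation sources the peculiar gravitational field:

\[
\rho(\mathbf{x}) = \bar{\rho} + \delta\rho(\mathbf{x}), \qquad \nabla^2 \phi = 4\pi G\, \delta\rho .
\]

In an underdense region \(\delta\rho < 0\), so the region sources gravity like a lump of negative mass: test particles accelerate away from it, which reads as ‘pushing’ even though every actual mass element only pulls.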
Don’t you think there’s some value in doing a more controlled study of it?
No, because it’s not a possibility that when you thought you were doing math in the reals this whole time, you were actually doing math in the surreals. Using a system other than the normal one would need to be stated explicitly.
You had written
“I really want a group of people that I can trust to be truth seeking and also truth saying. LW had an emphasis for that and rationalists seem to be slipping away from it with “rationality is about winning”.”
And I’m saying that LW is about rationality, and rationality is how you optimally do things; truth-seeking is a side effect. The truth-seeking culture you like in the rationality community exists because a “community about rationality” is naturally compelled to participate in truth-seeking, since it’s useful and interesting to rationalists. But truth-seeking isn’t inherently what rationality is.
Rationality is conceptually related to fitness. That is, “making optimal plays” should be equivalent to maximizing fitness within one’s physical parameters. More rational creatures are going to be more fit than less rational ones, assuming no other tradeoffs.
It’s irrelevant that creatures survive without being rational; evolution is a statistical phenomenon, and that fact doesn’t bear on this. If they were more rational, they’d survive better. Hence rationality is related to fitness with all physical variables held the same. If it cost them resources to be more rational, maybe they wouldn’t survive better, but that wouldn’t be keeping the physical variables the same, so it’s not interesting to point that out.
If you took any organism on earth and replaced its brain with a perfectly rational circuit that used exactly the same resources, it would, I imagine, clobber other organisms of its type in ‘fitness’ by so incredibly much that it would dominate its carbon-brained equivalent to the point of extinction in two generations or less.
I didn’t know what “shared art” meant in the initial post, and I still don’t.
Interleaving isn’t really the right way of getting consistent results for summations. Formal methods like Cesàro summation are the better way of doing things, and give the result 1/2 for that series. There’s a pretty good overview in the Wikipedia article on summing 1 − 2 + 3 − 4 + ⋯.
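As a concrete illustration (my own sketch, using Grandi’s series 1 − 1 + 1 − 1 + ⋯, the classic case where Cesàro summation yields 1/2): the Cesàro sum is defined as the limit of the running averages of the partial sums, and those averages converge even though the partial sums themselves just oscillate.

```python
# Cesàro summation: instead of asking whether the partial sums s_n
# converge, ask whether their running averages (s_1 + ... + s_n) / n do.
# For Grandi's series 1 - 1 + 1 - 1 + ... the partial sums oscillate
# between 1 and 0 forever, but their averages converge to 1/2.

def cesaro_means(terms):
    """Yield the running average of the partial sums of `terms`."""
    partial_sum = 0
    sum_of_partials = 0
    for n, term in enumerate(terms, start=1):
        partial_sum += term              # s_n
        sum_of_partials += partial_sum   # s_1 + ... + s_n
        yield sum_of_partials / n

grandi = [(-1) ** n for n in range(10_000)]  # 1, -1, 1, -1, ...
print(list(cesaro_means(grandi))[-1])        # ~0.5, the Cesàro sum
```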
I know about Cesàro and Abel summation and vaguely understand analytic continuation and regularization techniques for deriving results from divergent series. And.. I strongly disagree with that last sentence. As, well, explained in this post, I think statements like “1+2+3+...=-1/12” are criminally deceptive.
Valid statements that eliminate the confusion are things like “1+2+3+...=-1/12+O(infinity)”, or “analytic_continuation(1+2+3+...)=-1/12“, or “1#2#3#...=-1/12”, where # is a different operation that implies “addition with analytic continuation”, or “1+2+3+... # −1/12”, where # is like = but implies analytic continuation. Or, for other series, “1-2+3-4+... # 1/4”, where # means “equality with Abel summation”.
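For concreteness, the Abel-summation claim in that last example works like this (standard derivation, stated in my notation): multiply the terms by \(x^{n-1}\), sum the now-convergent series, then let \(x \to 1^{-}\):

\[
\sum_{n=1}^{\infty} (-1)^{n-1} n\, x^{n-1} = \frac{1}{(1+x)^2} \quad (|x| < 1), \qquad \lim_{x \to 1^{-}} \frac{1}{(1+x)^2} = \frac{1}{4},
\]

which is exactly the sense in which “1-2+3-4+... # 1/4” holds.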
The massive abuse of notation in “1+2+3..=-1/12” combined with mathematicians telling the public “oh yeah isn’t that crazy but it’s totally true” basically amounts to gaslighting everyone about what arithmetic does and should be strongly discouraged.
Well. We should probably distinguish between what rationality is about and what LW/rationalist communities are about. Rationality-the-mental-art is, I think, about “making optimal plays” at whatever you’re doing, which leads to winning (I prefer the former because it avoids the problem where you might only win probabilistically, which may mean you never actually win). But the community is definitely not based around “we’re each trying to win on our own and maximize our own utility functions” or anything like that. The community is interested in truth seeking and exploring rationality and how to apply it and all of those things.
Evolution doesn’t really apply. If some species could choose the way they want to evolve rationally over millions of years I expect they would clobber the competition at any goal they seek to achieve. Evolution is a big probabilistic lottery with no individuals playing it.
If you’re trying to win at “achieve X”, and lying is the best way to do that, then you can lie. If you’re trying to win at “achieve X while remaining morally upright, including not lying”, or whatever, then you don’t lie. Choosing to lie or not parameterizes the game. In either game, there’s a “best way to play”, and rationality is the technique for finding it.
Of course it’s true that model-building may not be the highest-return activity toward a particular goal. If you’re trying to make as much money as possible, you’ll probably benefit much more from starting a business, re-investing the profits asap, and ignoring rationality entirely. But doing so with rational approaches will still beat doing so without rational approaches. If you don’t have any particular goal, or you’re just generally trying to learn how to win more at things, or be generally more efficacious, then learning rationality abstractly is a good way to proceed.
“Spending time to learn rationality” certainly isn’t the best play towards most goals, but it appears to be a good one if you get high returns from it or if you have many long-term goals you don’t know how to work on. (That’s my feeling at least. I could be wrong, and someone who’s better at finding good strategies will end up doing better than me.)
In summary, “rationality is about winning” means if you’re put in situations where you have goals, rational approaches tend to win. Statistically. Like, it might take a long time before rational approaches win. There might not be enough time for it to happen. It’s the “asymptotic behavior”.
An example: if everyone cared a lot about chess, and your goal was to be the best at chess, you could get a lot of the way by playing a lot of chess. But if someone comes along who has also played a lot of games, they might start beating you. So you work to beat them, and they work to beat you, and you start training. Who eventually wins? Of course there are mental faculties, memory capabilities, maybe patience, and emotional things at work. But the idea is, you’ll become the best player you can become (theoretically, given enough time) via being maximally rational. If there are other techniques that are better than rationality, well, rationality will eventually find them—the whole point is that finding the best techniques is precisely rational. It doesn’t mean you will win; there are cosmic forces against you. It means you’re optimizing your ability to win.
It’s analogous to how, if a religion managed to find actually convincing proof of a divine force in the universe, that force would immediately become the domain of science. There are no observable phenomena that aren’t the domain of science. So the only things that can be religious are things you can’t possibly prove occur. Equivalently, it’s always rational to use the best strategy. So if you found a new strategy, that would become the rationalists’ choice too. So the rationalist will do at least as well as you, and if you’re not jumping to better strategies when they come along, the rationalist will win. (On average, over all time.)
Interesting, I’ve never looked closely at these infinitely-long numbers before.
In the first example, it looks like you’ve described the infinite series 9(1+10+10^2+10^3+...), which, if you ignore radii of convergence, is 9*1/(1-x) evaluated at x=10, giving 9/-9=-1. I assume without checking that this is what Cesàro or Abel summation of that series would give (which is the technical way to get to 1+2+3+4+...=-1/12, though I still reject that that’s a fair use of the symbols ‘+’ and ‘=’ without qualification).
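If it helps to make that concrete, here’s a small sketch (mine, under the usual 10-adic reading of these infinitely-long numbers): the k-digit truncation of ...999 is 10^k − 1, and adding 1 to it produces a number ending in k zeros. In the 10-adic metric, where numbers are close when they agree on many trailing digits, that sequence converges to 0, so ...999 + 1 = 0, i.e. ...999 behaves like −1.

```python
# Truncations of ...999 to k digits are 10**k - 1. Adding 1 gives 10**k,
# which agrees with 0 on its last k digits. As k grows, the result gets
# 10-adically closer and closer to 0, so ...999 acts like -1.
for k in (1, 5, 20):
    nines = 10 ** k - 1              # 9, 99999, 99999999999999999999
    assert (nines + 1) % 10 ** k == 0
    print(f"{nines} + 1 = {nines + 1}")
```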
Re the second part: interesting. Nothing is immediately coming to mind.
Fixed the typo. Also changed the argument there entirely: I think that the easy reason to assume we’re talking about real numbers instead of rationals is just that that’s the default when doing math, not because 0.999… looks like a real number due to the decimal representation. Skips the problem entirely.
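For reference, the standard derivation behind 0.999... = 1 is the geometric series:

\[
0.\overline{9} = \sum_{n=1}^{\infty} \frac{9}{10^{n}} = 9 \cdot \frac{1/10}{1 - 1/10} = 1 .
\]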
Well—I’m still getting the impression that you’re misunderstanding the point of the virtues, so I’m not sure I agree that we’re talking past each other. The virtues, as I read them, are describing characteristics of rational thought. It is not required that rational thinkers appear to behave rationally to others, or act according to the virtues, at all. Lying very well may be a good, or the best, play in a social situation.
Appearing rational may be a good play. Demonstrating rationality can cause people to trust you and your ability to make good decisions not swayed by whim, bias, or influence. But there are other effective social strategies (politicians, for instance, tend to get by much more on rhetoric than reasoning).
So if you’re talking about ‘rationality as winning’, these virtues are characteristics of the mental program you run to win better. They may or may not correlate with how rational you appear to others. If you’re trying to find ways to appear more rational, then certainly, look at the virtues as a list of “things to display”. But if you’re trying to behave rationally, ignore the display part and focus on how you reason when confronted with optimization problems in your life (in which ‘winning’ is ‘playing optimally’, or more optimally than you otherwise would).
They’re all sort of intrinsic to good reasoning, too, though in the Yudkowsky post this is heavily concealed under grandiloquent language.
I guess I’d put it like this: consider the simplistic model where human minds consist of:
a bundle of imperfect computing machinery that makes errors, is swayed by biases and emotional response, etc, and
a bundle of models for looking at the world that you get to make your decisions according to, of which several seem reasonable and all can change over time
And you’re tasked with trying to optimize the way you play in real-world situations. Well, the best players are the ones who optimize their computing machinery, use their machinery to correctly parse and sift through models, and accurately update their models to reflect the way the world appears to (probably) work, because they will over time come to the best determinations of plays in reality and thus, probabilistically, ‘win’.
So optimizing your imperfect computing machinery is “perfectionism”, “precision”, and “scholarship”, plus the unnamed one, which I would dub “playing to win” (letting winning be your metric, not your own reasoning). Correctly managing your models (according to, like, information theory, which should be optimal) is “lightness”, “relinquishment”, “evenness”, “empiricism”, and “simplicity”. And then, knowing that you don’t have all the models and that your models may yet change, and wanting to play optimally anyway, you compute that you need to actively seek new models (‘curiosity’), play as though your data may still be wrong (‘humility’), and let the world challenge your models and give you data that your observations do not (‘argument’).
And the idea is that these abilities are intrinsic to winning, if this is a good approximation of humans (which I think it is). So they describe virtues of how humans manage this game of imperfect machinery and models, which may only correlate with external behavior.
Ah, of course, my mistake. I was trying to hand-wave an argument that we should be looking at reals instead of rationals (which isn’t inherently true once you already know that 0.999...=1, but seems like it should be before you’ve determined that). I foolishly didn’t think twice about what I had written to see if it made sense.
I still think it’s true that “0.999...” compels you to look at the definition of real numbers, not rationals. Just need to figure out a plausible sounding justification for that.
I don’t think this is quite right. In my experience, the sensation that someone is higher status than me induces a desperate desire to be validated by them, abstractly. It’s not the same as ‘gratitude’ or anything like that; it’s the desire to associate with them in order to acquire a specific pleasurable sensation—one of group membership, acceptance, and worth.