Jacob Falkovich
Writes Putanumonit.com and SecondPerson.dating. @yashkaf on Twitter.
I feel like I don’t have a good sense of what China is trying to do by locking down millions of people for weeks at a time and how they’re modeling this. Some possibilities:
They’re just looking to keep a lid on things until the Chinese New Year (2/1) and the Olympics (2/4–2/20), at which point they’ll relax restrictions and just try to flatten the top in each city.
They legit think they’re going to keep omicron contained forever (or until omicron-targeting vaccines?) and will lock down hard wherever it pops out.
Everyone knows they’re merely delaying the inevitable, but “zero COVID” is now the official party line and no one at any level of governance can ever admit that, so by the spring they’re likely to have both draconian lockdowns and exponential omicron.
Ironically, if the original SARS-COV-2 looked like a bioweapon targeted at the west (which wasn’t disciplined enough about lockdowns), omicron really looks like a bioweapon targeted at China (which is too disciplined about even hopeless lockdowns).
I was in a few long-term relationships in my early twenties, when I myself wasn’t mature/aware enough for selfless dating. Then, after a 4-year relationship based on very explicit rules had ended, I went on about 25 first dates in the space of about a year before meeting my wife. Basically all of those 25 didn’t work out because of a lack of mutual interest, not because we both tried to make it a long-term thing but failed to hunt stag.
If I were single today, I would date not through OkCupid as I did back in 2014 but through the intellectual communities I’m now part of. And with the sort of women I would like to date in these communities, I would certainly talk about things like selfless dating (and dating philosophy in general) on a first date. Of course, I am unusually blessed in the communities I’m part of (including Rationality).
A lot of my evidence comes from hearing other people’s stories, both positive and negative. I’ve been writing fairly popular posts on dating for half a decade now, and dozens of people, from close friends to anonymous online strangers, have shared their dating stories and struggles with me. For people who seem generally in a good place to go in the selfless direction, the main pitfalls seem to be insecurity spirals and forgetting to communicate.
The former is when people are unable to give their partner the benefit of the doubt on a transgression, which usually stems from their own insecurity. Then they act more selfishly themselves, which causes the partner to be more selfish in turn, and the whole thing spirals.
The latter is when people who hit a good spot stop talking about their wants and needs. As those change they end up with a stale model of each other. Then they inevitably end up making bad decisions and don’t understand why their idyll is deteriorating.
To address your general tone: I am lucky in my dating life, and my post (as I wrote in the OP itself) doesn’t by itself constitute enough evidence for an outside-view update that selfless relationships are better. If this speaks to you intuitively, hopefully this post is an inspiration. If it doesn’t, hopefully it at least informs you of an alternative. But my goal isn’t to prove anything to a rationalist standard, in part because I think this way of thinking is not really helpful in the realm of dating where every person’s journey must be unique.
As a note, I’ve spoken many times about the importance of having empathy for romanceless men because they’re a common punching bag and have written about incel culture specifically. The fact that the absolute worst and most aggravating commenters on my blog identify as incels doesn’t make me anti-incel, it just makes me anti those commenters.
I should’ve written “capitulated to consumerism” but “capitulate to capital” just sounds really cool if you say it out loud.
“Bitcoin” comes from the old Hebrew “Beit Cohen”, meaning “house of the priest” or “temple”. Jesus cleansed the temple in Jerusalem by driving out the money changers. The implications of this for Bitcoin vis-à-vis interchangeable fiat currencies are obvious and need no elaboration.
The full text of John 2 proves this connection beyond any doubt. “*He scattered the changers’ coins and overthrew their tables*” (John 2:15) refers to overthrowing the database tables of the centralized ledger and “scattering” the record of transactions among decentralized nodes on the blockchain.
“*Many people saw the signs he was performing and believed in his name. But Jesus would not entrust himself to them.*” (John 2:23-24) couldn’t be any clearer as to the identity of Satoshi Nakamoto.
And finally, “*He did not need any testimony about mankind, for he knew what was in each person.*” (John 2:25) explains that there’s no need for the testimony of any trusted counterparty when you can see what’s in each person’s submitted block of Bitcoin transactions.
And if you think of shorting Bitcoin, remember John 2:19: “*Jesus answered them, ‘Destroy this temple, and I will raise it again in three days.’*”
The ordering is based on measures of neural correlates of the level of consciousness, like neural entropy or perturbational complexity, not on how groovy it subjectively feels.
Copying from my Twitter response to Eliezer:
Anil Seth usefully breaks down consciousness into 3 main components:
1. level of consciousness (anesthesia < deep sleep < awake < psychedelic)
2. contents of consciousness (qualia — external, interoceptive, and mental)
3. consciousness of the self, which can further be broken down into components like feeling ownership of a body, narrative self, and a 1st person perspective.
He shows how each of these can be quite independent. For example, the selfhood of body-ownership can be fucked with using rubber arms and mirrors; the narrative self breaks with amnesia; 1st-person perspective breaks in out-of-body experiences, which can be induced in VR; even the core feeling of the reality of the self can be meditated away.
Qualia such as pain are also very contextual: the same physical sensation can be interpreted positively in the gym or a BDSM dungeon, and as acute suffering if it’s unexpected and believed to be caused by injury. Being a self, or thinking about yourself, is also just another perception — a product of your brain’s generative model of reality — like color or pain are. I believe enlightened monks who say they experience selfless bliss, and I think it’s equally likely that chickens experience selfless pain.
Eliezer seems to believe that self-reflection or some other component of selfhood is necessary for the existence of the qualia of pain or suffering. A lot of people believe this simply because they use the word “consciousness” to refer to both (and 40 other things besides). I don’t know if Eliezer is making such a basic mistake, but I’m not sure why else he would believe that selfhood is necessary for suffering.
The “generalist” description is basically my dream job right until
> The team is in Berkeley, California, and team members must be here full-time.
Just yesterday I was talking to a friend who wants to leave his finance job to work on AI safety and one of his main hesitations is that whichever organization he joins will require him to move to the Bay. It’s one thing to leave a job, it’s another to leave a city and a community (and a working partner, and a house, and a family...)
This also seems somewhat inefficient in terms of hiring. There are many qualified AI safety researchers and Lightcone-aligned generalists in the Bay, but there are surely even more outside it. So all the Bay-based orgs are competing for the same people, all complaining about being talent-constrained above anything else. At the same time, NYC, Austin, Seattle, London, etc. are full of qualified people with nowhere to apply.
I’m actually not suggesting you should open this particular job to non-Berkeley people. I want to suggest something even more ambitious. NYC and other cities are crying out for a salary-paying organization that will do mission-aligned work and would allow people to change careers into this area without uprooting their entire lives, potentially moving on to other EA organizations later. Given that a big part of Lightcone’s mission is community building, having someone start a non-Bay office could be a huge contribution that will benefit the entire EA/Rationality ecosystem by channeling a lot of qualified people into it.
And if you decide to go that route you’ll probably need a generalist who knows people...
I’m not sure what’s wrong, it works for me. Maybe change the https to http?
https://quillette.com/2021/05/13/the-sex-negative-society/
Googling “sex negative society quillette” should bring it up in any case.
> rationality is not merely a matter of divorcing yourself from mythology. Of course, doing so is necessary if we want to seek truth...
I think there’s a deep error here, one that’s also present in the sequences. Namely, the idea that “mythology mindset” is something one should or can just get rid of, a vestige of silly stories told by pre-enlightenment tribes in a mysterious world.
I think the human brain does “mythological thinking” all the time, and it serves an important individual function of infusing the world with value and meaning alongside the social function of binding a tribe together. Thinking that you can excise mythological thinking from your brain only blinds you to it. The paperclip maximizer is a mythos, and the work it does in your mind of giving shape and color to complex ideas about AGI is no different from the work Biblical stories do for religious people. “Let us for the purpose of thought experiment assume that in the land of Uz lived a man whose name was Job and he was righteous and upright...”
The key to rationality is recognizing this type of thinking in yourself and others as distinct from Bayesian thinking. It’s the latter that’s a rare skill that can be learned by some people in specialized dojos like LessWrong. When you really need to get the right answer to a reality-based question you can keep the mythological thinking from polluting the Bayesian calculation — if you’re trained at recognizing it and haven’t told yourself “I don’t believe in myths”.
PS5 scalpers redistribute consoles away from those willing to burn time to those willing to spend money. Normally this would be a positive — time burned is just lost, whereas the money is just transferred from Sony to the scalpers who wrote the quickest bot. However, you can argue that gaming consoles in particular are more valuable to people with a lot of spare time to burn than to people with day jobs and money!
Disclosure: I’m pretty libertarian and have a full-time job, but because there weren’t any good exclusives in the early months I decided to ignore the scalpers. I followed https://twitter.com/PS5StockAlerts and got my console at base price in April, just in time for Returnal. Returnal is excellent and worth getting the PS5 for even if it costs you a couple of hours or an extra $100.
Empire State of Mind
I want to second Daniel and Zvi’s recommendation of New York culture as an advantage for Peekskill. An hour away from NYC is not so different from being in NYC — I’m in a pretty central part of Brooklyn and regularly commute an hour to visit friends uptown or further east in BK and Queens. An hour in traffic sucks, an hour on the train is pleasant. And being in NYC is great.
A lot of the Rationalist-adjacent friends I made online in 2020 have either moved to NYC in the last couple of months or are thinking about it, as rents have dropped up to 20% in some neighborhoods and everyone is eager to rekindle their social life. New York is also a vastly better dating market for male nerds given a slightly female-majority sex ratio and thousands of the smartest and coolest women on the planet as compared to the male-skewed and smaller Bay Area.
Peekskill is also 2 hours from Philly and 3 from Boston, which is not too much for a weekend trip. That could make it the Schelling point for East Coast megameetups/conferences/workshops since it’s as easy to get to as NYC and a lot cheaper to rent a giant AirBnB in.
Won’t Someone Think of the Children
I love living in Brooklyn, but the one thing that could make us move in the next year or two is a community of my tribe willing to help each other with childcare, from casual babysitting to homeschooling pods. I’m keenly following the news of where Rationalist groups are settling, especially those who plan to have kids (like us) or already do. A critical mass of Rationalist parents in Peekskill may be enticing enough for us to move there, since we could have the combined benefits of living space, proximity to NYC, and the community support we would love.
I don’t think that nudgers are consequentialists who also try to accurately account for public psychology. I think 99% of the time they are doing something for non-consequentialist reasons, and using public psychology as a rationalization. Ezra Klein pretty explicitly cares about advancing various political factions above mere policy outcomes, IIRC on a recent 80,000 Hours podcast Rob was trying to talk about outcomes and Klein ignored him to say that it’s bad politics.
I understand, I think we have an honest disagreement here. I’m not saying that the media is cringe in an attempt to make it so, as a meta move. I honestly think that the current prestige media establishment is beyond reform, a pure appendage of power. Its impact can grow weaker or stronger, but it will not acquire honesty as a goal (and in fact, it seems to be giving up even on credibility).
In any case, this disagreement is beyond the scope of your essay. What I learn from it is to be more careful of calling things cringe or whatever in my own speech, and to see this sort of thing as an attack on the social reality plane rather than an honest report of objective reality.
Other people have commented here that journalism is in the business of entertainment, or in the business of generating clicks etc. I think that’s wrong. Journalism is in the business of establishing the narrative of social reality. Deciding what’s a gaffe and who’s winning, who’s “controversial” and who’s “respected”, is not a distraction from what they do. It’s the main thing.
So it’s weird to frame this as “politics is way too meta”. Too meta for whom? Politicians care about being elected, so everything they say is by default simulacrum level 3 and up. Journalists care about controlling the narrative, so everything they say is by default simulacrum level 3 and up. They didn’t aim at level 1 and miss; they only brush against level 1 on rare occasion, by accident.
Here are some quotes from our favorite NY Times article, Silicon Valley’s Safe Space:
- “the right to discuss **contentious** issues”
- “The ideas they exchanged were often **controversial**”
- “even when those words were **untrue or could lead to violence**”
- “sometimes spew **hateful speech**”
- “step outside **acceptable** topics”
- “turned off by the more **rigid and contrarian** beliefs”
- “his influential, and **controversial**, writings”
- “push people toward **toxic** beliefs”
These aren’t accidental. Each one of the bolded words just means “I think this is bad, and you better follow me”. They’re the entire point of the article — to make it so that it’s social reality to think that Scott is bad.
So I think there are two takeaways here. One is for people like us, EAs discussing charity impact or Rationalists discussing life-optimization hacks. The takeaway for us is to spend less time writing about the meta and more about the object level. And then there’s a takeaway about them, journalists and politicians and everyone else who lives entirely in social reality. And the takeaway is to understand that almost nothing they say is about objective reality, and that’s unlikely to change.
I agree that advertising revenue is not an immediate driving force, something like “justifying the use of power by those in power” is much closer to it and advertising revenue flows downstream from that (because those who are attracted to power read the Times).
I loved the rest of Viliam’s comment, though; it’s very well written, and the idea of the eigen-opinion and being constrained by the size of your audience is very interesting.
Here’s my best model of the current GameStop situation, after nerding out about it for two hours with smart friends. If you’re enjoying the story as a class-warfare morality play you can skip this, since I’ll mostly be talking finance. I may look really dumb or really insightful in the next few days, but this is a puzzle I wanted to figure out. I’m making this public so posterity can judge my epistemic rationality skillz — I don’t have a real financial stake either way.
Summary: The longs are playing the short game, the shorts are playing the long game.
At $300, GameStop is worth about $21B. A month ago it was worth $1B, so there’s $20B at stake between the long-holders and short sellers.
Who’s long right now? Some combination of WSBers on a mission, FOMOists looking for a quick buck, and institutional money (i.e., other hedge funds). The WSBers don’t know fear, only rage and loss aversion. A YOLOer who bought at $200 will never sell at $190, only at $1 or the moon. FOMOists will panic but they’re probably a majority and today’s move shook them off. The hedgies care more about risk, they may hedge with put options or trust that they’ll dump the stock faster than the retail traders if the line breaks.
The interesting question is who’s short. Shorts can probably expect to need margin equal to roughly twice the current share price, so anyone who shorted too early or with 50% of their bankroll (like Melvin and Citron) got squeezed out already. But if you shorted at $200 with 2% of your bankroll, you can hold for a long time. The current borrowing fee is 31% APR, or just ~0.1% a day (a toy carrying-cost sketch follows at the end of this analysis). I think most of the shorts are in the latter category; here’s why:
Short interest has stayed at 71M shares even as this week saw more than 500M shares change hands. I think this means that new shorts are happy to take the places of older shorts who cash out, they’re only constrained by the fact that ~71M are all that’s available to borrow. Naked shorts aren’t really a thing, forget about that. So everyone short $GME now is short because they want to be, if they wanted to get out they could. In a normal short squeeze the available float is constrained, but this hasn’t really happened with $GME.
WSBers can hold the line, but they can’t push higher without new money that would take some of these 71M shares out of borrowing circulation or push the price up so fast that the shorts get margin-called or panic. For the longs to win, they probably need something dramatic to happen soon.
One dramatic thing that could happen is that the people who sold the huge number of call options expiring Friday aren’t already hedged and will need to buy shares to deliver. It’s unclear if that’s realistic; most option sellers are market makers who don’t stay exposed for long. I don’t think there were options sold above the current price of $320, so there’s no gamma left to squeeze.
I think $GME getting taken off retail brokerages really hurt the WSBers. It didn’t cause panic, but it slowed the momentum they so dearly needed and scared away FOMOists. By the way, I don’t think brokers did it to screw with the small people; retail traders are their clients, after all. It just became too expensive for brokerages to make the trade because they need to post clearing collateral for two days. They were dumb not to anticipate this, but I don’t think they were bribed by Citadel or anything.
For the shorts to win they just need to wait it out and not get over-greedy. Eventually the longs will either get bored or turn on each other — with no squeeze this becomes just a pyramid scheme. If the shorts aren’t knocked out tomorrow morning by a huge flood of FOMO retail buys, I think they’ll win over the next few weeks.
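To make the carrying-cost point above concrete, here’s a toy sketch (the 31% APR is the figure quoted above; the bankroll and position sizes are made-up illustrative assumptions):

```python
# Toy carrying-cost model for a patient short position.
# The 31% APR borrow fee is the figure quoted above; everything
# else is an illustrative assumption, not market data.

borrow_apr = 0.31              # annualized borrow fee
daily_fee = borrow_apr / 365   # ~0.00085, i.e. roughly 0.1% per day
print(f"Daily fee: {daily_fee:.4%}")

position = 20_000              # short $20k: 2% of a hypothetical $1M bankroll
for days in (7, 30, 90):
    cost = position * daily_fee * days
    print(f"Holding {days:>3} days costs ~${cost:,.0f}")
```

At these numbers the fee is a rounding error relative to the bankroll, which is why a small, late short can simply wait.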
This is a self-review, looking back at the post after 13 months.
I have made a few edits to the post, including three major changes:
1. Sharpening my definition of what counts as “Rationalist self-improvement” to reduce confusion. This post is about improved epistemics leading to improved life outcomes, which I don’t want to conflate with some CFAR techniques that are basically therapy packaged for skeptical nerds.
2. Addressing Scott’s “counterargument from market efficiency” that we shouldn’t expect to invent easy self-improvement techniques that haven’t been tried.
3. Talking about selection bias, which was the major part missing from the original discussion. My 2020 post The Treacherous Path to Rationality is somewhat of a response to this one, concluding that we should expect Rationality to work mostly for those who self-select into it and that we’ll see limited returns to trying to teach it more broadly.
The past 13 months also provided more evidence of epistemic Rationality being ever more instrumentally useful. In 2020 I saw a few Rationalist friends fund successful startups and several friends cross the $100k mark in cryptocurrency earnings. And of course, LessWrong led the way on early and accurate analysis of most COVID-related things. One result of this has been increased visibility and legitimacy; another is that Rationalists have had far fewer COVID cases than any other community I know.
In general, this post is aimed at someone who discovered Rationality recently but is lacking the push to dive deep and start applying it to their actual life decisions. I think the main point still stands: if you’re Rationalist enough to think seriously about it, you should do it.
Trade off to a promising start :P
This is a useful clarification. I normally use “edge” to include both the difference in the probabilities of winning and losing and the different payout ratios. I think this usage is intuitive: if you’re betting at 5:1 odds on rolls of a six-sided die, no one would say they have a 66.7% “edge” in guessing that a particular number will NOT come up (which happens 5/6 of the time) — it’s clear that the payout ratio offsets the probability ratio.
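As a quick worked check of the die example (a sketch, taking the 5:1 odds to mean risking 5 units to win 1 on the bet that a specific number does not come up):

```python
from fractions import Fraction

p_win = Fraction(5, 6)   # a specific number does NOT come up
p_lose = Fraction(1, 6)

prob_edge = p_win - p_lose    # the naive "66.7% edge": 2/3
ev = p_win * 1 - p_lose * 5   # win 1 unit, risk 5 units at 5:1 odds

print(prob_edge)  # 2/3
print(ev)         # 0: the payout ratio exactly offsets the probability edge
```

The bet is exactly fair, so any definition of “edge” that accounts for payouts should come out to zero here.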
Anyway, I don’t want to clunk up the explanation so I just added a link to the precise formula on Wikipedia. If this essay gets selected on condition that I clarify the math, I’ll make whatever edits are needed.