New York City
Solstice: December 14 [fb]
Megameetup: December 13-16 [fb]
Both: Sheraton Brooklyn New York Hotel
Signup: https://rationalistmegameetup.com/
Law of Extremity does some weirder stuff...
Consider a gaussian trait where high levels are stigmatized. From a careless observer’s perspective, +1s will be rare (somewhat hiding), +2s extremely rare (thoroughly hiding), +3s nonexistent (hiding very hard *and* rare to begin with), but +4s unable to hide and in fact talked about incessantly on the news / clickbait / juicy rumor mill. The result looks like two populations: a large left-skewed one and an entirely distinct but much smaller one. The trait at [-3,+1] and at +4 may not even look qualitatively similar! So our observer uses a categorical model.
It’s the wrong model. The trait is gaussian. That’s the scenario.
So we have a category for people who are high in our trait, with neither a natural place nor a social consensus on where to draw the border line. This will be bad for all discourse on the subject.
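To make the scenario concrete, here’s a minimal simulation sketch (Python; the visibility probabilities are numbers I made up purely for illustration). The underlying trait is a single gaussian, but the careless observer’s sample looks like one big lump below +1 plus a tiny, apparently separate cluster out past +4:

```python
import numpy as np

rng = np.random.default_rng(0)

# True population: a plain gaussian trait.
trait = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

# Made-up visibility curve: the more stigmatized the level, the harder
# people hide -- until, at the extreme, hiding becomes impossible.
p_visible = np.select(
    [trait < 1, trait < 2, trait < 3, trait < 4],
    [1.00,      0.30,      0.05,      0.01],
    default=1.00,  # +4s can't hide at all
)
seen = trait[rng.random(trait.size) < p_visible]

# What the observer's histogram looks like: a large left-skewed lump
# and a tiny, seemingly disconnected cluster past +4.
counts, edges = np.histogram(seen, bins=np.arange(-4.0, 5.5, 0.5))
for lo, c in zip(edges[:-1], counts):
    bar = "#" * max(1, int(c) // 5000) if c else ""
    print(f"[{lo:+.1f}, {lo + 0.5:+.1f})  {bar} {c}")
```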
San Francisco Bay
Dec 16
Registration: https://www.eventbrite.com/e/bay-area-winter-solstice-2023-tickets-721678057497
FB Event: https://fb.me/1QOVAoRFRHROxgK
New York City
Dec 9 with Megameetup Dec 8-11
Combined Registration: https://rationalistmegameetup.com/
Solstice FB Event: https://www.facebook.com/events/759149316041783
Megameetup FB Event: https://www.facebook.com/events/333410062694327
I object to the term “non-magical”. Bribes and intimidation are not magic.
The most obvious conspiracy, what I would consider the null hypothesis, involves one rich influential person who was worried about getting ratted on, one professional hitman, and one or two jailers who were willing to take bribes. And from the perspective of a jailer being offered a bribe, with vague threats if he refuses, someone who has gone yachting with a bunch of highly placed politicians is scary even if the jailer can’t fill in the details of the threat.
None of that is magic. None of it involves a vast web. None of it requires extraordinary sophistication. None of it requires implausible levels of loyalty to an org chart (unless you’re going to argue that the very existence of hitmen is implausible).
Somehow, whenever I hear the phrase “conspiracy theory”, out come the strawmen.
1.5 The officer within the CIA who investigated Epstein knew, but he got promoted based on how many agents he had and how useful they were, so he kept quiet. Had he turned Epstein in, he’d have gotten some kudos for that, but it wouldn’t have been as good a career move. Had he reported up the chain, his commanding officer might have decided to sacrifice the original officer’s career for greater justice, so he didn’t do that either. Whoever set up this incentive system didn’t anticipate this particular scenario.
This is the thing about conspiracy theories: they usually don’t require very much actual conspiring.
I suspect this is a lack of flexibility in Stockfish. It was designed (trained?) for normal equal-forces chess and can’t step back to think “How do I best work around this disadvantage I’ve been given?” I suspect something like AlphaZero, given time to play itself at a disadvantage, would do better. As would a true AGI.
That is literally true. The old HPMOR site was just there to host the book as cleanly as possible. LessWrong is a discussion forum with a lot of functionality. You can host a book on a discussion forum, but it’ll never be as smooth.
I propose that “I don’t know” between fully co-operative rationalists is shorthand for “my knowledge is so weak that I expect you would find negative value in listening to it.” Note that this means whether I say “I don’t know” depends in part on my model of you.
For example, if someone who rarely dabbles in medicine asks me if I think a cure works, and I’ve only skimmed the paper that proposes it, I might well explain how low the prior is and how shaky this sort of research tends to be. If an expert asked me the same question, I’d say “I don’t know” because they already know all that and are asking if I have any unique insight, which I don’t.
Similarly, if someone asks how much a box weighs, and I’m 95% confident it’s between 10 and 50 pounds, I’ll say “I don’t know”, because that range is too wide to be useful for most purposes. But if they follow up with “I’m thinking of shipping it FedEx, which has a 70-pound maximum”, then I can answer “I’m 95% confident it’s less than 70 pounds.” Though if they also say that if the shipment doesn’t go smoothly the mafia will kill them, my answer is “the scale’s in the bathroom”, because now 95% confidence isn’t good enough.
This does mean that “I don’t know” is a valid answer if my knowledge is so uncompressible that it cannot be transmitted within your patience. I don’t have a good example for this, but I don’t see it as a problem.
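As a toy sketch of that decision rule (the interface here is something I’ve invented just to make the point, not a serious procedure anyone follows):

```python
def respond(interval, my_confidence, threshold=None, needed_confidence=0.95):
    """Decide whether 'I don't know' is the most useful answer.

    interval:          my confidence interval for the quantity asked about
    my_confidence:     how confident I am in that interval
    threshold:         the asker's decision boundary (e.g. FedEx's 70 lb
                       limit), or None if I don't know what they'd do
                       with the number
    needed_confidence: how sure the asker needs to be (higher if the
                       mafia is involved)
    """
    low, high = interval
    if threshold is None:
        # No idea what decision this feeds; a wide interval is probably
        # negative value to listen to.
        return "I don't know"
    if my_confidence < needed_confidence:
        # The asker needs more certainty than I have: go measure.
        return "The scale's in the bathroom."
    if high < threshold:
        return f"I'm {my_confidence:.0%} confident it's less than {threshold} pounds."
    if low > threshold:
        return f"I'm {my_confidence:.0%} confident it's more than {threshold} pounds."
    return "I don't know"  # my interval straddles their threshold

print(respond((10, 50), 0.95))                       # "I don't know"
print(respond((10, 50), 0.95, threshold=70))         # less than 70 pounds
print(respond((10, 50), 0.95, threshold=70,
              needed_confidence=0.999))               # go weigh it
```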
New York / East Coast
Solstice:
December 10th, 6:15pm
Bruno Walter Auditorium, 111 Amsterdam Ave (between 64th and 65th streets, near the Lincoln Center stop on the 1 train)
Registration: https://forms.gle/fAFLWFCLm1pS1Hra7
Facebook Event: https://facebook.com/events/557544469714744
MegaMeetup:
December 9-12
Hoboken
Registration: https://rationalistmegameetup.com/
Facebook Event: https://www.facebook.com/events/1468622393619899
How big of a subunit were you able to get? Last I looked at mail-order DNA, the affordable stuff was only a few hundred bases.
It is not clear to me what point you’re making with your examples. Have you written an object-level analysis of a failed LW conversation? I realize that doing that in the straightforward way would antagonize a lot of people, and I recognize that might not be worth it, but maybe there’s some clever workaround? Perhaps you could create a role account for your dark side, post the sort of things you think are welcomed here but shouldn’t be, confirm empirically that they are, then write a condemnation of those?
Less of a constraint if matters are arranged such that living in NYC is practical. Expensive, of course, but no worse than the Bay. It’s a long-ish commute, but not too terrible by mostly-empty train (the full trains will be running the opposite direction). Easier still if WFH a few days a week is supported.
This seems like a very confused way of thinking about earthquakes.
In the past month, there were 4 earthquakes associated with the Juan de Fuca subduction zone. All were around Richter 2.5 and no one cared.
While I suppose it’s possible for a fault to produce small and large earthquakes both more often than in-between ones, this strikes me as rather unlikely. Generally an analysis of earthquake risk should begin by deciding what magnitude earthquakes to care about, and then calculate probabilities.
(When we say that the Seattle area is particularly at-risk, that’s because architecture standards there contain very little earthquake-resilience. Which may not be relevant here. The actual fault line is among the less active on the west coast of North America.)
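For what “pick a magnitude, then calculate probabilities” can look like, here’s a toy sketch using the standard Gutenberg-Richter relation; the a and b parameters are invented for illustration and are not fitted to this fault:

```python
import math

def annual_rate_at_least(magnitude, a=3.0, b=1.0):
    """Expected quakes per year at or above `magnitude`, via the
    Gutenberg-Richter relation log10(N) = a - b*M.
    (a and b are illustrative placeholders, not real fault parameters.)"""
    return 10 ** (a - b * magnitude)

def prob_at_least_one(magnitude, years=1, **kw):
    """Probability of at least one such quake in `years`,
    assuming a Poisson process."""
    rate = annual_rate_at_least(magnitude, **kw) * years
    return 1 - math.exp(-rate)

# Small quakes are frequent and ignorable; the ones worth planning for are rare.
for m in (2.5, 5.0, 7.0, 9.0):
    print(f"M>={m}: {annual_rate_at_least(m):.4g}/yr, "
          f"P(>=1 in 50 yr) = {prob_at_least_one(m, years=50):.3f}")
```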
I can more easily imagine worlds where some MIRI staff lived and worked in NYC itself, though I think MIRI’s first-pass goal would be to have as many staff as possible working in the Peekskill area.
You may be underestimating the mental health benefits of being immersed in a larger community. If you apply the “Comfort In, Dump Out” model of emotional support to the stress of working at MIRI, having strong relationships with people with less stressful lives is really important. If MIRIans are living in a little bubble with no one to dump on but each other, stress just builds.
Seeing as MIRIans will be working outside the city and having fun inside it (regardless of where they live), they won’t be traveling with the rush.
Has a LessWrong Event too
(TIL LessWrong Events are a thing)