This is addressed in the FAQ linked at the top of the page. TL;DR: The author insists that the gist of the story is true, but acknowledges that he glossed over a lot of intermediate debugging steps, including accounting for the return time.
Does that logic apply to crawlers that don’t try to post or vote, as in the public-opinion-research use case? The reason to block those is just that they drain your resources, so sophisticated measures to feed them fake data would be counterproductive.
I didn’t downvote (I’m just now seeing this for the first time), but the above comment left me confused about why you believe a number of things:
- What methodology do you think MIRI used to ascertain that the Time piece was impactful, and why do you think that methodology isn’t vulnerable to bots or other kinds of attacks?
- Why would social media platforms go to the trouble of feeding fake data to bots instead of just blocking them? What would they hope to gain thereby?
- What does any of this have to do with the Social Science One incident?
- In general, what’s your threat model? How are the intelligence agencies involved? What are they trying to do?
- Who are you even arguing with? Is there a particular group of EAsphere people who you think are doing public opinion research in a way that doesn’t make sense?
Also, I think a lot of us don’t take claims like “I’ve been researching this matter professionally for years” seriously because they’re too vaguely worded; you might want to be a bit more specific about what kind of work you’ve done.
For people in Boston, I made a straw poll to gauge community sentiment on this question: https://forms.gle/5BJEG5fJWTza14eL9
I assume this is referring to the ancient fable “The Ant and the Grasshopper”, which is about what we would today call time preference. In the original, the high-time-preference grasshopper starves because it didn’t spend the summer stockpiling food for winter, while the low-time-preference ant survives because it did. Of course, alternate interpretations have been common since then.
Boston
Saturday, December 17; doors open at 6:30, Solstice starts at 7:15
69 Morrison Ave., Somerville, MA 02144
RSVPs appreciated for planning purposes: https://www.facebook.com/events/3403227779922411
Let us know in advance if you need to park onsite (the venue is accessible by public transportation). Note that we’re up a flight of stairs.
As someone who was very unhappy with last year’s implementation and said so (though not in the public thread), I think this is an improvement and I’m happy to see it. In previous years, I didn’t get a code, but if I’d had one I would have very seriously considered using it; this year, I see no reason to do that.
I do think that, if real value gets destroyed as a result of this, then the ethical responsibility for that loss of value lies primarily with the LW team, and only secondarily with whoever actually pushed the button. So if the button got pushed and some other person were to say “whoever pushed the button destroyed a bunch of real value” then I wouldn’t necessarily quibble with that, but if the LW team said the same thing then I’d be annoyed.
So this wound up going poorly for me for various reasons. I ultimately ended up not doing the fast, and have been convinced that I’m not going to be able to in the future either, barring unanticipated changes in my mental-health situation. Other people are going to be in a different situation and that seems fine. But there are a couple community-level things that I feel ought to be expressed publicly somewhere, and this is where they’re apparently allowed, so:
First, it’s not a great situation if there are like three rationalist holidays and one of them is this dangerous/unhealthy for a substantial fraction of people (e.g., eating disorders, which appear to exist at a high rate in the ratsphere). As far as I can tell, nobody intended that outcome; the original Vavilov Day proposal was like 90% “individual thing to do for personal reasons”, 10% “new rationalist holiday”, and then commenters here and on social media seized on the 10% because we currently don’t have enough rationalist holidays and people are desperate for more. (This is why, e.g., the original suggestion that people propose alternative ways of honoring Vavilov didn’t get any traction; that wouldn’t have met the pent-up demand for more ritual as effectively, so there wasn’t interest.) But it meant that the choice was between “do something that’s maybe not at all a good idea for you” and “lose access to communal affirmation of shared values with no available substitute”. The idea here isn’t that there shouldn’t be anything this risky; it’s that something this risky should be one thing among many, and right now we aren’t there.
The counterpoint is that if we hold every new idea to a “good for the overall shape of the community” standard then defending ideas from critics becomes too unrewarding and we don’t get any new ideas at all. Bulldozer vs. vetocracy, except mediated by informal community attitudes rather than by any authority. This seems like a valid point to me and I don’t have any particularly helpful thoughts about how to navigate this tradeoff.
(It might have been possible to mitigate the tradeoff—assuming we wanted something like Vavilov Day to be a rationalist holiday at all, rather than an individual thing, which maybe we didn’t—by putting more overt focus on questions like “how should people decide whether this is good for them” and “how should people whom this isn’t good for relate to it”. But while these seem pretty non-costly to me, it might be the case that other people have different ideas for what non-costly precautions should be taken, and if you try to take all of them then it’s not non-costly anymore. Again, I don’t know.)
Second, I’ve heard from multiple sources that some people had concerns about the event but felt that they couldn’t express them in public. (You should take this claim with a grain of salt; not all of my knowledge here is firsthand, and even with respect to what is, since I’m not providing any details, you can’t trust that I haven’t omitted context that would lead you to a different conclusion if you knew it.) The resulting appearance of unanimity definitely left me feeling pretty unnerved and made it hard to tell whether I should participate. There are obvious reasons for people to refrain from public criticism—to the extent that it’s a personal thing, maybe we shouldn’t criticize people’s life choices, and to the extent that it’s a community thing, maybe we should err on the side of non-criticism in order to prevent chilling effects—and I don’t really have any useful thoughts about what to think or do about this. I’m not sure anyone should particularly do anything differently based on this information. But I’d feel remiss if I allowed it to just not exist in public at all.
(This wound up being mostly about the meta-level ritual/holiday stuff, but I’m posting it in this thread rather than the other one because I wanted to say something about the application of that meta-level stuff to this particular situation, rather than about how to build rationalist ritual/holidays in full generality. I’m basically in favor of the things being suggested in the other thread; my only serious worry is that nobody will actually do them, given that many of them have been suggested before.)
This strikes me as a purely semantic question regarding what goals are consistent with an agent qualifying as “friendly”.
Correction: The annual Petrov Day celebration in Boston has never used the button.
I’ve talked to some people who locked down pretty hard pretty early; I’m not confident in my understanding but this is what I currently believe.
I think characterizing the initial response as over-the-top, as opposed to sensible in the face of uncertainty, is somewhat the product of hindsight bias. In the early days of the pandemic, nobody knew how bad it was going to be. It was not implausible that the official case fatality rate for healthy young people was a massive underestimate.
I don’t think our community is “hyper-altruistic” in the Strangers Drowning sense, but we do put a lot of emphasis on being the kinds of people who are smart enough not to pick up pennies in front of steamrollers, and on not trusting the pronouncements of officials who aren’t incentivized to do sane cost-benefit analyses. And we apply that to altruism as much as anything else. So when a few people started coordinating an organized response, and used a mixture of self-preservation-y and moralize-y language to try to motivate people out of their secure-civilization-induced complacency, the community listened.
This doesn’t explain why not everyone eased up on restrictions once the epistemic Wild West of February and March gave way to the new normal later in the year. That seems more like a genuine failure on our part. I think I prefer Raemon’s explanation from this subthread: the concentrated attention that was required to make the initial response work turned out to be a limited resource, and it had been exhausted. By the time it replenished, there was no longer a Schelling event to coordinate around, and the problems no longer seemed so urgent to the people doing the coordinating.
Docker is not a security boundary.
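To illustrate what I mean (a minimal sketch, assuming a Linux host with Docker installed): containers share the host’s kernel, so any kernel-level vulnerability is reachable from inside a container. Run the snippet below on the host and again inside any container on the same machine, and it prints the same kernel release.

```python
# Minimal sketch: containers share the host kernel. Running this on the
# host and inside any Docker container on the same machine reports the
# identical kernel release, so a kernel exploit crosses the "boundary".
import os

print(os.uname().release)  # same value in and out of the container
```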
Eh, if you read the raw results most are pretty innocuous.
Not at the scale that would be required to power the entire grid that way. At least, not yet. This is of course just one study (h/t Vox via Robert Wiblin), but it provides at least a rough picture of the scale of the problem.
I feel obligated to link to my house’s Petrov Day “Bad/X-risk Future” candle.
Cross-posting from Facebook:
Any policy goal that is obviously part of BLM’s platform, or that you can convince me is, counts. Police reform is the obvious one but I’m open to other possibilities.
It’s fine for “heretics” to make suggestions, at least here on LW where they’re somewhat less likely to attract unwanted attention. Efficacy is the thing I’m interested in, with the understanding that the results are ultimately to be judged according to the BLM moral framework, not the EA/utilitarian one.
Small/limited returns are okay if they’re the best that can be done. Time preference is moderately high (because that matches my assessment of the BLM moral framework) but still limited.
Suggestions from non-Americans are fine.
It is easy to get the impression that the concerns raised in this post are not being seen, or are being seen from inside the framework of people making those same mistakes.
I don’t have a strong opinion about the CFAR case in particular, but in general, I think this impression is pretty much what happens by default in organizations, even when the people running them are smart and competent and well-meaning and want to earn the community’s trust. Transparency is really hard, harder than I think anyone expects until they try to do it, and to do it well you have to allocate a lot of skill points to it, which means allocating them away from the organization’s core competencies. I’ve reached the point where I no longer find even gross failures of this kind surprising.
(I think you already appreciate this but it seemed worth saying explicitly in public anyway.)
The organizer wound up posting their own event: https://www.lesswrong.com/events/ndqcNdvDRkqZSYGj6/ssc-meetups-everywhere-1
You don’t think the GitHub thing is about reducing server load? That would be my guess.