That was 13 years ago, across an ocean of accelerating cultural change, shifts in institutional trust, and people simply maturing. I’m sure you can still find plenty of people who would use mechanisms like that, but I’m pretty sure it’s going to be one of the less important considerations now.
taygetea
Several months late, but that Mission Impossible movie had real-world effects, because Joe Biden watched it. https://www.the-independent.com/news/world/americas/us-politics/joe-biden-ai-mission-impossible-b2440365.html
Something very similar happened to Reagan with a depiction of nuclear war (the 1983 TV film The Day After).
A point I think others missed here is that in the TV example, there’s more data than in the situations the OP talks about, so mscottveach can point to a disparity instead of just having the hatemail. Maybe more situations should involve anonymous polling.
Crossbow is closer to Mars than pen
If you treat war and conflict as directed intentionality, along the lines of the Book of Five Rings, then this reads as something akin to a call to take action in the world rather than spilling lots of words on the internet.
I think people tend to need a decent amount of evidence before they start talking about someone looking potentially abusive. Then the crux is “does this behavior seem normal, or like a predictive red flag?”. In those cases, your lived experience directly influences your perception. Someone’s actions can seem perfectly fine to most people. But if some others experience spooky, hair-raising flashes of their questionably abusive father or a bad ex, that’s evidence. The people who didn’t think anything was weird brush off the others as oversensitive, risk-averse, or paranoid. Then those raising alarms think of everyone else as callous, imperceptive, or malicious. It’s not just a matter of people failing to alieve the correct base rates. Those people certainly exist, though they’re much more plentiful on Tumblr than in person or on LW. It’s very non-obvious whether a strong reaction is correct.
Neither side can truly accept the other’s arguments. It’s a bad situation when both sides consider the other’s reasoning compromised beyond repair; that breeds politics and accusations of bad faith all around. But there is a fact of the matter, and the truth is genuinely unclear. Anyone thinking at enough of a distance from the issue should have honest uncertainty. I suspect you’re particularly prone to refusing to let the conflicting experiences of others reach your deep internal world-models, and to strongly underestimating the validity and reliability of that type of evidence. That would cause what you say to be parsed as bad faith, which other people then respond to in kind. That would create a positive feedback loop where your prior shifts even further away from them having useful things to say. Then you’d end up a frog boiled in a pot of drama nobody else is experiencing. I’m not sure this is what’s happening, but it looks plausible.
This post moves me maybe 50% of the way from my previous position to thinking this is a good idea.
My largest qualm about this is well-represented by a pattern you seem to show, which starts with saying “Taking care of yourself always comes first, respect yourself”, then getting people to actually act on that in simple, low-risk, low-involvement contexts, and assuming that means they’ll actually be able to do it when it matters. People can show all the signs of accepting a constructed social norm when that norm is introduced, without that meaningfully implying that they’ll use it when push comes to shove. Think about how people act when actual conflicts, with large fight/flight/freeze responses, interact with self-care norms. I suspect some typical-minding here, as my model of you is better at this than most people are. I think it depends on what “running on spite” cashes out to. This is kind of a known skull, but I think the proposed solution of check-ins is probably insufficient.
My other big concern is what comments like your reply to Peter here imply about your models and implicit relationship to the project. In this comment, you say you’ll revise something, but I pretty strongly anticipate you still wanting people to do the thing the original wording implied. This seems to defuse criticism in dangerous ways, by giving other people the impression that you’re updating not just the charter, but your aesthetics. Frankly, you don’t seem at all likely to revise your aesthetics. And those, ultimately, determine the true rules.
To summarize the nature of my issues here in a few words: aesthetic intuitions have huge amounts of inertia and can’t be treated like normal policy positions, and people’s self-care abilities (and stress-noticing abilities) cannot be trusted in high-stress environments, even under light to moderate testing.
-Olivia
Would you expect to be able to achieve that—maybe eventually—within the world described?
Definitely. I expect the mindspace part to actually be pretty simple. We can do it in uncontrolled ways right now with dreams and drugs. I guess I kind of meant something like those, only internally consistent and persistent and comprehensible. The part about caring about base reality is the kind of vague, weak preference that I’d probably be willing to temporarily trade away. Toss me somewhere in the physical universe and lock away the memory that someone’s keeping an eye on me. That preference may be more load-bearing than I currently understand though, and there may be more preferences like it. I’m sure the Powers could figure it out though.
It’s partially that, and partially indicative of the prudence in the approach.
Perfectly understandable. I’d hope for exploration of the outer reaches of mindspace in a longer-form version, though.
This was great. I appreciate that it exists, and I want more stories like it to exist.
As a model for what I’d actually want myself, the world felt kind of unsatisfying, though the bar I’m holding it to is exceptionally high—total coverage of my utility-satisfaction-fun-variety function. I think I care about doing things in base reality without help or subconscious knowledge of safety. Also, I see a clinging to human mindspace even when unnecessary. Mainly an adherence to certain basic metaphors of living in a physical reality. Things like space and direction and talking and sound and light and places. It seems kind of quaintly skeuomorphic. I realize that it’s hard to write outside those metaphors though.
This seems very related to Brienne’s recent article.
For context: calling her out specifically is extremely rare, people try to be very diplomatic, and there is definitely a major communication failure that Elo is trying to address.
Replied above. There’s a strong chilling effect on bringing up that you don’t want children at events.
It was not an exaggeration.
From what I’ve seen, it’s not rare at all. I count… myself and at least 7 other people who’ve expressed the sentiment in private, across both this year and last year (it happened last year too). It is, however, something that is very difficult for people to speak up about. I think what’s going on is that different people care about different portions of the solstice (community, message, aesthetics, etc.) to surprisingly differing degrees, may have sensory sensitivities or difficulty with multiple audio input streams, and may or may not find children positive to be around in principle. I think this community has far more people for whom noisy children destroy the experience than the base rate in other communities.
From what I’ve observed, the degree to which children ruin events for certain people is almost completely lost on many others. It’s difficult to speak up largely because of sentiments like yours, which make it feel like people will think I’m going against the idea of the community. For me (and I don’t think I’m exceptionally sensitive), it removes between a third and half of the value of going to the event.
Ah, I spoke imprecisely. I meant what you said, as opposed to things of the form “there’s something in the water”.
I think you have the causality flipped around. Jonah is suggesting that something about Berkeley contributes to the prevalence of low conscientiousness among rationalists.
Nicotine use and smoking are not at all the same thing. Did you read the link?
To get a better idea of your model of what you expect the new focus to do, here’s a hypothetical. Say we have a rationality-qua-rationality CFAR (CFAR-1) and an AI-safety CFAR (CFAR-2). Each starts with the same team, they work independently of each other, and they can’t share work. Two years later, we ask each to write a curriculum for the other organization, to the best of their abilities. This is along the lines of having them do an Ideological Turing Test on each other. How well do the curricula match? Additionally, is the newly written version better in either case? Is CFAR-1’s CFAR-2 curriculum better than CFAR-2’s own CFAR-2 curriculum?
I’m treating curriculum quality as a proxy for research progress, and somewhat ignoring things like funding and operations quality. The question is only meant to address worries of research slowdowns.
I logged in just to downvote this.
I could very well be in the grip of the same problem (and I’d think the same if I were), but it looks like CFAR’s methods are antifragile to this sort of failure, especially considering the metaethical generality and the well-executed distancing from LW in CFAR’s content.
And while I’m here, I also curate something like this. Ben Krasnow is only the best entry point into a wider world. This list was my best recent attempt; it was particularly aimed at getting programmers into physical engineering topics, trying to remove learned helplessness around them and make the subject feel like something it’s possible to engage with. https://gist.github.com/taygetea/1fcc9817618b1008a812e6f2c58ca987