I somewhat agree with this, but I think it's an uncharitable framing of the point, since "virtue signalling" generally implies insincerity. My impression is that the vegans I've spoken with are mostly acting sincerely on their moral premises; those premises just aren't ones I share. If you sincerely believe that a vast atrocity is taking place that society is ignoring, then a strident emotional reaction is understandable.
FiftyTwo
I’ve definitely noticed a shift in the time I’ve been involved with or aware of EA. In the early 2010s it was mostly focused on global poverty and the general idea of evidence-based charity, and veganism was peripheral. Now it seems like a lot of groups are mainly about veganism, and very resistant to people who think otherwise. And since veganism is a minority position, that is going to put off people who would otherwise be interested in EA.
You still run into the alignment problem of ensuring that the upgraded version of you aligns with your values, or some extension of them. If my uploaded transhuman self decides to turn the world into paperclips that’s just as bad as if a non-human AGI does.
Never really got anywhere. It’s long enough ago that I don’t really remember why, but I think I generally found it unengaging. I have periodically tried to teach myself programming through different methods since then, but none have stuck. This probably speaks to the difficulty of learning new skills when you have limited time and energy, and no specific motivation, more than anything else. (I have had similar difficulties with language learning, but got past them thanks to short-term practical benefits and devoting specific time to the task.)
It mixes the personal and professional levels.
Possibly reflective of a wider issue in EA/rationalist spaces where the two are often not very clearly delineated. In that sense EA is more like hobby/fandom communities than professional ones.
Saying that people would be better off taking more risks under a particular model elides the question of why they don’t take those risks to begin with, and how we can change that, if it’s desirable to do so.
The psychological impact of a loss of x is generally higher than that of a corresponding gain. So if I know I will feel worse from losing $10 than I will feel good from gaining $100, then it’s entirely rational under my utility function not to take a 50/50 bet between those two outcomes. Maybe I would be better off overall if I didn’t overweight losses, but utility functions aren’t easily rewritable by humans. The closest you could come is some kind of exposure therapy for losses.
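The arithmetic here can be made concrete with a toy prospect-theory-style value function. This is a minimal illustrative sketch, not anything claimed in the comment: the function names and the loss-aversion weight `lam` are my own hypothetical choices.

```python
def value(x, lam=2.25):
    """Toy loss-averse value function: gains count at face value,
    losses are scaled up by the loss-aversion weight lam."""
    return x if x >= 0 else lam * x

def bet_value(gain, loss, p=0.5, lam=2.25):
    """Expected subjective value of a bet that wins `gain` with
    probability p and loses `loss` otherwise."""
    return p * value(gain, lam) + (1 - p) * value(-loss, lam)

# With a moderate loss-aversion weight, the +$100 / -$10 coin flip
# still looks attractive: 0.5*100 + 0.5*(-2.25*10) = 38.75
print(bet_value(100, 10, lam=2.25))  # 38.75

# But if losing $10 genuinely feels worse than gaining $100 feels good
# (lam > 10), declining the bet maximises subjective value:
# 0.5*100 + 0.5*(-12*10) = -10
print(bet_value(100, 10, lam=12))    # -10.0
```

The point of the sketch is just that whether refusing the bet counts as "irrational" depends entirely on the weight `lam`, which is not a parameter a person can simply edit.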
Also, we have a huge amount of mental architecture devoted to understanding and remembering spatial relationships of objects (for obvious evolutionary reasons). Using that as a metaphor for purely abstract things allows us to take advantage of that mental architecture to make other tasks easier.
A very structured version of this would be something like a memory palace where you assign ideas to specific locations in a place, but I think we are doing the same thing often when we talk about ideas in spatial relationships, and build loose mental models of them as existing in spatial relationship to one another (or at least I do).
I think the core thing here is same-sidedness.
The converse of this is that the maximally charitable approach can be harmful when the interlocutor is fundamentally not on the same side as you in trying to honestly discuss a topic and arrive at truth. I’ve seen people tie themselves in knots trying to apply the principle of charity, when the most parsimonious explanation is that the other side is not engaging in good faith, and shouldn’t be treated as though they were.
It’s taken me a long time to internalise this, because my instinct is to take what people say at face value. But it’s important to remember that sometimes there isn’t anything complex or nuanced going on; people can just lie.
Thanks. This is the kind of content I originally came to LW for a decade ago, but it seems to have become less popular.
You might find The Origins of Political Order interesting. It emphasises how the principal-agent problem is one of the central issues of governance, and how, without strong oversight mechanisms, systems tend to descend into corruption.
Is there any way of reverse engineering from these pictures what existing images were used to generate them? Would be interesting to see how much similarity there is.
So we just need to get two superpowers who currently feel they are in a zero sum competition with each other to stop trying to advance in an area that gives them a potentially infinite advantage? Seems a very classic case of the kind of coordination problems that are difficult to solve, with high rewards for defecting.
We have partially managed to do this for nuclear and biological weapons, but only with a massive oversight infrastructure that doesn’t exist for AI, and by relying on physical evidence and materials controls that have no AI equivalent. It’s not impossible, but it would require a level of concerted international effort similar to what was used for nuclear weapons. That took a long time, so it possibly doesn’t fit with your short timeline.
A more charitable interpretation of the same evidence would be that, as a public health professional, Dr Fauci has a lot of experience with the difficulties of communicating complex messages and the political tradeoffs that are necessary for effective action, and has judged based on that experience what is most effective to say. Do you have data he doesn’t? Or a reason to think his experience in his speciality is inapplicable?
Earth does have the global infrastructure
It does? What do you mean? The only thing I can think of is the UN, and recent events don’t make it very likely they’d engage in coordinated action on anything.
It should not take long, given these pieces and a moderate amount of iteration, to create an agentic system capable of long-term decision-making
That is, to put it mildly, a pretty strong claim, and one I don’t think the rest of your post really justifies. Without that justification, it’s still just listing a theoretical thing to worry about.
If you’re an otherwise healthy young person who has tested positive, what’s the best thing you can do to prevent getting long covid? Seems like there’s research saying that exercising too early can make it worse, but other articles saying good things about exercise, and I’m not sure how to evaluate it.
My greatest legacy
Good programmers who are a pain to work with are much less successful than average programmers who are pleasant to work with, and increasing technical competency has diminishing returns. So I’d focus on doing things that get you more experience of working with people; the business development internship may do that, depending on the details. So can things like working in a bar or restaurant.
Note that this is distinct from the standard advice on developing social skills. Being good at talking to strangers and going to parties is useful, but working well with people in an employment context is different: it’s much more about maintaining working relationships with people you may not especially like than about forming deep connections.
I’d be curious what you think now, after many years to see the effects of these things in practice.
I feel like you’re conflating two different levels: the discourse in wider global society and the discourse within a specific community.
I doubt you’d find anyone here who would disagree that actions by big companies to obscure the truth are bad. But those companies aren’t the ones arguing on these forums or reading this post. Vegans have a significant presence in EA spaces, so they should be contributing to those spaces productively and promoting good epistemic norms. What the lobbying team of Big Meat Co. does has no bearing on that.
Also, in general I’m leery of any argument of the form “the other side does as bad or worse, so it’s okay for us to do so too”, given history.