What’s the minimum set of powers (besides ability to kick a user off the site) that would make being a Moderator non-frustrating? One-off feature requests as part of a “restart LW” focus seem easier than trying to guarantee tech support responsiveness.
“Strong LW diaspora writers” is a small enough group that it should be straightforward to ask them what they think about all of this.
Yes. This meetup is at the citadel.
My impression is that the OP says that history is valuable and deep without needing to go back as far as the big bang—that there’s a lot of insight in connecting the threads of different regional histories in order to gain an understanding of how human society works, without needing to go back even further.
The second way, and the one most often already implemented, is to jump outside the system and change the game to a non-doomed one. If people can’t share the commons without defecting, why not portion it up into private property? Or institute government regulations? Or iterate the game to favor tit-for-tat strategies? Each of these changes has costs, but if the wage of the current game is ‘doom,’ each player has an incentive to change the game.
This is cooperation. The hard part is jumping out and getting the other person to change games with you, not whether or not better games exist.
Moloch has discovered reciprocal altruism, since iterated prisoner’s dilemmas are a pretty common feature of the environment; but because Moloch creates adaptation-executors rather than utility maximizers, we fail to cooperate across social, spatial, and temporal distance, even if the payoff matrix stays the same.
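To make the “iterating the game favors reciprocators” point concrete, here’s a minimal sketch in Python. The payoff numbers (5 for exploiting, 3 for mutual cooperation, 1 for mutual defection, 0 for being exploited) are just the standard textbook values, assumed here for illustration:

```python
# A minimal sketch of why iterating a prisoner's dilemma favors reciprocators.
# Payoff values are the standard textbook ones (assumed): T=5, R=3, P=1, S=0.

PAYOFF = {  # (my_move, their_move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def always_defect(my_history, their_history):
    return 'D'

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not their_history else their_history[-1]

def play(strategy_a, strategy_b, rounds=200):
    """Play an iterated prisoner's dilemma and return each player's average payoff per round."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a / rounds, score_b / rounds

if __name__ == '__main__':
    print('TFT  vs TFT :', play(tit_for_tat, tit_for_tat))      # (3.0, 3.0)
    print('AllD vs AllD:', play(always_defect, always_defect))  # (1.0, 1.0)
    print('AllD vs TFT :', play(always_defect, tit_for_tat))    # (~1.02, ~1.0)
```

In a one-shot game, defecting dominates; but once the game is iterated against a reciprocator, defection gains almost nothing over mutual defection, while mutual reciprocation earns three times as much per round.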
Even if you have an incentive to switch, you need to notice the incentive before it can get you to change your mind. Since many switches require all the players to cooperate and switch at the same time, it’s unlikely that groups will accidentally start playing the better game.
Convincing people that the other game is indeed better is hard when evaluating incentives is difficult. Add too much complexity and it’s easy for the other person to suspect that you’re hiding something. Getting past this requires trust, in a context where we may be correct to distrust people. For example, if only lawyers know enough law to write contracts, lawyers have an incentive to add loopholes that only lawyers can find, or at least to make contracts complicated enough that only lawyers can understand them, so that you have to keep hiring lawyers to use your contracts. And in fact contracts are generally complicated, full of loopholes, and basically require lawyers to deal with.
Also, most people don’t know about Nash equilibria, economics, game theory, etc., and it would be nice to be able to get things done in a world with sub-utopian levels of understanding of incentives. And trying to explain game theory to people as a substep of getting them to switch to another game runs into the same kind of justified mistrust as the lawyer example: if they don’t know game theory, you’re telling them that game theory says you’re right, evaluating arguments is costly and noisy, and they don’t trust you at the start of the interaction, then it’s reasonable for them to distrust you even after the explanation, and not switch games.
I tend to think of downvoting as a mechanism to signal and filter low-quality content rather than as a mechanism to ‘spend karma’ on some goal or another. It seems that mass downvoting doesn’t really fit the goal of filtering content—it just lets you know that someone is either trolling LW in general, or just really doesn’t like someone in a way that they aren’t articulating in a PM or response to a comment/article.
That just means that the sanity waterline isn’t high enough that casinos have no customers—it could be the case that there used to be lots of people who went to casinos, and the waterline has been rising, and now there are fewer people who do.
I have the same, though it seems to be stronger when the finger is right in front of my nose. It always stops if the finger touches me.
Hobbes uses a similar argument in Leviathan—people are inclined towards not starting fights unless threatened, but if people feel threatened they will start fights. But people disagree about what is and isn’t threatening, and so (Hobbes argues) there needs to be a fixed set of definitions that all of society uses in order to avoid conflict.
See the point about why it’s weird to think that new affluent populations will work more on x-risk if current affluent populations don’t do so at a particularly high rate.
Also, it’s easier to move specific people to a country than it is to raise the standard of living of entire countries. If you’re doing raising-living-standards as an x-risk strategy, are you sure you shouldn’t be spending money on locating people interested in x-risk instead?
My guess is that Eli is referring to the fact that the EA community seems to largely donate where GiveWell says to donate, and that a lot of the discourse is centered around trying to figure out all of the effects of a particular intervention, weigh them against all other factors, and then come up with a plan of what to do. That plan ends up incredibly sensitive to your being right about the prioritization, the facts of the situation, etc., in a way that will cause you to predictably fail to do as well as you could, due to factors like a lack of on-the-ground feedback suggesting other important areas, misunderstanding people’s values, errors in reasoning, and a lack of diversity in what gets attempted, so that if one part fails nothing gets accomplished.
I tend to think that global health is relatively non-controversial as a broad goal (nobody wants malaria! like, actually nobody) that doesn’t suffer from the “we’re figuring out what other people value” problem as much as other things, but I also think that that’s almost certainly not the most important thing for people to be dealing with now to the exclusion of all else, and lots of people in the EA community seem to hold similar views.
I also think that GiveWell is much better at handling that type of issue than people in the EA community are, but the community (at least the Facebook group) is somewhat slow to catch up.
It seems that “donate to a guide dog charity” and “buy me a guide dog” are pretty different w/r/t the extent to which it’s motivated cognition. EAs are still allowed to do expensive things for themselves, or even to ask for support in doing so.
It seems easier to evaluate “is trying to be relevant” than “has XYZ important long-term consequence”. For instance, investing in asteroid detection may not be the most important long-term thing, but it’s at least plausibly related to x-risk (and would be confusing for it to be actively harmful), whereas third-world health has confusing long-term repercussions, but is definitely not directly related to x-risk.
Even if third world health is important to x-risk through secondary effects, it still seems that any effect on x-risk it has will necessarily be mediated through some object-level x-risk intervention. It doesn’t matter what started the chain of events that leads to decreased asteroid risk, but it has to go through some relatively small family of interventions that deal with it on an object level.
Insofar as current society isn’t involved in object-level x-risk interventions, it seems weird to think that bringing third-world living standards closer to our own will lead to more involvement in x-risk intervention without object-level x-risk interventions becoming more widely available.
(Not that I care particularly much about asteroids, but it’s a particularly easy example to think about.)
Social feedback is an incentive, and the bigger the community gets the more social feedback is possible.
Insofar as Utilitarianism is weird, negative social feedback is a major reason to avoid acting on it, and so early EAs must have been very strongly motivated to implement utilitarianism in order to overcome it. As the community gets bigger, it is less weird and there is more positive support, and so it’s less of a social feedback hit.
This is partially good, because it makes it easier to “get into” trying to implement utilitarianism, but it’s also bad because it means that newer EAs need to care about utilitarianism relatively less.
It seems that saying that incentives don’t matter as long as you remove social-approval-seeking ignores the question of why the remaining incentives would actually push people towards actually trying.
It’s also unclear what’s left of the incentives holding the community together after you remove the social incentives. Yes, talking to each other probably does make it easier to implement utilitarian goals, but at the same time it seems that the accomplishment of utilitarian goals is not in itself a sufficiently powerful incentive, otherwise there wouldn’t be effectiveness problems to begin with. If it were, then EAs would just be incentivized to effectively pursue utilitarian goals.
My guess is just that the original reason was that there were societal hierarchies pretty much everywhere in the past, and rulers wanted some way for nobles/high-status people to join the army while remaining obviously distinguished from the general population, and to make it impossible for them to be demoted far enough to end up on the same level. Armies without the officer/non-officer distinction just didn’t get any buy-in from the ruling class, and so they wouldn’t exist.
I think there’s also a pretty large difference in training—becoming an officer isn’t just about skills in war, but also involves socialization to the officer culture, through the different War Colleges and whatnot.
You would want your noticing that something is bad to indicate, in some way, how to make the thing better. You want to know what in particular is bad and can be fixed, rather than the less informative “everything.” If your classifier triggers on everything, it tells you less on average about any given thing.
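One way to make the “triggers on everything” point precise: a detector that fires on nearly everything carries almost no information about whether any particular thing is actually bad. Here’s a rough sketch in Python, with made-up probabilities purely for illustration, computing the mutual information between the underlying “is it bad?” state and the detector’s output:

```python
import math

def mutual_information(p_bad, p_fire_given_bad, p_fire_given_good):
    """Mutual information (bits) between a binary 'bad?' state and a detector that fires or not."""
    p_good = 1 - p_bad
    # Joint distribution over (state, detector output).
    joint = {
        ('bad', 'fire'):  p_bad * p_fire_given_bad,
        ('bad', 'quiet'): p_bad * (1 - p_fire_given_bad),
        ('good', 'fire'):  p_good * p_fire_given_good,
        ('good', 'quiet'): p_good * (1 - p_fire_given_good),
    }
    p_fire = joint[('bad', 'fire')] + joint[('good', 'fire')]
    p_out = {'fire': p_fire, 'quiet': 1 - p_fire}
    p_state = {'bad': p_bad, 'good': p_good}
    info = 0.0
    for (state, out), p in joint.items():
        if p > 0:
            info += p * math.log2(p / (p_state[state] * p_out[out]))
    return info

# Made-up numbers: 10% of things are actually bad.
print(mutual_information(0.1, p_fire_given_bad=0.9, p_fire_given_good=0.1))   # selective: ~0.21 bits
print(mutual_information(0.1, p_fire_given_bad=1.0, p_fire_given_good=0.99))  # fires on everything: ~0.001 bits
```

The selective detector conveys a meaningful fraction of a bit per observation; the one that fires on almost everything conveys nearly zero, which is the sense in which it tells you less about any given thing.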
My personal experience (going to Harvard, talking to students and admissions counselors) suggests that at least one of the following is true:
Teacher recommendations and the essays that you submit to the colleges are also important in admissions, and they’re the main channel for signaling personal development and the human capital that grades don’t capture.
There are particular known-to-be-good schools that colleges disproportionately admit students from, and for slightly different reasons than they admit students from other schools.
I basically completely ignored signalling while in high school, often prioritized taking more interesting non-AP classes over AP classes, and focused on a couple of extracurricular relationships rather than diversifying and taking on many. My grades and standardized test scores also suffered as a result of my investment in my robotics team.
All I can say is that I don’t understand why intelligence is relevant for whether you care about suffering.
Intelligence is relevant for the extent to which I expect alleviating suffering to have secondary positive effects. Since I expect most of the value of suffering alleviation to come through secondary effects on the far future, I care much more about human suffering than animal suffering.
As far as I can tell, animal suffering and human suffering are comparably important from a utility-function standpoint, but the difference in EV between alleviating human and animal suffering is huge—the difference in potential impact on the future between a suffering human vs a non-suffering human is massive compared to that between a suffering animal and a non-suffering animal.
Basically, it seems like alleviating one human’s suffering has more potential to help the far future than alleviating one animal’s suffering. A human who is currently incapacitated from, say, dealing with x-risk might become helpful, while an animal is still not going to be consequential on that front.
So my opinion winds up being something like “We should help the animals, but not now, or even soon, because other issues are more important and more pressing”.
Political instrumental rationality would be about figuring out and taking the political actions that would make particular goals happen. Most of this turns out to be telling people compelling things that you happen to know and they don’t, and convincing different groups that their interests align (or can align around a particular interest) when it’s not obvious that they do.
Political actions are based on appeals to identity, group membership, group bounding, group interests, individual interests, and different political ideas in order to get people to shift allegiances and take action toward a particular goal.
For any given individual, the relative importance of these factors will vary. For questions of identity and affiliation, people weigh the factors based on which meanings get reinforced and on memory-related stuff (i.e. clear memories of meaningful experiences count, but so does not-particularly-meaningful stuff that happens every day). Whether they actually act depends on various psychological factors, as well as on options simply being available and salient while they have the opportunity to act in a way that reinforces their affiliations/meaning/standing with others in the group/personal interests.
As a result, political instrumental rationality is going to be incredibly contingent on local circumstances—who talks to who, who believes what how strongly, who’s reliable, who controls what, who wants what, who hears about what, etc.
A more object-level example takes place in The Wire, when a pastor is setting up various public service programs in an area where drug dealing has effectively been legalized.
The pastor himself is able to appeal to his community on the basis of religious solidarity in order to get money, and so he can fund some things. For Christian reasons, he cares about public health and about the fate of the now-unemployed would-be drug runners, who are no longer necessary for drug dealing (since drugs are effectively legal there, the gang members don’t bother with the various steps that ensure none of them can be photographed handing someone drugs for money; normally the dealer takes the money and then a runner, typically a child, goes to the stash to hand the buyer the drugs). Further, he knows people from various community/political events in Baltimore.
So far, so good. He controls some resources (money), has a goal (public health, child development), and knows some people.
One of the first people he talks to is a doctor who has been trying to do STD prevention for a while, but hasn’t had the funding or organizational capacity to do much of anything. The pastor points out that a lot of at-risk people are now concentrated in one location, which makes the logistics of getting services to them much simpler. In this case, the pastor simply had information (through his connections) that the doctor didn’t, and got the doctor to cooperate by pointing out an opportunity to do something the doctor had already wanted to do.
He gets the support of the police district chief, the one who decided to selectively enforce the drug laws, by appealing to the chief’s desire to improve the district under his command (the chief was initially trying to shift drug trafficking away from more populated areas and to decrease violence by decreasing competition over territory), and it more or less worked.
That being said, I have more or less no idea what kinds of large-scale political action ought to be possible/is desirable.
I totally have the intuition, though, that step one of any plan is to become personally acquainted with people who have some sort of influence over the areas that you’re interested in, or to build influence by getting people who control what you’re interested in to pay more attention to you. It’s almost the case that if you can’t name names, and can’t point at the groups of people involved in the action, then you can’t do anything particularly useful politically.
I think that “crux” is doing a lot of work in that it forces the conversation to be about something more specific than the main topic, and because it makes it harder to move the goalposts partway through the conversation. If you’re not talking about a crux, then you can write off a consideration as “not really the main thing” after talking about it.