Today, I was using someone else’s computer and typed “lesswrong” into the search/address bar. Apparently the next most popular search is “lesswrong cult”. I started shrieking with laughter, drawing a concerned look from the computer’s owner, which doesn’t help our image much.
Meetup: LessWrong Australia online hangout
Evan—I am also involved in effective altruism, and am not a utilitarian. I am a consequentialist and often agree with the utilitarians in mundane situations, though.
drethelin—What would be an example of a better alternative?
Proponents of both have the same attitude of “this is a thing that people occasionally pay lip service to, that we’re going to follow to a more logical conclusion and actually act on”.
Is your rule about distances actually a foundational part of your ethics, or is it a heuristic that comes from your not having much to do with faraway people? I’m assuming that you take it somewhat figuratively, e.g. if you have family in another country you’re still invested in what happens to them.
Do you care whether the unknown people are suffering more? If donating $X does more good than donating Y hours of your time, does that concern you?
If everyone did that, there would be a non-negligible chance of the human race dying out before bringing about a Singularity. I care about the continued existence of a reasonably nice society with nebulous traits that I value, so I consider that a bad outcome. But I do worry about whether it’s right to have children who may well possess my far-higher-than-average (or simply higher than most people are willing to admit?) aversion to death.
(If under reflection, someone would prefer not to become immortal if they had the chance, then their preference is by far the most important consideration. So if I knew my future kids wouldn’t be too fazed by their own future deaths, I’d be fine with bringing them into the world.)
Data point: Assuming there are any gendered pronouns in the examples, I find it weirder when the same one is used consistently for the entire article.
Has anyone gotten their parents into LessWrong yet? (High confidence that some have, but I haven’t actually observed it.)
This reminds me of a CBT technique for reducing anxiety: when you’re worried about what will happen in some situation, make a prediction, and then test it.
In-group fuzzies acquired, for science!
I’ve also used the “think of yourself as multiple agents” trick at least since my first read of HPMOR, and noticed some parallels. In stressful situations it takes the form of rational!Calien telling me what to do, and I identify with her and know she’s probably right so I go along with it. Although if I’m under too much pressure I end up paralysed as Brienne describes, and there may be hidden negative consequences as usual.
Also two redundant sentences:

“I have a few ideas so far. The aim of these techniques is to limit the influence motivators have on our selection of altruistic projects, even if we allow or welcome them once we’re onto implementing our plans.”

“The aim of these techniques is to limit the influence of motivators have when we are deciding which actions to take, even if we allow or welcome then once we’re onto implementing our plans.”
Hi, I’m another former lurker. I will be there!
Hi LW. I’m a longtime lurker and a first-year student at ANU, studying physics and mathematics. I arrived at Less Wrong three years ago through what seems to be one of the more common routes: being a nerd (math, science, SF, reputation as weird, etc.), having fellow nerds (from a tiny US-based forum) recommend HPMOR, and following EY’s link to Less Wrong.
You’d have to want to signal very strongly to overcome the inconvenience of doing the paperwork and forking over cold hard cash. Self-signalling seems to be a plausible motivation, but I’m not sure how much benefit you’d get from being able to tell other people about it. Not to mention the opposite pressure most people face: having to convince their close family members to respect their wishes.