Really? What if the thing you protect is “all sentient beings,” and that happens to be the same thing that the person who introduced the concept to you, or some celebrity, protects?
You, personally, probably don’t care about all sentient beings. You probably care about other things. It takes a very rare, very special person to truly care about “all sentient beings,” and I know of 0 that exist.
I find it very convenient that most of Less Wrong has the same “thing-to-protect” as EY/SingInst, for the following reasons:
Safe strong AI is something that can only be worked on by very few people, leaving most of LW free to do mostly what they were doing before they adopted that thing-to-protect.
Taking the same thing-to-protect as the person they learned the concept from prevents them from having to think critically about their own wants, needs, and desires as they relate to their actual life. (This is deceptively hard—most people do not know what they want, and are very willing to substitute nice-sounding things for what they actually want.)
Taken in concert with this quote from the original article:
> Similarly, in Western real life, unhappy people are told that they need a “purpose in life”, so they should pick out an altruistic cause that goes well with their personality, like picking out nice living-room drapes, and this will brighten up their days by adding some color, like nice living-room drapes. You should be careful not to pick something too expensive, though.
...it seems obvious to me that most people on LW are brutally abusing the concept of having a thing-to-protect, and thus have no real test for their rationality, making the entire community an exercise in doing ever-more-elaborate performance forms rather than a sparring ground.
> You, personally, probably don’t care about all sentient beings. You probably care about other things. It takes a very rare, very special person to truly care about “all sentient beings,” and I know of 0 that exist.
I care about other things, yes, but I do care quite a bit about all sentient beings as well (though not really on the level of “something to protect”, I’ll admit). And I cared about them before I had even heard of Eliezer Yudkowsky. In fact, when I first encountered EY’s writing, I figured he did not care about all sentient beings but rather about all sapient beings, and was misusing the word “sentient” the way science fiction usually does, rather than holding some weird theory of consciousness that I haven’t heard any other respectable person hold, that the majority of neuroscientists disagree with, and that, unlike tons of his other contrarian positions, he doesn’t argue for publicly. (I think there may have been one Facebook post where he made an argument for it, but I can’t find it now.)
Something I neglected to mention about the phrase “all sentient beings” is that I care less about “bad” sentient beings (those who deliberately do bad things) than about “good” ones. But even for that classic example of evil, Adolf Hitler, if he were alive, I’d rather he be somehow reformed than killed.
> I find it very convenient that most of Less Wrong has the same “thing-to-protect” as EY/SingInst, for the following reasons:
> Safe strong AI is something that can only be worked on by very few people, leaving most of LW free to do mostly what they were doing before they adopted that thing-to-protect.
I may not be able to do FAI research, but I can do what I’m actually doing, which is donating a significant fraction of my income to people who can. (Slightly more than 10% of my adjusted gross income last tax year, and I’m still a student, so, as they say, “this isn’t even my final form.”)
> Taking the same thing-to-protect as the person they learned the concept from prevents them from having to think critically about their own wants, needs, and desires as they relate to their actual life. (This is deceptively hard—most people do not know what they want, and are very willing to substitute nice-sounding things for what they actually want.)
What I’ve really taken from the person who taught me the concept of a thing-to-protect is a means-to-protect. If I hadn’t been convinced that FAI was a good plan for achieving my values, I would be pursuing lesser plans to achieve them. I almost started earning to give to charities that spread vegetarianism/veganism, rather than to MIRI. And I have thought pretty hard about whether this is a good means-to-protect.
Also, though I may not be “thing-to-protect”-level altruistic yet, I’m working on it. I’m more altruistic than I was a few years ago.
This isn’t even my final form.
> ...it seems obvious to me that most people on LW are brutally abusing the concept of having a thing-to-protect, and thus have no real test for their rationality, making the entire community an exercise in doing ever-more-elaborate performance forms rather than a sparring ground.
> I may not be able to do FAI research, but I can do what I’m actually doing, which is donating a significant fraction of my income to people who can.
But why do you care about that? It’s grossly improbable that, out of the vast space of possible things-to-protect, you were disposed to care about that particular thing before hearing of the concept. So you’re probably just shopping for a cause in exactly the way EY advises against.
To put it another way… for the vast majority of humans, their real thing-to-protect is probably their children, or their lover, or their closest friend. The fact that this is overwhelmingly underrepresented on LW indicates something funny is going on.
Examples?
I’m not going to read all of that. Responding to a series of increasingly long replies is a negative-sum game.
I’ll respond to a few choice parts though:
Most of this thread: http://lesswrong.com/r/discussion/lw/jyl/two_arguments_for_not_thinking_about_ethics_too/
> But why do you care about that? It’s grossly improbable that, out of the vast space of possible things-to-protect, you were disposed to care about that particular thing before hearing of the concept. So you’re probably just shopping for a cause in exactly the way EY advises against.
> To put it another way… for the vast majority of humans, their real thing-to-protect is probably their children, or their lover, or their closest friend. The fact that this is overwhelmingly underrepresented on LW indicates something funny is going on.
The downside to not reading what I write is that when you write your own long reply, it’s an argument against a misunderstood version of my position.
I am done with you. Hasta nunca.