This post doesn’t show up under “NEW”, nor does it show up under “Recent Posts”.
ADDED: Never mind. I forgot I had “disliked” it, and had “do not show an article once I’ve disliked it” set.
(I disliked it because I find it kind of shocking that Ben, who’s very smart, and who I’m pretty sure has read the things that I would refer him to on the subject, would say that the Scary Idea hasn’t been laid out sufficiently. Maybe some people need every detail spelled out for them, but Ben isn’t one of them. Also, he is committing the elementary error of not considering expected value.
ADDED: Now that I’ve read Ben’s entire post, I upvoted rather than downvoted this post. Ben was not committing the error of not considering expected value so much as responding to many SIAI-influenced people who are not considering expected value. And I agree with most of what Ben says. I would add that Eliezer’s plan to construct something that will provably follow some course of action—any course of action—chosen by hairless primates is likely to be worse in the long run than a hard-takeoff AI that kills all humans almost immediately. Explaining what I mean by “worse” is problematic, but no more problematic than explaining why I should care about propagating human values.
I also disagree about what the Scary Idea is—to me, the idea that the AI will choose to keep humans around for all eternity is scarier than the idea that it will not. But that is something Eliezer either disagrees with or has deliberately made obscure.)
to me, the idea that the AI will choose to keep humans around for all eternity is scarier than the idea that it will not. But that is something Eliezer either disagrees with or has deliberately made obscure.
Wouldn’t it make sense to keep some humans around for all eternity—in the history simul-books? That seems reasonable, and not especially scary.
Sure. Tiling the universe largely with humans is the strong scary idea. Locking in human values for the rest of the universe is the weak scary idea. Unless the first doesn’t imply the second, in which case I don’t know which is scarier.
It does now for me. Strange.
Oops. My mistake. It’s a setting I had that I forgot about.
It doesn’t?
It’s off the front page of NEW/Recent Posts, as there have been more than ten other posts since it was posted, but it’s still there.
Nope, it’s not there at all.
Recent Posts
Rationality Quotes: November 2010 by jaimeastorga2000 | 3
Oxford (UK) Rationality & AI Risks Discussion Group by Larks | 3
Harry Potter and the Methods of Rationality discussion thread, part 5 by NihilCredo | 5
South/Eastern Europe Meeting in Ljubljana/Slovenia by Thomas | 7
Hierarchies are inherently morally bankrupt by PhilGoetz | 0
Group selection update by PhilGoetz | 21
What I would like the SIAI to publish by XiXiDu | 23
Berkeley LW Meet-up Saturday November 6 by LucasSloan | 4
Is cryonics evil because it’s cold? by ata | 19
Imagine a world where minds run on physics by cousin_it | 10
Qualia Soup, a rationalist and a skilled You Tube jockey by Raw_Power | 6
Value Deathism by Vladimir_Nesov | 21
Cambridge Meetups Nov 7 and Nov 21 by jimrandomh | 4
Making your explicit reasoning trustworthy by AnnaSalamon | 60
Call for Volunteers: Rationalists with Non-Traditional Skills by Jasen | 20
Self-empathy as a source of “willpower” by Academian | 39
If you don’t know the name of the game, just tell me what I mean to you by Stuart_Armstrong | 7
Luminosity (Twilight fanfic) Part 2 Discussion Thread by JenniferRM | 4
Activation Costs by lionhearted | 24
Dealing with the high quantity of scientific error in medicine by NancyLebovitz | 27
Let’s split the cake, lengthwise, upwise and slantwise by Stuart_Armstrong | 34
Willpower: not a limited resource? by Jess_Riedel | 21
Optimism versus cryonics by lsparrish | 34
The Problem With Trolley Problems by lionhearted | 9
How are critical thinking skills acquired? Five perspectives by matt | 7
October 2010 Southern California Meetup by jimmy | 6
Vipassana Meditation: Developing Meta-Feeling Skills by Luke_Grecki | 18
Mixed strategy Nash equilibrium by Meni_Rosenfeld | 38
Human performance, psychometry, and baseball statistics by Craig_Heldreth | 22
Melbourne Less Wrong Meetup for November by Patrick | 8
Swords and Armor: A Game Theory Thought Experiment by nick012000 | 13
Morality and relativistic vertigo by Academian | 33
The Dark Arts—Preamble by Aurini | 30
Love and Rationality: Less Wrongers on OKCupid by Relsqui | 11
Collecting and hoarding crap, useless information by lionhearted | 15
References & Resources for LessWrong by XiXiDu | 48
Strategies for dealing with emotional nihilism by SarahC | 22
Recommended Reading for Friendly AI Research by Vladimir_Nesov | 17
Notion of Preference in Ambient Control by Vladimir_Nesov | 11
Harry Potter and the Methods of Rationality discussion thread, part 4 by gjm | 2
Rationality quotes: October 2010 by Morendil | 3
Understanding vipassana meditation by Luke_Grecki | 41
Berkeley LW Meet-up Saturday October 9 by LucasSloan | 5
Weird. It’s there for me.
Qualia Soup, a rationalist and a skilled You Tube jockey by Raw_Power | 6
Value Deathism by Vladimir_Nesov | 21
Ben Goertzel: The Singularity Institute’s Scary Idea (and Why I Don’t Buy It) by ciphergoth | 24
Cambridge Meetups Nov 7 and Nov 21 by jimrandomh | 4
Making your explicit reasoning trustworthy by AnnaSalamon | 60