I think Vernor Vinge at least has made a substantial effort to convince people of the risks ahead. What do you think A Fire Upon the Deep is? Or, here is a more explicit version:
If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well … pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!). Yet physical extinction may not be the scariest possibility. Again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet.… In a Post-Human world there would still be plenty of niches where human equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [16] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others might be very human-like, yet with a one-sidedness, a dedication that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new environment to what we call human now. (I. J. Good had something to say about this, though at this late date the advice may be moot: Good [12] proposed a “Meta-Golden Rule”, which might be paraphrased as “Treat your inferiors as you would be treated by your superiors.” It’s a wonderful, paradoxical idea (and most of my friends don’t believe it) since the game-theoretic payoff is so hard to articulate. Yet if we were able to follow it, in some sense that might say something about the plausibility of such kindness in this universe.)
I have argued above that we cannot prevent the Singularity, that its coming is an inevitable consequence of the humans’ natural competitiveness and the possibilities inherent in technology. And yet … we are the initiators. Even the largest avalanche is triggered by small things. We have the freedom to establish initial conditions, make things happen in ways that are less inimical than others. Of course (as with starting avalanches), it may not be clear what the right guiding nudge really is:
He goes on to talk about intelligence amplification, and then:
Originally, I had hoped that this discussion of IA would yield some clearly safer approaches to the Singularity. (After all, IA allows our participation in a kind of transcendance.) Alas, looking back over these IA proposals, about all I am sure of is that they should be considered, that they may give us more options. But as for safety … well, some of the suggestions are a little scarey on their face. One of my informal reviewers pointed out that IA for individual humans creates a rather sinister elite. We humans have millions of years of evolutionary baggage that makes us regard competition in a deadly light. Much of that deadliness may not be necessary in today’s world, one where losers take on the winners’ tricks and are coopted into the winners’ enterprises. A creature that was built de novo might possibly be a much more benign entity than one with a kernel based on fang and talon. And even the egalitarian view of an Internet that wakes up along with all mankind can be viewed as a nightmare [26].
As I wrote in another comment, Eliezer Yudkowsky hasn’t come up with anything unique. And there is no argument in saying that he’s simply the smartest fellow around, since clearly other people have come up with the same ideas before him. And that was my question: why are they not signaling their support for the SIAI? Or, in case they don’t know about the SIAI, why are they not using all their resources and publicity to try to stop the otherwise inevitable apocalypse?
It looks like there might be arguments against the kind of fearmongering that can be found within this community. So why is nobody inquiring into the reasons for the great silence among those who are aware of a possible singularity but nevertheless keep quiet? Maybe they know something you don’t. Or are you people so sure of your phenomenal intelligence?
David Chalmers has been writing and presenting to philosophers about AI and intelligence explosion since giving his talk at last year’s Singularity Summit. He estimates the probability of human-level AI by 2100 at “somewhat more than one-half,” thinks an intelligence explosion following that quite likely, and considers possible disastrous consequences quite important relative to other major causes today. However, he had not written or publicly spoken about his views, and probably would not have for quite some time had he not been invited to the Singularity Summit.
He reports a stigma around the topic as a result of the combination of science-fiction associations and the early failures of AI, and the need for some impetus to brave that. Within the AI field, there is also a fear that discussion of long-term risks, or of unlikely short-term risks, may provoke hostile reactions against the field, thanks to public ignorance and the affect heuristic. Comparisons are made to genetic engineering of agricultural crops, where public attention seems to be harmful on net in unduly slowing the development of more productive plants.
Thanks. This is more of what I think you call rational evidence, from an outsider. But it doesn’t answer the primary question of my post. How do you people arrive at the estimates you state? Where can I find the details of how you arrived at your conclusions about the likelihood of those events?
If all this was supposed to be mere philosophy, I wouldn’t inquire about it to such an extent. But the SIAI is asking for the better part of your income and resources. There are strong claims being made by Eliezer Yudkowsky and calls for action. Is it reasonable to follow given the current state of evidence?
But the SIAI is asking for the better part of your income and resources.
If you are a hard-core consequentialist altruist who doesn’t balance against other less impartial desires you’ll wind up doing that eventually for something. Peter Singer’s “Famine, Affluence, and Morality” is decades old, and there’s still a lot of suffering to relieve. Not to mention the Nuclear Threat Initiative, or funding research into DNA vaccines, or political lobbying, etc. The question of how much you’re willing to sacrifice in exchange for helping various numbers of people or influencing extinction risks in various ways is separate from data about the various options. No one is forcing you to reduce existential risk (except insofar as tax dollars go to doing so), certainly not to donate.
I’ll have more to say on substance tomorrow, but it’s getting pretty late. My tl;dr take would be that with pretty conservative estimates on total AI risk, combined with the lack of short term motives to address it (the threat of near-term and moderate scale bioterrorism drives research into defenses, not the fear of extinction-level engineered plagues; asteroid defense is more motivated by the threat of civilization or country-wreckers than the less common extinction-level events; nuclear risk reduction was really strong only in the face of the Soviets, and today the focus is still more on nuclear terrorism, proliferation, and small scale wars; climate change benefits from visibly already happening and a social movement built over decades in tandem with the existing environmentalist movement), there are still low-hanging fruit to be plucked. [That parenthetical aside somewhat disrupted the tl;dr billing, oh well...] When we get to the point where a sizable contingent of skilled folk in academia and elsewhere have gotten well into those low-hanging fruit, and key decision-makers in the relevant places are likely to have access to them in the event of surprisingly quick progress, that calculus will change.
It seems obvious why those at the top of charity pyramids support utilitarian ethics—their funding depends on it. The puzzle here is why they find so many suckers to exploit.
One might think that those who were inclined to give away their worldly goods to help the needy would have bred themselves out of the gene pool long ago—but evidently that is not the case.
Perhaps one can invoke the unusual modern environment. Maybe in the ancestral environment, helping others was more beneficial—since the high chance of repeated interactions made reciprocal altruism work better. However, if people donate to help feed starving millions halfway around the world, the underlying maths no longer adds up—so what was previously adaptive behaviour fails in modern situations: maladaptive behaviour as a result of an unfamiliar environment.
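To make the “underlying maths” concrete, here is a minimal sketch of a reciprocal-altruism payoff calculation; the costs, benefits, and probabilities are illustrative assumptions, not figures from this comment.

```python
def expected_payoff(cost, returned_benefit, p_reciprocation):
    """Net expected payoff of helping someone once: pay `cost` now, and with
    probability `p_reciprocation` the favour is later returned, worth
    `returned_benefit` to you."""
    return -cost + p_reciprocation * returned_benefit

# Ancestral-style small band: you will almost certainly meet the recipient again.
print(expected_payoff(cost=1.0, returned_benefit=3.0, p_reciprocation=0.9))    # +1.7

# Donating to anonymous strangers halfway around the world: reciprocation ~ never.
print(expected_payoff(cost=1.0, returned_benefit=3.0, p_reciprocation=0.001))  # ~-1.0
```

On these made-up numbers the same helping impulse flips from profitable to costly once repeated interaction disappears, which is the mismatch the comment is pointing at.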
One might expect good parents to work to keep their kids away from utilitarian cults—which feed off the material resources of their members—on the grounds that such organisations may systematically lead to a lack of grandchildren. “Interventions” may be required to extricate the entangled offspring from the feeding tentacles of these parasitic entities that exploit people’s cognitive biases for their own ends.
It seems obvious why those at the top of charity pyramids support utilitarian ethics—their funding depends on it. The puzzle here is why they find so many suckers to exploit.
This reads like an attack on utilitarian ethics, but there’s an extra inferential step in the middle which makes it compatible with utilitarian ethics being correct. Are you claiming that utilitarian ethics are wrong? Are you claiming that most charities are actually fraudulent and don’t help people?
“charity pyramid” … “good parents work to keep their kids away” … “utilitarian cults” … “feeding tentacles of these parasitic entities that exploit … for their own ends”
Wow, my propagandometer is pegged. Why did you choose this language? Isn’t exploiting people for their own ends incompatible with being utilitarian? Do you have any examples of charities structured like pyramid schemes, or as cults?
“Are you claiming that utilitarian ethics are wrong?”
“Right” and “wrong” are usually concepts that are applied with respect to an ethical system. Which ethical system am I expected to assume when trying to make sense of this question?
“Are you claiming that most charities are actually fraudulent and don’t help people?”
No—I was not talking about that.
“Isn’t exploiting people for their own ends incompatible with being utilitarian?”
If a charity’s goals include “famine relief”, then considerable means would be justified by that—within a utilitarian framework.
“Charity pyramids” was a loosely-chosen term. There may be some pyramid structure—but the image I wanted to convey was of a cause with leader(s) preaching the virtues of utilitarianism—being supported in their role by a “base” of “suckers”—individuals who are being duped into giving many of their resources to the cause.
Superficially, the situation represents a bit of a Darwinian puzzle: Are the “suckers” being manipulated? Have they been hypnotised? Do they benefit in some way by the affiliation? Are they fooled into treating the cause as part of their extended family? Are they simply broken? Do they aspire to displace the leader? Have their brains been hijacked by pathogenic memes? What is going on?
It seems obvious why those at the top of charity pyramids support utilitarian ethics—their funding depends on it. The puzzle here is why they find so many suckers to exploit.
It helps that just pointing out observations like this is almost universally punished. Something to do with people on the top of pyramids having more power...
For my part I would upvote your comment another few times if I could but I note that someone else has downvoted you.
Another aspect of it is that people try and emulate charismatic leaders—in the hope of reproducing their success. If the guru says to give everything to the guru then the followers sometimes comply—because it is evident that the guru has things sussed—and is someone to be copied and emulated. Sometimes this strategy works—and it is possible for a cooperative follower to rise to power within the cult. However, if the gurus’ success is largely down to their skill at feeding off their followers, the gurus are often heavily outnumbered.
http://www.overcomingbias.com/2007/02/what_evidence_i.html
Absence of evidence is not evidence of absence?
There’s simply no good reason to argue against cryonics. It gives you a chance in the worst-case scenario, and that chance is considerably better than rotting six feet under.
Have you thought about the possibility that most experts are simply reluctant to come up with detailed critiques of specific issues posed by the SIAI, EY and LW? Maybe they consider it not worth the effort, as the data that is already available does not justify the given claims in the first place.
Anyway, I think I might write some experts and all of the people mentioned in my post, if I’m not too lazy.
I’ve already got one reply, from someone I’m not going to name right now. But let’s first consider Yudkowsky’s attitude when addressing other people:
You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong...
Now the first of those people I contacted about it:
There are certainly many reasons to doubt the belief system of a cult based around the haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI. As you point out none of the real AI experts are crying chicken little, and only a handful of AI researchers, cognitive scientists or philosophers take the FAI idea seriously.
Read Moral Machines for current state of the art thinking on how to build a moral machine mind.
SIAI dogma makes sense if you ignore the uncertainties at every step of their logic. It’s like assigning absolute numbers to every variable in the Drake equation and determining that aliens must be all around us in the solar system, and starting a church on the idea that we are being observed by spaceships hidden on the dark side of the moon. In other words, religious thinking wrapped up to look like rationality.
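For what it’s worth, the Drake-equation analogy in that reply can be made concrete. The sketch below uses entirely made-up factor values and ranges (none of them come from the comment or from SIAI); the point is only that multiplying optimistic point estimates yields one confident-looking answer, while propagating the uncertainty in each factor does not.

```python
import random

# Made-up point estimates for seven Drake-style factors.
point_estimates = [10, 0.5, 2, 0.1, 0.1, 0.1, 1000]
product = 1.0
for f in point_estimates:
    product *= f
print("Point-estimate answer:", product)  # one confident number

# The same factors treated as log-uniform ranges (exponents of 10), Monte Carlo style.
exponent_ranges = [(0, 1), (-2, 0), (0, 0.5), (-3, 0), (-3, 0), (-3, 0), (2, 4)]

def sample_once():
    result = 1.0
    for lo, hi in exponent_ranges:
        result *= 10 ** random.uniform(lo, hi)
    return result

samples = sorted(sample_once() for _ in range(10_000))
print("Median of samples:", samples[len(samples) // 2])
print("Fraction of samples below 1:", sum(s < 1 for s in samples) / len(samples))
```

Whether or not the criticism is fair to SIAI, this is the shape of the complaint: the conclusion inherits the full uncertainty of every step, and reporting only a point estimate hides that.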
ETA
I was told the person I quoted above is stating full ad hominem falsehoods regarding Eliezer. I think it is appropriate to edit the message to show that indeed the person might not have been honest, or clueful. Otherwise I’ll unnecessarily end up perpetuating possible ad hominem attacks.
I feel some of the force of this...I do think we should take the opinions of other experts seriously, even if their arguments don’t seem good.
I sort of think that many of these criticisms of SIAI turn on not being Bayesian enough. Lots of people only want to act on things they know, where knowing requires really solid evidence, the kind of evidence you get through conventional experimental science, with low p-values and all. It is just impossible to have that kind of robust confidence about the far future. So you’re going to have people just more or less ignore speculative issues about the far future, even if those issues are by far the most important. Once you adopt a Bayesian perspective, and you’re just interested in maximizing expected utility, the complaint that we don’t have a lot of evidence about what will be best for the future, or the complaint that we just don’t really know whether SIAI’s mission and methodology are going to work seems to lose a lot of force.
I sort of think that many of these criticisms of SIAI turn on not being Bayesian enough. Lots of people only want to act on things they know, where knowing requires really solid evidence, the kind of evidence you get through conventional experimental science, with low p-values and all. It is just impossible to have that kind of robust confidence about the far future. So you’re going to have people just more or less ignore speculative issues about the far future, even if those issues are by far the most important. Once you adopt a Bayesian perspective, and you’re just interested in maximizing expected utility, the complaint that we don’t have a lot of evidence about what will be best for the future, or the complaint that we just don’t really know whether SIAI’s mission and methodology are going to work seems to lose a lot of force.
I have some sympathy for your remark.
The real question is just whether SIAI has greatly overestimated at least one of the relevant probabilities. I have high confidence that the SIAI staff have greatly overestimated their ability to have a systematically positive impact on existential risk reduction.
Have you read Nick Bostrom’s paper, Astronomical Waste? You don’t have to be able to affect the probabilities by very much for existential risk to be the thing to worry about, especially if you have a decent dose of credence in utilitarianism.
Is there a decent chance, in your view, of decreasing x-risk by 10^-18 if you put all of your resources into it? That could be enough. (I agree that this kind of argument is worrisome; maybe expected utility theory or utilitarianism breaks down with these huge numbers and tiny probabilities, but it is worth thinking about.)
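A rough illustration of the arithmetic behind that 10^-18 figure, using a placeholder of 10^52 potential future lives in the spirit of Bostrom’s paper (that number is an assumption for illustration, not one stated in this thread):

```python
# Expected-value comparison: a tiny reduction in extinction probability versus a
# certain but small present-day benefit. All numbers are illustrative.
potential_future_lives = 1e52        # placeholder, Bostrom-style astronomical estimate
risk_reduction = 1e-18               # the "10^-18" from the comment above
expected_future_lives_saved = potential_future_lives * risk_reduction
print(f"Expected future lives saved: {expected_future_lives_saved:.1e}")  # 1.0e+34

certain_present_lives_saved = 1e3    # e.g. a conventional charity intervention
print(f"Certain present lives saved: {certain_present_lives_saved:.1e}")  # 1.0e+03
```

On a straight expected-value comparison the tiny probability shift dominates by some thirty orders of magnitude, which is exactly why the parenthetical worry about expected utility theory breaking down at these scales matters.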
If you’re sold on x-risk, are there other candidate efforts that might offer higher expected x-risk reduction on the margin (after due reflection)? (I’m not saying SIAI clearly wins, I just want to know what else you’re thinking about.)
Have you read Nick Bostrom’s paper, Astronomical Waste? You don’t have to be able to affect the probabilities by very much for existential risk to be the thing to worry about, especially if you have a decent dose of credence in utilitarianism.
Is there a decent chance, in your view, of decreasing x-risk by 10^-18 if you put all of your resources into it? That could be enough.
I agree with you about what you say above. I personally believe that it is possible for individuals to decrease existential risk by more than 10^(-18) (though I know reasonable people who have at one time or other thought otherwise).
If you’re sold on x-risk, are there other candidate efforts that might offer higher expected x-risk reduction on the margin (after due reflection)? (I’m not saying SIAI clearly wins, I just want to know what else you’re thinking about.)
Two points to make here:
(i) Though there’s huge uncertainty in judging these sorts of things and I’m by no means confident in my view on this matter, I presently believe that SIAI is increasing existential risk through unintended negative consequences. I’ve written about this in various comments, for example here, here and here.
(ii) I’ve thought a fair amount about other ways in which one might hope to reduce existential risk. I would cite the promotion and funding of an asteroid strike prevention program as a possible candidate. As I discuss here, placing money in a donor advised fund may be the best option. I wrote out much more detailed thoughts on these points which I can send you by email if you want (just PM me) but which are not yet ready for posting in public.
I agree that ‘poisoning the meme’ is a real danger, and that SIAI has historically had both positives and negatives with respect to its reputational effects. My net expectation for it at the moment is positive, but I’ll be interested to hear your analysis when it’s ready. [Edit: apparently the analysis was about asteroids, not reputation.]
Here’s the Fidelity Charitable Gift Fund for Americans. I’m skeptical about asteroid defense in light of recent investments in that area and the technology curve, although there is potential for demonstration effects (good and bad) with respect to more likely risks.
read Moral Machines for current state of the art thinking on how to build a moral machine mind.
It’s hardly that. Moral Machines is basically a survey; it doesn’t go in-depth into anything, but it can point you in the direction of the various attempts to implement robot / AI morality.
And Eliezer is one of the people it mentions, so I’m not sure how that recommendation was supposed to advise against taking him seriously. (Moral Machines, page 192)
To follow up on this, Wendell specifically mentions EY’s “friendly AI” in the intro to his new article in the Ethics and Information Technology special issue on “Robot ethics and human ethics”.
[...] many reasons to doubt [...] belief system of a cult [...] haphazard musings of a high school dropout [...] never written a single computer program [...] professes to be an expert [...] crying chicken little [...] only a handful take the FAI idea seriously.
[...] dogma [...] ignore the uncertainties at every step [...] starting a church [...] religious thinking wrapped up to look like rationality.
I am unable to take this criticism seriously. It’s just a bunch of ad hominem and hand-waving. What are the reasons to doubt? How are they ignoring the uncertainties when they list them on their webpage and bring them up in every interview? How is a fiercely atheist group religious at all? How is it a cult (there are lots of posts about this in the LessWrong archive)? How is it irrational?
Edit: And I’m downvoted. You actually think a reply that’s 50% insult and emotionally loaded language has substance that I should be engaging with? I thought it was a highly irrational response on par with anti-cryonics writing of the worst order. Maybe you should point out the constructive portion.
The response by this individual seems like a summary, rather than an argument. The fact that someone writes a polemical summary of their views on a subject doesn’t tell us much about whether their views are well-reasoned or not. A polemical summary is consistent with being full of hot air, but it’s also consistent with having some damning arguments.
Of course, to know either way, we would have to hear this person’s actual arguments, which we haven’t, in this case.
How are they ignoring the uncertainties when they list them on their webpage and bring them up in every interview?
Just because a certain topic is raised, doesn’t mean that it is discussed correctly.
How is a fiercely atheist group religious at all?
The argument is that their thinking has some similarities to religion. It’s a common rhetorical move to compare any alleged ideology to religion, even if that ideology is secular.
How is it a cult (there are lots of posts about this in the LessWrong archive)?
The fact that EY displays an awareness of cultish dynamics doesn’t necessarily mean that SIAI avoids them. Personally, I buy most of Eliezer’s discussion that “every cause wants to become a cult,” and I don’t like the common practice of labeling movements as “cults.” The net for “cult” is being drawn far too widely.
Yet I wouldn’t say that the use of the word “cult” means that the individual is engaging in bad reasoning. While I think “cult” is generally a misnomer, it’s generally used as short-hand for a group having certain problematic social-psychological qualities (e.g. conformity, obedience to authority). The individual could well be able to back those criticisms up. Who knows.
We would need to hear this individual’s actual arguments to be able to evaluate whether the polemical summary is well-founded.
P.S. I wasn’t the one who downvoted you.
Edit:
high school dropout, who has never written a single computer program
I don’t know the truth of these statements. The second one seems dubious, but it might not be meant to be taken literally (“Hello World” is a program). If Eliezer isn’t a high school dropout, and has written major applications, then the credibility of this writer is lowered.
I believe you weren’t supposed to engage that reply, which is a dismissal more than criticism. I believe you were supposed to take a step back and use it as a hint as to why the SIAI’s yearly budget is 5 x 10^5 rather than 5 x 10^9 USD.
Re: “How is it a cult?”
It looks a lot like an END OF THE WORLD cult. That is a well-known subspecies of cult—e.g. see:
http://en.wikipedia.org/wiki/Doomsday_cult
“The End of the World Cult”
http://www.youtube.com/watch?v=-3uDmyGq8Ok
The END OF THE WORLD acts as a superstimulus to human fear mechanisms—and causes caring people to rush to warn their friends of the impending DOOM—spreading the panic virally. END OF THE WORLD cults typically act by stimulating this energy—and then feeding from it. The actual value of p(DOOM) is not particularly critical for all this.
The net effect on society of the FEARMONGERING that usually results from such organisations seems pretty questionable. Some of those who become convinced that THE END IS NIGH may try and prevent it - but others will neglect their future plans, and are more likely to rape and pillage.
My “DOOM” video has more—http://www.youtube.com/watch?v=kH31AcOmSjs
Slight sidetrack:
There is, of course, one DOOM scenario (ok, one other DOOM scenario) which is entirely respectable here—that the earth will be engulfed when the sun becomes a red giant.
That fate for the planet haunted me when I was a kid. People would say “But that’s billions of years in the future” and I’d feel as though they were missing the point. It’s possible that a more detailed discussion would have helped....
Recently, I’ve read that school teachers have a standard answer for kids who are troubled by the red giant scenario [1]-- that people will have found a solution by then.
This seems less intellectually honest than “The human race will be long gone anyway”, but not awful. I think the most meticulous answer (aside from “that’s the far future and there’s nothing to be done about it now”) is “that’s so far in the future that we don’t know whether people will be around, but if they are, they may well find a solution.”
[1] I count this as evidence for the Flynn Effect.
Downvoted for this.
Re: “haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI.”
This opinion sounds poorly researched—e.g.: “This document was created by html2html, a Python script written by Eliezer S. Yudkowsky.”—http://yudkowsky.net/obsolete/plan.html
I posted that quote to put into perspective what others think of EY and his movement compared to what he thinks about them. Given that he thinks the same about those people, i.e. that their opinion isn’t worth much and that the LW crowd is much smarter anyway, it highlights an important aspect of the almost non-existent cooperation between him and academia.
I don’t think one possibly-trivial Python script (to which I am unable to find source code) counts as much evidence. It sets a lower bound, but a very loose one. I have no idea whether Eliezer can program, and my prior says that any given person is extremely unlikely to have real programming ability unless proven otherwise. So I assume he can’t.
He could change my mind by either publishing a large software project, or taking a standardized programming test such as a TopCoder SRM and publishing his score.
EDIT: This is not meant to be a defense of obviously wrong hyperbole like “has never written a single computer program”.
Eliezer has faced this criticism before and responded (somewhere!). I expect he will figure out coding. I got better at programming over the first 15 years I was doing it. So: he may also take a while to get up to speed. He was involved in this:
http://flarelang.sourceforge.net/
This isn’t contrary to Robin’s post (except what you say about cryonics.) Robin was saying that there is a reluctance to criticize those things in part because the experts think they are not worth bothering with.