David Chalmers has been writing and presenting to philosophers about AI and intelligence explosion since giving his talk at last year’s Singularity Summit. He estimates the probability of human-level AI by 2100 at “somewhat more than one-half,” thinks an intelligence explosion following that quite likely, and considers the possible disastrous consequences quite important relative to other major causes today. However, he had not previously written or spoken publicly about these views, and probably would not have for quite some time had he not been invited to the Summit.
He reports a stigma around the topic, the result of science-fiction associations combined with the early failures of AI, and says some impetus is needed to brave it. Within the AI field there is also a fear that discussion of long-term risks, or of unlikely short-term risks, may provoke hostile reactions against the field, thanks to public ignorance and the affect heuristic. Comparisons are made to the genetic engineering of agricultural crops, where public attention seems to have been harmful on net, unduly slowing the development of more productive plants.
Thanks. This is more of what I think you call rational evidence, from an outsider. But it doesn’t answer the primary question of my post. How do you people arrive at the estimates you state? Where can I find the details of how you reached your conclusions about the likelihood of those events?
If all this were merely philosophy, I wouldn’t inquire into it to such an extent. But the SIAI is asking for the better part of your income and resources. There are strong claims being made by Eliezer Yudkowsky, and calls for action. Is it reasonable to follow, given the current state of evidence?
But the SIAI is asking for the better part of your income and resources.
If you are a hard-core consequentialist altruist who doesn’t balance against other less impartial desires you’ll wind up doing that eventually for something. Peter Singer’s “Famine, Affluence, and Morality” is decades old, and there’s still a lot of suffering to relieve. Not to mention the Nuclear Threat Initiative, or funding research into DNA vaccines, or political lobbying, etc. The question of how much you’re willing to sacrifice in exchange for helping various numbers of people or influencing extinction risks in various ways is separate from data about the various options. No one is forcing you to reduce existential risk (except insofar as tax dollars go to doing so), certainly not to donate.
I’ll have more to say on substance tomorrow, but it’s getting pretty late. My tl;dr take would be that with pretty conservative estimates of total AI risk, combined with the lack of short-term motives to address it (the threat of near-term, moderate-scale bioterrorism drives research into defenses, not the fear of extinction-level engineered plagues; asteroid defense is motivated more by the threat of civilization- or country-wreckers than by the less common extinction-level events; nuclear risk reduction was really strong only in the face of the Soviets, and today the focus is still more on nuclear terrorism, proliferation, and small-scale wars; climate change benefits from being visibly already happening and from a social movement built over decades in tandem with the existing environmentalist movement), there are still low-hanging fruit to be plucked. [That parenthetical aside somewhat disrupted the tl;dr billing, oh well...] When we get to the point where a sizable contingent of skilled folk in academia and elsewhere have gotten well into those low-hanging fruit, and key decision-makers in the relevant places are likely to have access to them in the event of surprisingly quick progress, that calculus will change.
It seems obvious why those at the top of charity pyramids support utilitarian ethics—their funding depends on it. The puzzle here is why they find so many suckers to exploit.
One might think that those who were inclined to give away their worldly goods to help the needy would have bred themselves out of the gene pool long ago—but evidently that is not the case.
Perhaps one can invoke the unusual modern environment. Maybe in the ancestral environment helping others was more beneficial, since the high chance of repeated interactions made reciprocal altruism work better. However, if people donate to help feed starving millions halfway around the world, the underlying maths no longer adds up, and what was previously adaptive behaviour leads to failure in modern situations: maladaptive behaviour as a result of an unfamiliar environment.
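To spell the “maths” out (an illustrative sketch with hypothetical symbols; the comment above only gestures at it), reciprocal altruism roughly pays its way when the expected return of a favour exceeds its cost:

$$ p_{\text{repeat}} \cdot b_{\text{returned}} > c_{\text{help}} $$

where $p_{\text{repeat}}$ is the chance of meeting the recipient again, $b_{\text{returned}}$ the benefit of having the favour reciprocated, and $c_{\text{help}}$ the cost of helping. In a small ancestral band $p_{\text{repeat}}$ is close to one, so the inequality can hold; for an anonymous donation to strangers halfway around the world it is near zero, so the inequality fails even though the evolved disposition to help remains.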
One might expect good parents to work to keep their kids away from utilitarian cults—which feed off the material resources of their members—on the grounds that such organisations may systematically lead to a lack of grandchildren. “Interventions” may be required to extricate the entangled offspring from the feeding tentacles of these parasitic entities that exploit people’s cognitive biases for their own ends.
It seems obvious why those at the top of charity pyramids support utilitarian ethics—their funding depends on it. The puzzle here is why they find so many suckers to exploit.
This reads like an attack on utilitarian ethics, but there’s an extra inferential step in the middle which makes it compatible with utilitarian ethics being correct. Are you claiming that utilitarian ethics are wrong? Are you claiming that most charities are actually fraudulent and don’t help people?
“charity pyramid” … “good parents work to keep their kids away” … “utilitarian cults” … “feeding tentacles of these parasitic entities that exploit … for their own ends”
Wow, my propagandometer is pegged. Why did you choose this language? Isn’t exploiting people for their own ends incompatible with being utilitarian? Do you have any examples of charities structured like pyramid schemes, or as cults?
“Are you claiming that utilitarian ethics are wrong?”
“Right” and “wrong” are usually concepts that are applied with respect to an ethical system. Which ethical system am I expected to assume when trying to make sense of this question?
“Are you claiming that most charities are actually fraudulent and don’t help people?”
No—I was not talking about that.
“Isn’t exploiting people for their own ends incompatible with being utilitarian?”
If a charity’s goals include “famine relief”, then considerable means would be justified by that—within a utilitarian framework.
“Charity pyramids” was a loosely chosen term. There may be some pyramid structure, but the image I wanted to convey was of a cause with leader(s) preaching the virtues of utilitarianism, supported in their role by a “base” of “suckers”: individuals who are being duped into giving many of their resources to the cause.
Superficially, the situation represents a bit of a Darwinian puzzle: Are the “suckers” being manipulated? Have they been hypnotised? Do they benefit in some way by the affiliation? Are they fooled into treating the cause as part of their extended family? Are they simply broken? Do they aspire to displace the leader? Have their brains been hijacked by pathogenic memes? What is going on?
It seems obvious why those at the top of charity pyramids support utilitarian ethics—their funding depends on it. The puzzle here is why they find so many suckers to exploit.
It helps that just pointing out observations like this is almost universally punished. Something to do with people at the top of pyramids having more power...
For my part I would upvote your comment another few times if I could, but I note that someone else has downvoted you.
Another aspect of it is that people try to emulate charismatic leaders, in the hope of reproducing their success. If the guru says to give everything to the guru, then the followers sometimes comply, because it is evident that the guru has things sussed and is someone to be copied and emulated. Sometimes this strategy works, and it is possible for a cooperative follower to rise to power within the cult. However, if the gurus’ success is largely down to their skill at feeding off their followers, the gurus are often heavily outnumbered.