I submitted the following article to the existential risk mailing list run by the IEET: MIT political scientists demonstrate how much candidate appearances affect election outcomes, globally.
James J. Hughes wrote me an e-mail saying:
“Without wishing to be harsh, I’ll say that I don’t see what this comment adds to the discussion.”
My reply:
I thought this does hint at possible dangers of empowering large numbers of uneducated people, e.g. in the case of a global democracy. It also shows that people concerned with the reduction of existential risks should partly focus on educating the public about cognitive biases if they plan to tackle such problems by democratic means. Further, it means that organisations concerned with policy making regarding global catastrophic risks should employ highly charismatic individuals for public relations, given the importance of appearances.
I’d also like to post another submission I wrote in response to some tense debate between members of the IEET and the SIAI regarding public and academic relations within the x-risk faction:
I’m seriously trying to be friendly. I do not side with anyone right now.
But have some of you ever read any of EY’s writings, e.g. over at
lesswrong.com / yudkowsky.net?
What have I missed? This discussion seems emotionally overblown. Makes me
wonder if Yudkowsky actually uttered ‘really stupid things’, or if it was
just an overreaction between people who are not neurotypical.
Whatever they are, EY and Anissimov are not dumb. If EY farts during lunch
or commits otherwise disgusting acts, it does not diminish the good he does.
If a murderer proclaims that murder is wrong, the fact that he is a murderer
does not negate the conclusion.
What I’m trying to say is that too many personal issues seem to play too
much of a role in your circles.
I’m a complete outsider from Germany, with no formal education. And you know
how your movements (SIAI, IEET...) appear to me? Here’s a quote from a
friend who pretty much summed it up:
“I rather think that it is their rationality that is at risk here...somehow
in most of their discussions it is the large questions that seem to
disappear into the infinite details of risk analysis; besides, the kind of
paranoia they emanate is a kind of chronic fatigue symptom of
over-saturated minds...”
You need to get back to the basics. Step back and concentrate on the
important issues. Much of the Transhumanist-AI-Singularity movement seems to
be drowning in internal conflict over minor issues. All of the subgroups
have something in common, while none is strong enough on its own to achieve
sufficient impact. First and foremost, you have to gather momentum together.
Deciding upon the details is something to be deferred in favor of
cooperation on the central issues of public awareness of existential risks
and the importance of responsible scientific research and ethical progress.
Step back and look at what you are doing. You indulge in tearful debates
over semantics. That is insignificant unless you have a particular goal in
mind, such as convincing the public of a certain idea by means of
rhetoric.
Interesting, this post does indeed appear on the first page of results when searching for “existential risk”:
http://www.google.com/search?q=existential+risk
I don’t think it is inappropriate, though, since public relations are critical to tackling such problems.