It seems that in 2014 he believed that p(doom) was less than 20%.
I do expect some of the potential readers of this post to live in a very unsafe environment—e.g. parts of current-day Ukraine, or if they live together with someone abusive—where they are actually in constant danger.
I live ~14 kilometers from the front line, in Donetsk. Yeah, it’s pretty… stressful.
But I think I’m much more likely to be killed by an unaligned superintelligence than an artillery barrage.
Most people survive urban battles, so I have a good chance.
And in fact, many people worry even less than I do! People get tired of feeling in danger all the time.
‘“Then why are you doing the research?” Bostrom asked.
“I could give you the usual arguments,” Hinton said. “But the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”’
‘I asked Hinton if he believed an A.I. could be controlled. “That is like asking if a child can control his parents,” he said. “It can happen with a baby and a mother—there is biological hardwiring—but there is not a good track record of less intelligent things controlling things of greater intelligence.” He looked as if he might elaborate. Then a scientist called out, “Let’s all get drinks!”’
Hinton seems to be more responsible now!
The level of concern and seriousness I see from ML researchers discussing AGI on any social media platform or in any mainstream venue seems wildly out of step with “half of us think there’s a 10+% chance of our work resulting in an existential catastrophe”.
In fairness, this is not quite half of the researchers. It is half of those who agreed to take the survey.
I expect that worried researchers are more likely to agree to participate in the survey.
Thanks for your answer, this is important to me.
I am not an American (so excuse my bad English!), so my opinion about the acceptability of attacking US data centers does not matter much. This is not my country.
But reading about the bombing of Russian data centers as an example was unpleasant. It sounds like Western bias to me. And not only to me.
If the text is aimed at readers beyond the First World countries, then perhaps the authors should add a clarification like the one you made! Then it would not look like political hypocrisy. Or they could avoid writing about airstrikes altogether, since people get distracted discussing them.
I’m not an American, so my consent doesn’t mean much :)
Suppose China and Russia accepted Yudkowsky’s initiative, but the USA did not. Would you support bombing an American data center?
I can provide several links, and you can choose those that are suitable. If any are. The problem is that I kept not the most complete justifications, but the most … definite and brief ones. I will try not to repeat those that are already in the answers here.
Jaron Lanier and Neil Gershenfeld
Magnus Vinding and his list
Maybe Abram Demski? But he changed his mind, probably.
Well, Stuart Russell. But this is a book. I can quote:
‘I do think that I’m an optimist. I think there’s a long way to go. We are just scratching the surface of this control problem, but the first scratching seems to be productive, and so I’m reasonably optimistic that there is a path of AI development that leads us to what we might describe as “provably beneficial AI systems.”’
There are also a large number of reasonable people who directly called themselves optimists or assigned a relatively small probability to death from AI. But they usually did not justify this in ~500 words…
I also recommend this book.
My fault. I should have just copied individual quotes and links here.
I have collected many quotes with links about the prospects of AGI. Most people were optimistic.
Glad you understood me. Sorry for my English!
Of course, the following examples do not by themselves prove that the entire problem of AGI alignment can be solved! But it seems to me that this direction is interesting and greatly underrated. Well, at least someone smarter than me can look at this idea and say it is bullshit.
Partly this is a source of intuition for me that creating an aligned superintelligence is possible. And maybe not even as hard as it seems.
We have many examples of creatures that follow the goals of someone stupider than them. And the mechanism responsible for this should not be very complex. Such a stupid process as natural selection was able to create these capabilities. It must be achievable for us too.
It seems to me that the brains of many animals can be aligned with the goals of something much stupider than themselves.
People and pets. Parasites and animals. Even ants and fungus.
Perhaps the relationship we would like to have with a superintelligence is already observed on a much smaller scale.
I apologize for the stupid question. But…
Do we have a better chance of surviving in a world that is closer to Orwell’s 1984?
It seems to me that we are moving towards more global surveillance and control. China’s regime in 2021 may seem extremely liberal to an observer in 2040.
It seems I misused the term ‘gray goo’. I apologize for that and for my bad English.
Is it possible to replace it with ‘using nanotechnology to attain a decisive strategic advantage’?
I mean the discussion of the prospects for nanotechnology on SL4 20+ years ago. Especially this:
‘My current estimate, as of right now, is that humanity has no more than a 30% chance of making it, probably less. The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015.’
I understand that the views of EY have changed in many ways since then. But I am interested in experts’ views on the possibility of using nanotechnology for the scenarios he implies now. I have found very little on this.
Nanosystems are definitely possible, if you doubt that read Drexler’s Nanosystems and perhaps Engines of Creation and think about physics.
Are there any survey results from experts on the feasibility of Drexlerian nanotechnology? Is there any consensus among specialists about the possibility of a gray goo scenario?
In the past, both Drexler and Yudkowsky greatly overestimated the impact of molecular nanotechnology.
I do not know the experts’ opinions on this issue. And I lack the competence for such conclusions myself, sorry.
I have already tried to gather the most complete collection of quotes here. But it is very outdated by now.