The mainstream press has now picked up on Musk’s recent statement. See e.g. this Daily Mail article: ‘Elon Musk claims robots could kill us all in FIVE YEARS in his latest internet post…’
XiXiDu
Is this a case of multiple discovery?[1] And might something similar happen with AGI? Here are four projects that have concurrently developed very similar-looking models:
(1) University of Toronto: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models
(2) Baidu/UCLA: Explain Images with Multimodal Recurrent Neural Networks
(3) Google: A Neural Image Caption Generator
(4) Stanford: Deep Visual-Semantic Alignments for Generating Image Descriptions
[1] The concept of multiple discovery is the hypothesis that most scientific discoveries and inventions are made independently and more or less simultaneously by multiple scientists and inventors.
What are you worried he might do?
Start a witch hunt against the field of AI? Oh wait...he’s kind of doing this already.
If he believes what he’s said, he should really throw lots of money at FHI and MIRI.
Seriously? How much money do they need to solve “friendly AI” within 5-10 years? Or else, what are their plans? If what MIRI imagines will happen within at most 10 years, then I strongly doubt that throwing money at MIRI will make a difference. You’d need people like Musk who can directly contact and convince politicians, or summon up the fears of the general public, in order to force politicians to notice and take action.
I wonder what would have been Musk’s reaction had he witnessed Eurisko winning the United States Traveller TCS national championship in 1981 and 1982. Or if he had witnessed Schmidhuber’s universal search algorithm solving Towers of Hanoi on a desktop computer in 2005.
The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.
If he is seriously convinced that doom might be no more than 5 years away, then I share his worries about what an agent with massive resources at its disposal might do in order to protect itself. It’s just that in my case this agent is called Elon Musk.
A chiropractor?
Am I delusional, or am I correct in thinking chiropractors are practitioners of something a little above bloodletting and way below actual modern medicine?
...
However, I haven’t done any real research on this subject. The idea that chiropractors are practicing sham medicine is just background knowledge, and I’m not really sure where I picked it up.
Same for me. I was a little bit shocked to read that someone on LessWrong goes to a chiropractor. But for me this attitude is also based on something I considered to be common knowledge, such as astrology being pseudoscience. And the Wikipedia article on chiropractic did not change this attitude much.
Do “all those who have recently voiced their worries about AI risks” actually believe we live in a simulation in a mathematical universe? (“Or something along these lines...”?)
Although I don’t know enough about Stuart Russell to be sure, he seems rather down to earth. Shane Legg also seems reasonable. So does Laurent Orseau. With the caveat that these people also seem much less extreme in their views on AI risks.
I certainly do not want to discourage researchers from being cautious about AI. But what currently happens seems to be the formation of a loose movement of people who reinforce their extreme beliefs about AI by mutual reassurance.
There are whole books now about this topic. What’s missing are the empirical or mathematical foundations. It just consists of non-rigorous arguments that are at best internally consistent.
So even if we were only talking about sane domain experts, if they solely engage in unfalsifiable philosophical musings then the whole endeavour is suspect. And currently I don’t see them making any predictions that are less vague and more useful than the second coming of Jesus Christ. There will be an intelligence explosion by a singleton with a handful of known characteristics revealed to us by Omohundro and repeated by Bostrom. That’s not enough!
Have you read Basic AI Drives? I remember reading it when it was posted on boingboing.net, way before I had even heard of MIRI. Like Malthus’s arguments, it just struck me as starkly true.
I don’t know what you are trying to communicate here. Do you think that mere arguments, pertaining to something that not even the relevant experts understand at all, entitle someone to demonize a whole field?
The problem is that armchair theorizing can at best yield very weak decision-relevant evidence. You don’t just tell the general public that certain vaccines cause autism, that genetically modified food is dangerous, or scare them about nuclear power... you don’t do that if all you have are arguments that you personally find convincing. What you do is hard empirical science in order to verify your hunches and eventually reach a consensus among experts that your fears are warranted.
I am aware of many of the tactics that the sequences employ to dismiss the above paragraph: tactics such as reversing the burden of proof, conjecturing arbitrary amounts of expected utility, etc. All of these tactics are suspect.
Do you have some convincing counterarguments?
Yes, and they are convincing enough to me that I dismiss the claim that with artificial intelligence we are summoning the demon.
Mostly the arguments made by AI risk advocates suffer from being detached from an actual grounding in reality. You can come up with arguments that make sense in the context of your hypothetical model of the world, in which all the implicit assumptions you make turn out to be true, but which might actually be irrelevant in the real world. AI drives are an example here. If you conjecture the sudden invention of an expected utility maximizer that quickly makes huge jumps in capability, then AI drives are much more of a concern than, e.g., within the context of a gradual development of tools that become more autonomous due to their increased ability to understand and do what humans mean.
Musk’s accomplishments don’t necessarily make him an expert on the demonology of AIs. But his track record suggests that he has a better-informed and better-organized way of thinking about the potential of technology than Carrico’s.
Would I, epistemically speaking, be better off adopting the beliefs held by all those who have recently voiced their worries about AI risks? If I did that, then I would end up believing that I was living in a simulation, in a mathematical universe, and that within my lifetime, thanks to radical life extension, I could hope to rent an apartment on a seastead on the high seas of a terraformed Mars. Or something along these lines...
The common ground between those people seems to be that they all hold weird beliefs, beliefs that someone who has not been indoctrinated...cough...educated by the sequences has a hard time taking seriously.
Could you provide examples of advanced math that you were unable to learn? Why do you think you failed?
I appreciate having Khan Academy for looking up math concepts on which I need a refresher, but I’ve heard (or maybe just assumed?) that the higher-level teaching was a bit mediocre. You disagree?
Comparing Khan Academy’s linear algebra course to the free book that I recommended, I believe that Khan Academy will be more difficult to understand if you don’t already have some background knowledge of linear algebra. This is not true for the calculus course though. Comparing both calculus and linear algebra to the books I recommend, I believe that Khan Academy only provides a rough sketch of the topics with much less rigor than can be found in books.
Regarding the quality of Khan Academy: I believe it varies between excellent and mediocre. But I haven’t read enough rigorous material to judge this confidently.
The advantage of Khan Academy is that you get a quick and useful overview. There are books that are also concise and provide an overview, often in the form of so-called lecture notes. But they are incredibly difficult to understand (they assume a lot of prerequisites).
As a more rigorous alternative to Khan Academy try coursera.org.
What’s the value of taking classes in math vs. teaching myself (or maybe teaching myself with the occasional help of a tutor)?
I’ve never attended a class or gotten the help of a tutor. I think you can do just fine without one if you use Google and test your knowledge with solved-problem books. There are a lot of such books:
Some massive open online courses now offer personal tutors if you pay a monthly fee. udacity.com is one example here.
I also want to add the following recommendations to my original sequence, since you specifically asked about Bayesian statistics:
Doing Bayesian Data Analysis: A Tutorial with R and BUGS (new version will be released in November)
I am not sure about the prerequisites you need for “rationality”, but take a look at the following resources:
(1) Schaum’s Outline of Probability, Random Variables, and Random Processes:
The background required to study the book is one year of calculus, elementary differential equations, matrix analysis...
(2) udacity’s Intro to Artificial Intelligence:
Some of the topics in Introduction to Artificial Intelligence will build on probability theory and linear algebra.
(3) udacity’s Machine Learning: Supervised Learning :
A strong familiarity with Probability Theory, Linear Algebra and Statistics is required.
My suggestion is to use khanacademy.org in the following order: Precalculus->Differential calculus->Integral calculus->Linear Algebra->Multivariable calculus->Differential equations->Probability->Statistics.
If you prefer books:
A First Course in Linear Algebra (is free and also teaches proof techniques)
Calculus On Manifolds: A Modern Approach To Classical Theorems Of Advanced Calculus
Ordinary Differential Equations (Dover Books on Mathematics)
Schaum’s Outline of Probability, Random Variables, and Random Processes
Statistics comes last; here is why. Take, for example, the derivation of the least-squares regression line. You will at least need to understand how to take partial derivatives and solve systems of equations.
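As a minimal illustration of that point (this is just the standard simple linear regression setup, not taken from any particular book above): to fit the line y = a + bx by least squares you minimize the squared error

S(a, b) = \sum_{i=1}^{n} (y_i - a - b x_i)^2,

which means setting both partial derivatives to zero,

\frac{\partial S}{\partial a} = -2 \sum_{i=1}^{n} (y_i - a - b x_i) = 0, \qquad \frac{\partial S}{\partial b} = -2 \sum_{i=1}^{n} x_i (y_i - a - b x_i) = 0,

and then solving the resulting system of two linear equations (the normal equations)

\sum_i y_i = n a + b \sum_i x_i, \qquad \sum_i x_i y_i = a \sum_i x_i + b \sum_i x_i^2

for a and b. Without partial derivatives and systems of equations, even this standard piece of statistics is out of reach.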
(Note: Books 4-7 are based on my personal research on what to read. I haven’t personally read those particular books yet. But they are praised a lot and relatively cheap and concise.)
So, this “Connection Theory” looks like run-of-the-mill crackpottery. Why are people paying attention to it?
From the post:
“I don’t feel confident assigning less than a 1% chance that it’s correct — and if it works, it would be super valuable. Therefore it’s very high EV!”
Sounds like a parody of MIRI.
What I meant by distancing LessWrong from Eliezer Yudkowsky is to become more focused on actually getting things done rather than rehashing Yudkowsky’s cached thoughts.
LessWrong should finally start focusing on trying to solve concrete and specific technical problems collaboratively. Not unlike what the Polymath Project is doing.
To do so, LessWrong has to squelch all the noise by no longer caring about getting more members and by strongly moderating non-technical, off-topic posts.
I am not talking about censorship here; I am talking about something unproblematic. Once the aim of LessWrong is clear, namely to tackle technical problems, moderation becomes an understandable necessity. And I’d be surprised if any moderation were necessary once only highly technical problems are discussed.
Doing this will make people hold LessWrong in high esteem, because nothing is as effective at proving that you are smart and rational as getting things done.
ETA: How about trying to solve the Pascal’s mugging problem? It’s highly specific, technical, and does pertain to rationality.
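For readers unfamiliar with it, here is a minimal sketch of why the problem is hard (the numbers are arbitrary toy figures): a mugger demands $5 and claims that otherwise he will cause a disaster worth 10^{100} units of disutility. Even if you assign his claim a probability as tiny as p = 10^{-50}, a naive expected utility calculation says

\mathbb{E}[U(\text{pay})] - \mathbb{E}[U(\text{refuse})] \approx p \cdot 10^{100} - U(\$5) = 10^{50} - U(\$5) > 0,

so you should pay, and keep paying whenever the mugger names a sufficiently large number. The technical problem is to specify a decision procedure that refuses the mugger without breaking expected utility maximization everywhere else.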
Of course, mentioning the articles on ethical injunctions would be too boring.
It’s troublesome how ambiguous the signals are that LessWrong is sending on some issues.
On the one hand LessWrong says that you should “shut up and multiply, to trust the math even when it feels wrong”. On the other hand Yudkowsky writes that he would sooner question his grasp of “rationality” than give five dollars to a Pascal’s Mugger because he thought it was “rational”.
On the one hand LessWrong says that whoever knowingly chooses to save one life, when they could have saved two—to say nothing of a thousand lives, or a world—they have damned themselves as thoroughly as any murderer. On the other hand Yudkowsky writes that ends don’t justify the means for humans.
On the one hand LessWrong stresses the importance of acknowledging a fundamental problem and saying “Oops”. On the other hand Yudkowsky tries to patch a framework that is obviously broken.
Anyway, I worry that the overall message LessWrong sends is that of naive consequentialism based on back-of-the-envelope calculations, rather than the meta-level consequentialism that contains itself when faced with too much uncertainty.
Since LW is going to get a lot of visitors someone should put an old post that would make an excellent first impression in a prominent position. I nominate How to Be Happy.
The problem isn’t that easy to solve. Consider that MIRI, then still called SIAI, already had a bad name before Roko’s post, and before I ever voiced any criticism. Consider this video from an actual AI conference in March 2010, a few months before Roko’s post. Someone in the audience makes the following statement:
Whenever I hear the Singularity Institute talk I feel like they are a bunch of religious nutters...
Or consider the following comment by Ben Goertzel from 2004:
Anyway, I must say, this display of egomania and unpleasantness on the part of SIAI folks makes me quite glad that SIAI doesn’t actually have a viable approach to creating AGI (so far, anyway…).
And this is Yudkowsky’s reply:
[...] Striving toward total rationality and total altruism comes easily to me. [...] I’ll try not to be an arrogant bastard, but I’m definitely arrogant. I’m incredibly brilliant and yes, I’m proud of it, and what’s more, I enjoy showing off and bragging about it. I don’t know if that’s who I aspire to be, but it’s surely who I am. I don’t demand that everyone acknowledge my incredible brilliance, but I’m not going to cut against the grain of my nature, either. The next time someone incredulously asks, “You think you’re so smart, huh?” I’m going to answer, “Hell yes, and I am pursuing a task appropriate to my talents.” If anyone thinks that a Friendly AI can be created by a moderately bright researcher, they have rocks in their head. This is a job for what I can only call Eliezer-class intelligence.
LessWrong would have to somehow distance itself from MIRI and Eliezer Yudkowsky.
Also, the debate is not about a UFAI but an FAI that optimizes the utility function of general welfare with TDT.
Roko’s post explicitly mentioned trading with unfriendly AIs.
Eliezer Yudkowsky’s reasons for banning Roko’s post have always been somewhat vague. But I don’t think he did it solely because it could cause some people nightmares.
(1) In one of his original replies to Roko’s post (please read the full comment, it is highly ambiguous) he states his reasons for banning Roko’s post, and for writing his comment (emphasis mine):
I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I’m not sure I know the sufficient detail.)
…and further…
For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.
His comment indicates that he doesn’t believe that this could currently work. Yet he also does not seem to dismiss some current and future danger. Why didn’t he clearly state that there is nothing to worry about?
(2) The following comment by Mitchell Porter, to which Yudkowsky replies “This part is all correct AFAICT.”:
It’s clear that the basilisk was censored, not just to save unlucky susceptible people from the trauma of imagining that they were being acausally blackmailed, but because Eliezer judged that acausal blackmail might actually be possible. The thinking was: maybe it’s possible, maybe it’s not, but it’s bad enough and possible enough that the idea should be squelched, lest some of the readers actually stumble into an abusive acausal relationship with a distant evil AI.
If Yudkowsky really thought it was irrational to worry about any part of it, why didn’t he allow people to discuss it on LessWrong, where he and others could debunk it?
What I meant is that he and others will cause the general public to adopt a perception of the field of AI that is comparable to the public perception of GMOs, vaccination, nuclear power, etc.: a non-evidence-backed fear of something that is generally benign and positive.
He could have used his influence and reputation to directly contact AI researchers or e.g. hold a quarterly conference about risks from AI. He could have talked to policy makers on how to ensure safety while promoting the positive aspects. There is a lot you can do. But making crazy statements in public about summoning demons and comparing AI to nukes is just completely unwarranted given the current state of evidence about AI risks, and will probably upset lots of AI people.
I doubt that he is that stupid. But I do believe that certain people, if they were to seriously believe in doom by AI, would consider violence to be an option. John von Neumann was in favor of a preventive nuclear attack against Russia. Do you think that if von Neumann were still around and thought that Google would within 5-10 years launch a doomsday device, he would refrain from using violence if he thought that only violence could stop them? I believe that if the U.S. administration was highly confident that e.g. some Chinese lab was going to start an intelligence explosion by tomorrow, they would consider nuking it.
The problem here is not that it would be wrong to deactivate a doomsday device forcefully, if necessary, but rather that there are people out there who are stupid enough to use force unnecessarily or decide to use force based on insufficient evidence (evidence such as claims made by Musk).
ETA: Just take those people who destroy GMO test fields. Musk won’t do something like that. But other people, who would commit such acts, might be inspired by his remarks.