Q&A #2 with Singularity Institute Executive Director
Just over a month ago I posted a call for questions about the Singularity Institute. The reaction to my video response was positive enough that I’d like to do another one — though I can’t promise video this time. I think that the Singularity Institute has a lot of transparency “catching up” to do.
The Rules (same as before)
1) One question per comment (to allow voting to carry more information about people’s preferences).
2) Try to be as clear and concise as possible. If your question can’t be condensed into one paragraph, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).
3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about the Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov and in Eliezer’s Singularity Summit 2011 talk.
4) Please provide links to things referenced by your question.
5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin preparing responses to.
I might respond to certain questions within the comments thread; for example, when there is a one-word answer to the question.
You may repeat questions that I did not answer in the first round, and you may ask follow-up questions to the answers I gave in round one.
In the previous video, you said that publishing in mainstream journals might be a waste of time, due to the amount of “post-production” involved. In addition, you said that SIAI would prefer to keep its AGI research secret—otherwise, someone might read it, implement the un-Friendly AGI, and doom us all. You followed that up by saying that SIAI is more interested in “technical problems in mathematics, computer science, and philosophy” than in experimental AI research.
In light of the above, what does the SIAI actually do? You don’t submit your work to rigorous scrutiny by your peers in the field (that would require peer review); you either aren’t doing any AGI research, or are keeping it so secret that no one knows about it (which makes it impossible to gauge your progress, if any); and you aren’t developing any practical applications of AI, either (since you’d need experimentation for that). So, what is it that you are actually working on, other than growing the SIAI itself?
What is the point of having a Q&A if you avoid answering difficult questions like this?
Presumably you relatively recently gained access to whatever research SingInst does not make public, if any. Were you surprised at the level of progress already made, either positively or negatively?
Yes, strongly and positively.
This is encouraging. From the outside it looks like SIAI is stuck.
That was my impression too, and then I landed in Berkeley and thought, “Woah! What the hell? Why haven’t you guys published all that shit?”
And then I started trying to write it up and I was like, “Oh yeah. Writing stuff up takes lots of time and effort.”
So you really do need more journal-monkeys, eh? Maybe I should think about the visiting fellows thing. (I’m poor, so I can’t give money yet.)
Why can’t you just post a quick blurb that you’ve solved such-and-such problem and the solution is along these lines? Surely it doesn’t have to be journal articles? Maybe there is a component of secrecy?
By ‘writing these things up’ I don’t mean journal articles, I mean blog posts or working papers. The problem is that it takes significant time and effort just to explain the problem and our results somewhat clearly.
If you haven’t explained your results, are you sure you actually have them? That sounds to me like “I already figured out the algorithm, I won’t learn anything by coding it.”
I tend to agree with this, too, though my own brain does “thinking by writing” more than other brains, I think.
That bad, eh? See you next year.
Do you think that the same thing might be the case for other x-risks organizations? I recall that the previous analysis of other future tech safety/x-risks organizations didn’t seem to find anything very promising—might it be the case that those organizations also have stuff going on behind the scenes? If so, this seems like it might be a significant barrier to the greater x-risks community, since these organizations may be duplicating one another’s results or otherwise inefficiently allocating their respective resources, volunteers, etc.
It’s always the case that more research is being done than gets published. I know it’s true for FHI, too. It’s just especially true of SI.
I was thinking more about groups like Lifeboat or IEET, who don’t really appear to be doing any research at all, as opposed to FHI/SIAI, who do at least occasionally publish.
Is that research going to be made public?
The more precise question would be “what schedules are you considering for making that research public,” since presumably after SI successfully builds their basement GAI they’ll publish everything.
Presumably if SI builds a basement GAI publishing will not be a priority as we will either be busy bobsledding down rainbows or not being alive.
It’s all a matter of funding and recruiting. With no increase in funding, it would remain very difficult to publish all that research, as non-published conceptual research will easily outpace published research unless we have a dedicated writer or two to write things up as we discover them.
What would SI do if it became apparent that AGI is at most 10 years away? For example, some researchers demonstrate the feasibility of AGI and show that they only need a few years to implement it.
(Some AGI researchers like Shane Legg assign a 10% chance of AGI by ~2018.)
I find this a little odd, since there are still several highly-voted (>= 15 points, say) questions from last time unanswered. Why not answer them first? Also, is there any reason why someone shouldn’t just take all of them and repost them in this thread (e.g. if you’re unwilling to answer many of them, in which case it would mostly be wasted effort and clutter the page needlessly)?
In my last paragraph I encouraged people to re-post from the last round. Some of them might not be voted highly in the second round even if they were voted highly in the first round, because of the answers I gave in round 1.
Would the Institute consider hiring telecommuters (both in and out the US)?
Update: this question was left unanswered in the second Q&A.
What kind of budget would be required to solve the friendly AI problem? Are we talking millions, billions or trillions?
Related question: Which groups or organizations are likely to develop AGI first and how is SIAI planning on reaching out to them?
What is SIAI’s discount rate? If I offered you $100 today in return for r*$100 in a year’s time, for what r are you indifferent? Are you borrowing money, saving it, or neither?
EDIT: For context, I seem to recall Vassar once suggesting 40%.
I don’t think you’re putting the r in the right place. If their discount rate is 40%, you should be comparing $100 now with $250 next year, or $40 now with $100 next year.
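The disagreement here is really about two readings of “a 40% discount rate.” A quick sketch (hypothetical numbers, just to make the two conventions concrete):

```python
# Two common readings of "a 40% discount rate".

def present_value_rate(future_amount, r):
    """Reading 1: r is an annual discount *rate*; PV = FV / (1 + r)."""
    return future_amount / (1 + r)

def present_value_factor(future_amount, f):
    """Reading 2: f is a one-year discount *factor*; PV = FV * f."""
    return future_amount * f

# Reading 1: $100 a year from now is worth ~$71.43 today at r = 0.40.
print(present_value_rate(100, 0.40))

# Reading 2 (the parent comment's usage): $100 a year from now is worth
# $40 today, and $100 today trades against 100 / 0.40 = $250 next year.
print(present_value_factor(100, 0.40))
print(100 / 0.40)
```

On the second reading, the parent comment’s figures check out: $40 now against $100 next year, or equivalently $100 now against $250 next year.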
Given the huge potential of FAI for changing the world, are you worried that existing governments might see SIAI as a revolutionary threat once it starts to look like you have a serious chance of completing the goal?
You mentioned recently that SIAI is pushing toward publishing an “Open Problems in FAI” document. How much impact do you expect this document to have? Do you intend to keep track? If so, and if it’s less impactful than expected, what lesson(s) might you draw from this?
How much do members’ predictions of when the singularity will happen differ within the Singularity Institute?
Eliezer Yudkowsky wrote:
...
There is more there, best to start here and read all we way down to the bottom of that thread. I think that discussion captures some of the best arguments in favor of friendly AI in the most concise way you can currently find.
Since it’s difficult to predict the date of the invention of AGI, has SI thought about/made plans for how to work on the FAI problem for many decades, or perhaps even centuries, if necessary?
As a subset of this question, do you think that establishing a school with the express purpose of training future rationalists/AGI programmers from an early age is a good idea? Don’t you think that people who’ve been raised with strong epistemic hygiene should be building AGI rather than people who didn’t acquire such hygiene until later in life?
The only reasons I can see for it not working would be:
Predictions that AGIs will come before the next generation of rationalists comes along (which is also a question of how early to start such an education program).
Belief that our current researchers are up to the challenge. (Even then, having lots of people who’ve had a structured education designed to produce the best FAI researchers would undeniably reduce existential risk, no?)
EDIT (for clarification): Eliezer has said:
“I think that saving the human species eventually comes down to, metaphorically speaking, nine people and a brain in a box in a basement”
Just as they would be building an intelligence greater than themselves, so too must we build human intelligences greater than ourselves.
I can’t speak for the SIAI, but to me this sounds like a suboptimal use of resources, and bad PR. It trips my “this would sound cultish to the average person” buzzer. Starting a school that claimed it “emphasized critical thinking” to teach rationalists might be a good idea for someone with administrative talents who wanted to work on x-risk, but I can’t see SIAI doing it.
How would you distribute resources? I think this is a natural response if one accepts the premise that the main bottleneck to AGI is a few key insights by geniuses (as Eliezer says).
Why do we care if people who aren’t logical enough to see the reasoning behind the school think we’re cultish?
In 2009 EY asked “What’s the craziest thing the AI could tell you, such that you would be willing to believe that the AI was the sane one?”
rhollerith_dot_com responded “That the EV of the humans is coherent and does not care how much suffering exists in the universe.”
Vassar responded to this with the scariest thing I’ve read on LessWrong which was:
“But you believe that, don’t you? I certainly place a MUCH higher probability on that than on the sort of claims some people have proposed.”
Do you agree with Vassar’s reply?
Vassar’s purpose with the first of the two sentences you quote is to point out that I was playing the game wrong. Specifically, the mere fact that I was replying with something to which I had already assigned significant probability before starting the exercise was evidence to Vassar that I had not properly grasped the spirit of the exercise.
The second sentence of the quote can be interpreted as a continuation of the theme of “You’re playing the game wrong, Hollerith,” if as seems likely to me now, Vassar saw the purpose (or one of the purposes) of the game as coming up with a statement whose probability (as judged by the player himself) outside the context of the game is as low as possible.
Vassar is very skilled at understanding other people’s points of view. Moreover, he saw his job at this time in large part as a negotiator among the singularitarians, which probably caused him to try to get even better at understanding unusual points of view. Finally, during the two years leading up to this exchange that you quote I had been spamming Overcoming Bias pretty hard with my outre system of valuing things (which by the way I have since abandoned—I am pretty much a humanist now) so of course Vassar had had plenty of exposure to my point of view.
Have you asked Vassar what he meant by the two sentences you quoted?
Living in the Bay Area as I do, I have had a couple of conversations with Vassar, and I applied to the visiting fellows program when Vassar was the main determiner of who got in (I did not). I have absolutely no evidence that the above sentence means anything more than that Vassar, at the time of its writing, spent a lot of time trying to understand many different points of view (the more different from his own, the better), and perhaps that, like some other extremely bright people (Bernard Shaw being one), he gets a kick out of pursuing lines of thought that, despite seeming absurd or monstrous at first, have a certain odd or subtle integrity or a faint ring of truth to them.
If an infallible oracle told you humanity was about to enter a period of extended stagnation comparable to the Dark Ages, what projects would you prioritize right now to ensure humanity’s long term survival?
Upvoted out of curiosity, but this question seems along the lines “what books would you take if you were marooned on an island” (I do not see a connection to SIAI mission, unless you give this scenario high probability)
If SIAI cannot perform any kind of “Trinity Test” (Manhattan Project), and demands secrecy to protect the world from evil obtaining any FAI intelligence, how does the organization quell the idea that it is augmenting Pascal’s Wager, while keeping the fear paradox, for technology?
Since SIAI needs secrecy, transparency cannot be absolute, yet you will need considerable funding. So how does SIAI plan to avoid looking like Enron when asking for funding to research a much-needed “Negative Ionic Tractor Disruptor” (South Park, 311), and when philanthropists, some of whom may be “evil,” want to see how the sausage is made?
Sorry if this is a juvenile question; it is the same question twice. I will try to get to the literature you recommend on the “So You Want To Save The World” site soon. How do you navigate this necessary contradiction between secrecy and transparency and stay a viable organization?
Minds are not chronologically commutative with respect to input data. Reading libertarian philosophy followed by Marxist philosophy will give you a different connectome than vice versa. As a result, you will have distinct values in each scenario and act accordingly. Put another way, human values are extremely dependent on initial input parameters (your early social and educational history). Childhood brainwashing can give the resulting adult arbitrary values (as evinced by such quirks like suicide bombers and voluntary eunuchs). However, by providing such a malleable organism, evolution found a very cute trick by which it allowed for seemingly impossible computation. (development of mathematics, science, etc.)
I assume that in the definition of GAI, it is implicit that the AI can do mathematics and science as well as or better than humans, so as to achieve goals that require a physical restructuring of reality. Since the only example of a computational process capable of generating these things (humans) is so malleable in its values, what basis (mathematical or otherwise) does the SIAI have for assuming that Friendliness is achievable? Keep in mind that a GAI should be able to think and comprehend all things humans can and have thought (including the architectural problems in Friendliness), or at least something functionally isomorphic.
I see a problem with LW (I don’t know if you consider this part of SI) in that non-conforming comments are often downvoted, regardless of whether they are right or wrong. I think part of the blame is on this article by EY:
http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/
My big concern: what safeguards are in place to distinguish those “weeds” that are real “weeds” from those that only look like “weeds” because they go against incorrect beliefs in LW? In other words, assuming that there are incorrect beliefs in LW/SI shouldn’t there be more room to allow for contrarian POVs to be expressed?
Examples? Well-argued contrarian comments and posts seem to get pretty regularly upvoted, as far as I’ve observed.
Considering the author of the comment, I would guess the examples have to do with 9/11 being an inside job.
Hah, what a low status belief.
I won’t get into an argument about this, especially since we both already argued lots about certain issues which I don’t want to get into here (in fact, I am not even allowed to, according to some LWers who have taken on the role of judges), and LW has many smart people who could argue as effectively for either side if they wished to do so. I wrote my comment based on personal experience on this site, and I’ve been a member here since the days when it was still overcomingbias.com.
You see here is my point, someone writes a comment expressing a certain sentiment or POV, and you can start arguing over it or consider that “maybe he has a point and we should give some consideration to this issue.”
Btw, I don’t know what is going on but this is my first comment of the day and as soon as I try to post it I get the message “You are trying to submit too fast. try again in 5 minutes.” The same happens every time I try to comment. Is this some kind of filter? Please turn it off.
I think there’s a filter that depends on karma, so that heavily downvoted posters have to slow down their posting rate, but since you have positive karma I’m not sure why it’s triggering for you. Maybe there’s a bug, dunno.
If it hasn’t stopped giving you that message, try different combinations of browser, computer, and throwaway account. If it’s a bug, one of the former should help. If it’s a filter, which sounds unlikely given your karma and the message text, that won’t help but you should be able to post through a throwaway account without getting the message.
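For what it’s worth, the throttle being described presumably works something like a per-user minimum posting interval keyed on karma. A hypothetical sketch (the class, thresholds, and the 5-minute wait are all assumptions, not the site’s actual code):

```python
import time

class PostThrottle:
    """Hypothetical karma-based posting throttle, as described above:
    users below a karma threshold must wait between posts."""

    def __init__(self, low_karma_wait=300, karma_threshold=0):
        self.low_karma_wait = low_karma_wait  # seconds ("try again in 5 minutes")
        self.karma_threshold = karma_threshold
        self.last_post = {}                   # user -> timestamp of last post

    def can_post(self, user, karma, now=None):
        now = time.time() if now is None else now
        # Only low-karma users are rate-limited.
        wait = self.low_karma_wait if karma < self.karma_threshold else 0
        last = self.last_post.get(user)
        if last is not None and now - last < wait:
            return False
        self.last_post[user] = now
        return True
```

Under this sketch, a negative-karma user posting twice within five minutes is blocked, while a positive-karma user never is, which is why the message firing for a positive-karma account looks like a bug rather than the filter working as intended.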
It would be proper to say “how much” or similar rather than “if.”