Questions on SI Research
Hello LessWrong,
As one of my assignments at the Singularity Institute (SI), I am writing a research FAQ answering the most frequently asked questions about SI's research program.
For a short summary of what SI is about, see our concise summary.
Here are some examples of questions I’m currently planning to include:
1) Who conducts research at SI?
2) What are the specific research topics being investigated?
3) What is the history of SI's research program?
4) Where does SI see its research program in 5, 10, and 20 years?
5) What other organizations conduct research similar to SI's?
Please submit other questions that come to mind below. Unfortunately, due to limited time, we cannot answer every question posed to us. However, I hope to answer some of the questions that receive the most upvotes. Thank you for your participation!
What is the role of research in SI’s mission?
How are resources being split between direct AI research and other topics?
Why research anything other than AI itself?
Where will research be published or made available?
Will anything be kept confidential, and how and why? What security measures are in place?
How do we know SI is doing good research if much of it is confidential?
It might be advisable to split this so the questions can be voted on separately.
How easily could the SI lose important data (like unpublished research) in, say, a fire or computer malfunction?
I would expect the FAQ to explicitly address the standard pattern-matching questions, such as 'Is the Singularity "the Rapture for nerds"?' Or you can pretend that people don't pattern-match as their first reaction.
I gave a quick talk about the singularity a few days ago. One guy kept heckling me about ‘rapture for geeks’. It’s definitely a thing.
This is a good illustration of how hucksters and differentiated meme virulence mean we can’t have nice things.
People really want X (e.g. to live forever, to lose weight, etc.). Hucksters take advantage of this by promising X, and memes that persuade people they can achieve X by adopting the meme (e.g. Christianity in the case of living forever) spread. Then people develop antibodies to the hucksters or memes, and the antibodies end up attacking anything that promises X (since it resembles the mistaken claims).
As Randall Munroe joked:
Hence my proposed slogan for cryonics, “Cryonics: A scheme for living forever that might actually work”. Of course, we should only use the slogan if it might actually work, which looks highly doubtful to one LWer who is a neuroscientist.
If that one LWer has a technical critique of cryonics, they should write it up. Cf., oh, just about everything on my blog. http://blog.ciphergoth.org/
That’s a really nice explanation of the situation.
If one guy heckled you about 'rapture for the geeks', I'd agree: it's definitely a thing. If one guy kept heckling you about the same thing, then sure, I'd still agree, but I'd suggest that the additional heckles of the same trite jest tell us little about the 'thing' and present an almost entirely social problem about presentations and how to handle heckling.
This would be a research FAQ, not a general Singularity or organizational FAQ, though there will be a separate FAQ for the latter, and I will consider including that question.
How do you intend to get ahead of huge corporations like Google, who also do AI research?
I'm kind of curious how you explain why people with more proven expertise and accomplishment in relevant areas don't seem to be offering any AI risk reduction for anyone to buy yet. Quack remedies have always come before cures, throughout history.
This kind of question—and how much more fun people seem to have when arguing about this kind of question—would just be a distraction from the intent of the document. It is best to save the apologetics for elsewhere and give the plain information here.
Well, it's not so much a question as an answer to 'what to think about this specific offering of a cure'. People who can generate that sort of question, sceptically minded people, won't donate anyway.