This probably deserves a discussion post of its own, but here are some ideas that I came up with. We can:
persuade more AI researchers to lend credibility to the argument against AI progress, and to support whatever projects we decide upon to try to achieve a positive Singularity
convince the most promising AI researchers (especially promising young researchers) to seek different careers
hire the most promising AI researchers to do research in secret
use the argument on funding agencies and policy makers
publicize the argument enough so that the most promising researchers don’t go into AI in the first place
So … the name is misleading—it’s actually the Singularity Institute against Artificial Intelligence.
See this thread.
The Singularity Institute For or Against Artificial Intelligence Depending on Which Seems to Be a Better Idea Upon Due Consideration.
or for exclusively friendly AI.
You (as a group) need “street cred” to be persuasive. To a typical person you look like a modern-day version of a doomsday cult. Publishing recognized AI work would be a good place to start.
Publishing AI work would help increase credibility, but it’s a costly way of doing so since it directly promotes AI progress. At least some mainstream AI researchers already take SIAI seriously. (Evidence: 1 2) So I suggest bringing better arguments to them and convincing them to lend further credibility.
By the way, what counts as “AI progress”? Do you consider statistics and machine learning part of “AI progress”? Is theoretical work okay? What about building self-driving cars or speech recognition software? Where is, as someone here would call it, the Schelling point?
Do you consider stopping “AI progress” important enough to put something on the line besides talking about it?
You raise a very good question. There doesn’t seem to be a natural Schelling point, and actually the argument can be generalized to cover other areas of technological development that wouldn’t ordinarily be considered to fall under AI at all, for example computer hardware. So somebody can always say “Hey, all those other areas are just as dangerous. Why are you picking on me?” I’m not sure what to do about this.
I’m not totally sure what you mean by “put something on the line” but for example I’ve turned down offers to co-author academic papers on UDT and argued against such papers being written/published, even though I’d like to see my name and ideas in print as much as anybody. BTW, realistically I don’t expect to stop AI progress, but just hope to slow it down some.
My understanding of Schelling points is that there are, by definition, no natural Schelling points: you pick an arbitrary point to defend as a strategy against slippery slopes. In Yvain’s post he picked an arbitrary percentage, I think 95%.
There is a slippery slope here. Where will you defend?
The issue is that it does look like a doomsday cult if you expect an extreme outlier (on doom beliefs), who has never done anything notable beyond being a popular blogger, to be the best person to listen to. That is an incredibly unlikely situation for a genuine risk. Bonus cultism points for knowing Bayesian inference but not applying it here. Regardless of how real the AI risk is, and regardless of how truly qualified that one outlier may be, it is an incredibly unlikely world-state in which the best case for AI risk comes from someone like that. No matter how fucked up the scientific review process is, it is incredibly unlikely that the world’s most important warning about AI would be someone’s first notable contribution.
These are interesting suggestions, but they don’t exactly address the problem I was getting at: leaving a line of retreat for the typical AI researcher who comes to believe that his work likely contributes to harm.
My anecdotal impression is that the number of younger researchers who take arguments for AI risk seriously has grown substantially in recent years, but—apart from spreading the arguments and the option of career change—it is not clear how this knowledge should affect their actions.
If the risk of indifferent AI is to be averted, I expect that a gradual shift in what is considered important work is necessary in the minds of the AI community. The most viable path I see towards such a shift involves giving individual researchers an option to express their change in beliefs in their work—in a way that makes use of their existing skillset and doesn’t kill their careers.
Ok, I had completely missed what you were getting at, and instead interpreted your comment as saying that there’s not much point in coming up with better arguments, since we can’t expect AI researchers to change their behaviors anyway.
This seems like a hard problem, but certainly worth thinking about.
Relinquishment? My estimate of the effectiveness of that hovers around zero. I don’t see any reason for thinking that it has any hope of being effective.
Especially not if the pitch is: YOU guys all relinquish the technology—AND LET US DEVELOP IT!!!
That will just smack of complete hypocrisy.
Cosmetically splitting the organisation into the neo-Luddite activists and the actual development team might help to mitigate this potential PR problem.
Surely secret progress is the worst kind—most likely to lead to a disruptive and unpleasant outcome for the majority—and to uncaught mistakes.
How do I tell whether a small group doing secret research will be better or worse at saving the world than the global science/military complex? Does anyone have strong arguments either way?
I haven’t heard of any justification for why it might only take “nine people and a brain in a box in a basement”. I think some people are too convinced of the AIXI approximation route and therefore believe that it is just a math problem that only takes some thinking and one or two deep insights.
Every success in AI so far has relied on a huge team, whether IBM Watson, Siri, Big Dog, or the various self-driving cars:
1) With Siri, Apple is using the results of over 40 years of research funded by DARPA via SRI International’s Artificial Intelligence Center through the Personalized Assistant that Learns (PAL) and Cognitive Agent that Learns and Organizes (CALO) programs.
2) When a question is put to Watson, more than 100 algorithms analyze the question in different ways, and find many different plausible answers, all at the same time. Yet another set of algorithms ranks the answers and gives them a score. For each possible answer, Watson finds evidence that may support or refute that answer. So for each of hundreds of possible answers it finds hundreds of bits of evidence and then with hundreds of algorithms scores the degree to which the evidence supports the answer. The answer with the best evidence assessment will earn the most confidence. The highest-ranking answer becomes the answer. However, during a Jeopardy! game, if the highest-ranking possible answer isn’t rated high enough to give Watson enough confidence, Watson decides not to buzz in and risk losing money if it’s wrong. The Watson computer does all of this in about three seconds.
It takes a company like IBM to design such a narrow AI. More than 100 algorithms. Could it have been done without a lot of computational and intellectual resources?
The basement approach seems ridiculous given the above.
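To make the Watson pipeline quoted in 2) above concrete, here is a minimal sketch of the same pattern: generate many candidate answers, score each against collected evidence with several independent scorers, and only answer when the top candidate’s combined confidence clears a threshold. The scorer functions, the threshold value, and the Candidate structure are illustrative assumptions, not IBM’s actual DeepQA code.

```python
# Hypothetical sketch of a Watson-style answer pipeline (not DeepQA code):
# many candidate answers, many independent evidence scorers, and a
# confidence threshold deciding whether to "buzz in" at all.

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Candidate:
    answer: str
    evidence: List[str]        # passages that may support or refute the answer
    confidence: float = 0.0


def score_candidates(
    candidates: List[Candidate],
    scorers: List[Callable[[str, List[str]], float]],
) -> List[Candidate]:
    """Combine the scorers into one confidence per candidate and rank them."""
    for c in candidates:
        scores = [scorer(c.answer, c.evidence) for scorer in scorers]
        c.confidence = sum(scores) / len(scores) if scores else 0.0
    return sorted(candidates, key=lambda c: c.confidence, reverse=True)


def decide_to_buzz(ranked: List[Candidate], threshold: float = 0.7) -> Optional[str]:
    """Answer only when the best candidate is confident enough."""
    if ranked and ranked[0].confidence >= threshold:
        return ranked[0].answer
    return None  # stay silent rather than risk a wrong answer


# Two toy stand-in scorers: keyword overlap and amount of evidence.
def keyword_overlap(answer: str, evidence: List[str]) -> float:
    return sum(answer.lower() in e.lower() for e in evidence) / max(len(evidence), 1)


def evidence_count(answer: str, evidence: List[str]) -> float:
    return min(len(evidence) / 10.0, 1.0)


candidates = [
    Candidate("Toronto", ["Toronto is a city in Canada"]),
    Candidate("Chicago", ["Chicago's O'Hare airport ...", "Chicago is a US city"]),
]
ranked = score_candidates(candidates, [keyword_overlap, evidence_count])
print(decide_to_buzz(ranked))  # prints None: not confident enough to buzz in
```

With the toy threshold of 0.7, neither candidate scores high enough, so the sketch declines to answer, mirroring the buzz-in behaviour described in the quote.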
IBM Watson started with a rather small team (2-3 people); IBM started dumping resources on them once they saw serious potential.
I didn’t mean to endorse that. What I was thinking when I wrote “hire the most promising AI researchers to do research in secret” was that if there are any extremely promising AI researchers who are convinced by the argument but don’t want to give up their life’s work, we could hire them to continue in secret just to keep the results away from the public domain. And also to activate suitable contingency plans as needed.
My thoughts on what the main effort should be are still described in Some Thoughts on Singularity Strategies.
Inductive inference is “just a math problem”. That’s the part that models the world—which is what our brain spends most of its time doing. However, it’s probably not “one or two deep insights”. Inductive inference systems seem to be complex and challenging to build.
Everything is a math problem. But that doesn’t mean that you can build a brain by sitting in your basement and literally thinking it up.
A well-specified math problem, then. By contrast with fusion or space travel.
How is intelligence well specified compared to space travel? We know physics well enough, and we know we want to get from point A to point B. With intelligence, we don’t even quite know what exactly we want from it. We know of some method that is ridiculously slow (towers of exponents), which means precisely nothing.
The claim was: inductive inference is just a math problem. If we knew how to build a good-quality, general-purpose stream compressor, the problem would be solved.
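For background on the exchange above (not something spelled out in the thread itself): the usual formalization of “inductive inference as a math problem” is Solomonoff induction, and the compressor remark rests on the standard equivalence between prediction and compression. A sketch of the relevant formulas, using the conventional notation ($U$ a universal monotone machine, $\ell(p)$ the length of program $p$):

```latex
% Solomonoff's universal prior: every program p whose output on a universal
% monotone machine U begins with the observed string x contributes weight
% 2^{-\ell(p)}.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

% Prediction of the next symbol is just conditioning:
M(x_{t+1} \mid x_{1:t}) = \frac{M(x_{1:t}\, x_{t+1})}{M(x_{1:t})}

% Prediction and compression are two views of the same object: a predictive
% distribution P can be turned into a code of roughly -log2 P(x) bits by
% arithmetic coding, and a compressor defines a predictor the same way in
% reverse. This is the sense in which a good general-purpose stream
% compressor would "solve" inductive inference; the catch is that M itself
% is incomputable, and computable approximations are astronomically slow.
\ell_{\mathrm{code}}(x) \approx -\log_2 P(x)
```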
A small group doing secret research sounds pretty screwed to me—with its main hope being an acquisition or a merger.