Without knowing the content of your talk (or having time to Skype at present, apologies), allow me to offer a few quick points I would expect a reasonably well-informed, skeptical audience member to make (based in part on what I’ve encountered):
1) An intelligence explosion requires AI to reach a certain point of development before it can really take off (let’s set aside that there’s still a lot we need to figure out about where that point is, or whether there are multiple different versions of that point). People have been predicting that we can reach that stage of AI development “soon” since the Dartmouth conference. Why should we worry about this being on the horizon (rather than a thousand years away) now?
2) There’s such a range of views on this topic among apparent experts in AI and computer science that an analyst might conclude “there is no credible expertise on the path/timeline to superintelligent AI.” Why should we take MIRI/FHI’s arguments seriously?
3) Why are mathematicians/logicians/philosophers/interdisciplinary researchers the community we should be taking most seriously when it comes to these concerns? Shouldn’t we be talking to/hearing from the cutting-edge AI “builders”?
4) (Related.) MIRI (and also FHI, though not to such a ‘primary’ extent) focuses on developing theoretical safety designs, and friendly-AI/safety-relevant theorem proving and maths work, ahead of any efforts to actually “build” AI. Would we not be better off being more grounded in the practical development of the technology—building, stopping, testing, trying, adapting as we see what works and what doesn’t, rather than trying to lay down such far-reaching principles ahead of the technology’s development?
All good points.
I’d focus on #4 as the primary point. Focusing on theoretical safety measures far ahead of the development of the technology to be made safe is very difficult and has no real precedent in previous engineering efforts. In addition, MIRI’s specific program isn’t heading in a clear direction and hasn’t gotten a lot of traction in the mainstream AI research community yet.
Edit: Also, hacks and heuristics are so vital to human cognition in every domain that it seems clear that general computation models like AIXI don’t show the roadmap to AI, despite their theoretical niceness.
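To be concrete about what “general computation models” means here, a rough sketch of AIXI’s action choice, following Hutter’s standard expectimax formulation (actions a, observations o, rewards r, horizon m, universal Turing machine U, and program length ℓ(q) are the usual symbols; this is an illustrative aside, not an exact quotation of any particular paper), is:

\[
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
\bigl[\, r_t + \cdots + r_m \,\bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

The inner sum ranges over every program q that reproduces the interaction history, weighted by 2^{-ℓ(q)}; that sum over all programs is precisely what makes AIXI incomputable, which is why its theoretical niceness gives so little practical guidance.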
For a great-if-imprecise response to #4, you can just read aloud the single-page story at the beginning of Bostrom’s book ‘Superintelligence’. For a more precise response, you can make the analogy explicit.
And what if they come back with a snake egg instead? Surely we need to have some idea of the nature of AI, and thus of how exactly it is dangerous.
Can you summarize what you mean or link to the excerpt?
And more precisely: Imagine if Roentgen had tried to come up with safety protocols for nuclear energy. He would simply have been far too early to possibly do so. Similarly, we are far too early in the development of AI to meaningfully make it safer, and MIRI’s program as it exists doesn’t convince me otherwise.
From the Wikipedia article on Roentgen:

“It is not believed his carcinoma was a result of his work with ionizing radiation because of the brief time he spent on those investigations, and because he was one of the few pioneers in the field who used protective lead shields routinely.”
Sounds like he was doing something right.
My apologies for not being clear on two counts. Here is the relevant passage. And the analogy referred to in my previous comment was the one between Bostrom’s story and AI.