I’d focus on #4 as the primary point. Developing theoretical safety measures far ahead of the technology they’re meant to make safe is very difficult and has no real precedent in previous engineering efforts. In addition, MIRI’s specific program isn’t heading in a clear direction and hasn’t gained much traction in the mainstream AI research community yet.
Edit: Also, hacks and heuristics are so vital to human cognition in every domain that it seems clear that general computation models like AIXI don’t provide a roadmap to AI, despite their theoretical niceness.
For a great-if-imprecise response to #4, you can just read aloud the single-page story at the beginning of Bostrom’s book ‘Superintelligence’. For a more precise response, you can make the analogy explicit.
And if they come back with a snake egg instead? Surely we need to have some idea of the nature of AI, and thus of how exactly it is dangerous.
Can you summarize what you mean or link to the excerpt?
And more precisely: Imagine if Roentgen had tried to come up with safety protocols for nuclear energy. He would simply have been far too early to possibly do so. Similarly, we are far too early in the development of AI to meaningfully make it safer, and MIRI’s program as it exists doesn’t convince me otherwise.
From the Wikipedia article on Roentgen:
It is not believed his carcinoma was a result of his work with ionizing radiation because of the brief time he spent on those investigations, and because he was one of the few pioneers in the field who used protective lead shields routinely.
Sounds like he was doing something right.
My apologies for not being clear on two counts. Here is the relevant passage. And the analogy referred to in my previous comment was the one between Bostrom’s story and AI.
All good points.