From Ben Goertzel,

And I think that theory is going to emerge after we’ve experimented with some AGI systems that are fairly advanced, yet well below the “smart computer scientist” level.
I heard this same sentiment from Ben, Robin Hanson, and Rodney Brooks at the second Singularity Summit; from Cynthia Breazeal at the third Singularity Summit; from Ron Arkin at the “Human Being in an Inhuman Age” conference at Bard College on October 22nd¹; and from almost every professor I have had (or will have for the next two years).
It was a combination of Ben, Robin, and several professors at Berkeley and UCSD who led me to the conclusion that we probably won’t know how dangerous an AGI is until we have put a lot more time into building AI (or CI) systems that reveal more about the problems they attempt to address. (CGI, Constructed General Intelligence, is a term I have heard more than one person use in the last year instead of AI/AGI. They prefer it because the word “artificial” seems to imply that the intelligence is not real, while “constructed” is far more accurate.)
Sort of like how the Wright Brothers didn’t really learn how they needed to approach building an airplane until they began building airplanes. The final Wright Flyer didn’t just leap out of a box, and it is not likely that an AI will just leap out of a box either, whether it is being built at a huge corporate or university lab or in someone’s home lab.
Also, it is possible that AI may come in the form of a sub-symbolic system so opaque that even the system itself will not easily be able to tell what can or cannot be optimized.
Ron Arkin (from Georgia Tech) discussed this briefly at the Bard College conference I mentioned.
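To make the opacity point concrete, here is a minimal sketch of my own (in Python with NumPy; this is purely illustrative and not anything Arkin presented): a tiny sub-symbolic network learns XOR, yet nothing in its learned weights is labeled in any way that a person, or the system itself, could read off as “this part is the rule, optimize here.”

    # Illustrative sketch only: a 2-4-1 network trained on XOR by plain
    # gradient descent. All of its "knowledge" ends up as unlabeled numbers.
    import numpy as np

    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Random initial weights; nothing symbolic is ever stored anywhere.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass for a squared-error loss.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * (h.T @ d_out)
        b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * (X.T @ d_h)
        b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

    print("predictions:", out.ravel().round(2))  # close to [0, 1, 1, 0]
    print("W1:\n", W1.round(2))  # just numbers; no weight is labeled "XOR rule"

The network answers correctly, but the weight matrices are just coordinates in a high-dimensional space; there is no piece of them you can point to and call “the rule.” Scale that up by many orders of magnitude and the self-inspection problem only gets worse.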
MB
¹ I should really write up something about that conference here. I was shocked at how many highly educated people so completely missed the point, and became caught up in something that makes The Scary Idea seem positively benign in comparison.