I’ve always enjoyed Kurzweil’s story about how the human genome project was “almost done” when only the first 1% of the genome had been decoded, because the doubling rate of genomic science was so high at the time. (And he was right.)
It makes me wonder if we’re “almost done” with FAI.
I don’t really know where we are with FAI. I don’t know if our progress is even knowable, since we don’t really know where we’re going. There’s certainly not a percentage associated with FAI Completion. However, there are a number of technologies that might suddenly become very helpful.
Douglas Lenat’s Cyc, of which I was reminded by another comment in this very thread, seems to have become much more powerful than I would have expected the first time I heard of it. I’m actually blown away and a little alarmed by the things it can apparently do now. IBM’s Watson is another machine that can interpret and answer complex queries, demonstrating real semantic awareness. These two systems alone indicate that the state of the art in what you might call “superhuman-knowledge-plus-superhuman-logical-deduction” is ripe (or almost ripe) for exploitation by human FAI researchers. (You could also call these systems “Weak Oracles” or something.)
Nobody expects Cyc or Watson to FOOM in the next few years, but other near-future Weak Oracles might still greatly accelerate our progress in exploring, developing and formalizing the technology needed to solve the Control Problem. It intuitively feels like Weak Oracle tech might actually enable the sort of rapid doubling in progress that we’ve observed in other domains.
The AlphaGo victory has made me realize that the quality of the future really hinges on which of several competing exponential trends happens to have the fastest doubling time. Specifically, will we get really-strong-but-not-generally-intelligent Weak Oracles before we get GAI? Where is the crossover of those two curves?
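To make the crossover question concrete: if both capability curves really are exponential, the crossover point falls out of just two doubling times and a head start. A toy calculation (all numbers invented for illustration):

```python
import math

def crossover_years(head_start, oracle_doubling, agi_doubling):
    """Years until the faster-doubling curve overtakes the slower one.

    Oracle capability: head_start * 2**(t / oracle_doubling)
    GAI capability:    1 * 2**(t / agi_doubling)
    Setting the two equal and solving for t gives the crossover time.
    """
    return math.log2(head_start) / (1 / agi_doubling - 1 / oracle_doubling)

# Suppose Weak Oracles start 100x ahead but double every 2 years,
# while general AI doubles every year (made-up numbers):
print(round(crossover_years(100, 2.0, 1.0), 1))  # ~13.3 years until GAI catches up
```

The point of the sketch is only that the answer is extremely sensitive to the doubling times: small changes in either denominator move the crossover by decades.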
Douglas Lenat’s Cyc, of which I was reminded by another comment in this very thread, seems to have become much more powerful than I would have expected the first time I heard of it.
Can you provide a link to the powerful demonstrations of Cyc?
Lenat’s Google Talk has a lot of examples.
Among them would be giving Cyc a large amount of text and/or images to assimilate and then asking it questions like:
Query: “Government buildings damaged in terrorist events in Beirut between 1990 and 2001.” A moment’s thought will reveal how complex this query actually is, and how many ways there are to answer it incorrectly, but Cyc gives the right answer.
Query: “Pictures of strong and adventurous people.” Returns a picture of a man climbing a rock face, since it knows that rock climbing requires strength and an adventurous disposition.
Query: “What major US cities are particularly vulnerable to an anthrax attack?” This is my favorite example, because it needs to assess not only what “major US cities” are but also what the ideal conditions for the spread of anthrax are and then apply that as a filter over those cities with nuanced contextual awareness.
In general Cyc impresses me because it doesn’t use any kind of neural network architecture; it’s just knowledge linked in explicit ontologies with a reasoning engine.
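A crude sketch of what that style of system looks like, nothing like Cyc’s actual machinery, just to show the flavor of the “strong and adventurous people” example: hand-written ontology links plus a trivial inference step, with no learned weights anywhere.

```python
# Hypothetical mini-ontology: activities imply traits of their participants.
ACTIVITY_TRAITS = {
    "rock_climbing": {"strong", "adventurous"},
    "chess": {"analytical", "patient"},
}

# Assertions extracted from text or image captions: who does what.
PERSON_ACTIVITIES = {
    "alice": {"rock_climbing"},
    "bob": {"chess"},
}

def traits_of(person):
    """Infer a person's traits from the activities linked to them."""
    traits = set()
    for activity in PERSON_ACTIVITIES.get(person, set()):
        traits |= ACTIVITY_TRAITS.get(activity, set())
    return traits

def find_people(required_traits):
    """Return people whose inferred traits cover all the query's traits."""
    return sorted(p for p in PERSON_ACTIVITIES
                  if required_traits <= traits_of(p))

print(find_people({"strong", "adventurous"}))  # ['alice']
```

A query for “strong and adventurous people” returns the rock climber even though neither trait is stated about her directly; the answer follows from the explicit ontology link, which is the deduction-over-knowledge pattern the examples above rely on.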
It’s good at marketing.