Thank you for writing this. I’m afraid I still don’t understand QNRs. Here’s the picture I’ve currently got, and I’d be most grateful if you could tell me where it has gone wrong.
Let’s say you take some video feed and plug it into an AI. Then you can get a system that fires over a series of images if some particular concept is instantiated within the feed, like there being a cat, or a moving cat, and so on. A QNR is just such a system, made up of a neural net, representing a single concept. You can then compose these QNRs together to get other QNRs, like having the concept of “dog and cat fighting” be composed of the “dog”, “cat”, and “X fighting Y” QNRs.
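Here’s a toy sketch of the picture I have in mind (my guess at the structure, not Drexler’s actual formalism): a QNR node pairs an embedding vector, standing in for a trained concept detector, with graph edges to sub-QNRs, so “dog and cat fighting” is “X fighting Y” with its slots filled by “dog” and “cat”. The averaging rule for the composite embedding is a made-up placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class QNR:
    concept: str                    # human-readable label, for inspection only
    embedding: list[float]          # placeholder for a learned vector
    slots: dict = field(default_factory=dict)  # edges to sub-QNRs

def compose(relation: QNR, **fillers: QNR) -> QNR:
    """Fill the argument slots of a relational QNR with other QNRs."""
    label = relation.concept
    for slot, filler in fillers.items():
        label = label.replace(slot, filler.concept)
    # Crude composite embedding: average of the parts (arbitrary choice).
    parts = [relation.embedding] + [f.embedding for f in fillers.values()]
    emb = [sum(vals) / len(vals) for vals in zip(*parts)]
    return QNR(label, emb, dict(fillers))

dog = QNR("dog", [1.0, 0.0])
cat = QNR("cat", [0.0, 1.0])
fighting = QNR("X fighting Y", [0.5, 0.5])
scene = compose(fighting, X=dog, Y=cat)
print(scene.concept)  # dog fighting cat
```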
Which means we can build up a dictionary of QNRs representing the propositions of some language, with our means of composing them forming our grammar, our syntax.
But what I don’t get is how you would do this automatically and manage to get a graph of QNRs that represents a Wikipedia article, unless you’re doing something like “find the composition of QNRs that fires most strongly on this article”, and I don’t know whether that would be useful. Nor do I see how you’d use these things for training if what you’ve got is a bunch of disparate QNRs. Maybe you could stack the QNRs in a wide layer and build NNs on top, so you automatically get a wide space of feature representations?
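To make the “stack QNRs in a wide layer” idea concrete: treat each QNR as a feature detector that scores an input, concatenate the scores into one wide vector, and let an ordinary network consume that vector. The keyword detectors and the single linear unit below are stand-ins for trained models, just to show the wiring.

```python
import random

random.seed(0)

def make_detector(keyword: str):
    """Stand-in QNR: fires (1.0) when its concept keyword appears."""
    return lambda text: 1.0 if keyword in text else 0.0

detectors = [make_detector(w) for w in ["cat", "dog", "fighting"]]

def wide_features(text: str) -> list[float]:
    """The 'wide layer': one scalar activation per QNR detector."""
    return [d(text) for d in detectors]

# A downstream network could be anything; here, one linear unit with
# random weights just to show how it would consume the wide layer.
weights = [random.uniform(-1, 1) for _ in detectors]

def score(text: str) -> float:
    return sum(w * f for w, f in zip(weights, wide_features(text)))

print(wide_features("a dog and cat fighting"))  # [1.0, 1.0, 1.0]
```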
You’re partly on the right track.
If QNRs produced in disparate formats are used to train a neural net, then I’d guess they’d be translated to a common format first.
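One way this translation step could look, as a sketch (the format names and adapters are invented for illustration): each source format gets an adapter mapping its native representation into a shared fixed-width vector space, so one downstream net can train on all of them.

```python
COMMON_DIM = 4  # width of the shared representation (arbitrary)

def pad_or_truncate(vec, dim=COMMON_DIM):
    """Force any vector to the common width."""
    return (list(vec) + [0.0] * dim)[:dim]

adapters = {
    # hypothetical format names, one adapter per source format
    "dense": lambda v: pad_or_truncate(v),
    "sparse": lambda pairs: pad_or_truncate(
        [dict(pairs).get(i, 0.0) for i in range(COMMON_DIM)]
    ),
}

def to_common(fmt: str, payload):
    """Translate a format-specific QNR payload into the common format."""
    return adapters[fmt](payload)

print(to_common("dense", [1.0, 2.0]))             # [1.0, 2.0, 0.0, 0.0]
print(to_common("sparse", [(1, 5.0), (3, 7.0)]))  # [0.0, 5.0, 0.0, 7.0]
```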
There would likely be multiple models of books, some generated with human guidance, and others generated to optimize a variety of predictions.
Maybe some models would be optimized for answering “What does the author believe about X?”, as evaluated by a service designed to evaluate those answers.
Some models might be constructed mostly by a system that takes info about the reputations of works that the book cites, and infers reliability estimates for each of the book’s claims, by aggregating the reliability of the citations that support each claim.
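A minimal sketch of that citation-based aggregation: each claim gets a reliability estimate computed from the reputation scores of the works cited in its support. The noisy-OR rule below (a claim is likely reliable if at least one independent supporting source is) is my own choice of aggregator, not something specified in the paper, and the scores are made up.

```python
from math import prod

def claim_reliability(citation_scores: list[float]) -> float:
    """Noisy-OR over independent sources; each score is in [0, 1]."""
    if not citation_scores:
        return 0.0  # unsupported claims get no credit
    return 1.0 - prod(1.0 - s for s in citation_scores)

book = {
    "claim A": [0.9, 0.5],  # reputation scores of cited works (invented)
    "claim B": [0.3],
    "claim C": [],          # claim with no supporting citations
}
estimates = {c: claim_reliability(s) for c, s in book.items()}
print(round(estimates["claim A"], 4))  # 0.95
```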
Possibly you’re confused because you’re imagining a more restrictive set of rules than Drexler intends for composing QNRs. He’s using rules that are much more general than what’s typically used for creating syntax trees of natural language. See section 8.1.2 for some hints. But I don’t see a clear answer to this kind of confusion.
Reading 8.1.2, this post, some of the rest of the paper, and Drexler’s blog post helped me understand QNRs. I think I see some of what he’s getting at, but if there’s a unified core to all this then I can’t crisply define it, or even generate enough examples in my head that I think they could be used to interpolate the rest of the space of possible QNRs.