I wish I could be optimistic about some DSL approach. The history of AI is full of examples of people creating little domain languages. The problem is their inability to handle vagueness: the domain languages work OK on toy problems and then break down when the researcher tries to extend them to problems of realistic complexity.
On the other hand there are AI systems that work. The best examples I know about are at Stanford—controlling cars, helicopters, etc. In those cases the researchers are confronting realistic domains that are largely out of their control. They are using statistical modeling techniques to handle the ill-defined aspects of the domain.
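To make "statistical modeling techniques" concrete, here is a minimal sketch of one standard ingredient: a one-dimensional Kalman filter that maintains a state estimate under noisy measurements. This is a textbook toy, not anyone's actual system; the noise parameters q and r below are invented for illustration.

```python
import numpy as np

# Minimal 1-D Kalman filter: estimate a scalar state (say, a vehicle's
# position) from noisy sensor readings. The noise levels q and r are
# illustrative placeholders, not values from any real system.
def kalman_1d(measurements, q=0.01, r=1.0):
    x, p = 0.0, 1.0           # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                # predict: uncertainty grows with process noise
        k = p / (p + r)       # Kalman gain: how much to trust the new reading
        x += k * (z - x)      # update: pull the estimate toward the measurement
        p *= 1 - k            # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

noisy = 5.0 + np.random.randn(50)   # readings scattered around a true value of 5.0
print(kalman_1d(noisy)[-1])         # the final estimate lands near 5.0
```

The point is that the filter never needs a crisp definition of the domain; it only needs a probabilistic model of how wrong its sensors and predictions tend to be.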
Notably, in both the cars and the helicopters, a lot of the domain definition is done implicitly, by learning from expert humans (drivers or stunt pilots). The resulting domain models are represented explicitly, but messily. However, they are open to investigation and refinement as needed to make them work well enough to handle the target domain.
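A toy version of that implicit learning is behavioral cloning: record expert state-action pairs and fit a policy to them. In the sketch below the "expert" is a synthetic proportional controller standing in for real driving or flying demonstrations, and the two-dimensional state layout and its gains are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Behavioral cloning in miniature: learn a policy mapping observed state
# to the expert's control action. The expert here is synthetic (a hidden
# proportional controller), a stand-in for real demonstration data.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(500, 2))         # e.g. [lateral error, heading error]
expert_actions = states @ np.array([-0.8, -0.3])   # the expert's hidden gains
expert_actions += 0.05 * rng.standard_normal(500)  # demonstration noise

policy = Ridge(alpha=1e-3).fit(states, expert_actions)
print(policy.coef_)   # roughly [-0.8, -0.3]: explicit, inspectable, and a bit noisy
```

The learned coefficients are exactly the "explicit but messy" artifact: you can inspect them, compare them against the expert, and refine the model where it falls short.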
Both of these examples use Bayesian semantics, but they go well beyond cookbook Bayesian approaches, drawing on control theory, some fairly fancy model-acquisition techniques, and so on.
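On the control-theory side, one standard building block is the linear-quadratic regulator: given a linear dynamics model (hand-built or learned), solve a Riccati equation for the optimal feedback gain. A minimal sketch, assuming a textbook double-integrator model rather than any real vehicle:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Discrete-time LQR for a double integrator (position and velocity).
# The matrices are textbook placeholders, not a model of any real vehicle.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # dynamics model (could be learned from data)
B = np.array([[0.0], [dt]])             # how control input moves the state
Q = np.diag([1.0, 0.1])                 # penalize position error most heavily
R = np.array([[0.01]])                  # control effort is cheap

P = solve_discrete_are(A, B, Q, R)                  # solve the Riccati equation
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback gain
print(K)   # the controller u = -K x drives the state to the origin
```

Pairing a gain like this with a Bayesian state estimator is the classic way the two layers fit together: the filter supplies the best guess of the state, and the regulator acts on it.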
There is a lot of relevant tech out there if Less Wrong is really serious about its mission. I haven't seen much of an attempt to pursue it yet.