All these games seem interesting, but more complicated than strictly necessary and plagued by the (possibly unavoidable) problem that guessing someone’s mind involves different skills/knowledge than guessing mindless laws of nature. Is there a game that captures induction more cleanly, so we can expect skill at that game to generalize better?
I have been, off and on, working on a Haskell implementation of Zendo. The idea is to implement just a subset: the human as player and the program as the Master (i.e. the human trying to guess the rule).
The first question one naturally runs into is: how do you generate rules? My attempt is to have a small set of building blocks which express simple propositions - ‘all’, ‘none’, ‘even’, ‘odd’, ‘ascendingBy’, etc. (and the numbers 1-10) - and to generate a random list; that done, one can create random triplets of integers (via QuickCheck) and present the user only those triplets that satisfy the formula.
This solves your problem: the player can be told exactly what vocabulary the rule is written in. Another nice thing about having a simplified logic for propositions is that the formulas are data, but can be turned into code if need be, and it offers an obvious way to increase the difficulty: easy is as above; medium adds other predicates to the language (perhaps one could increase the numbers to 1-1000, and include predicates for ‘isPrime’/‘isComposite’); and so on.
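For concreteness, here is a minimal sketch of what such a rule language might look like - my own guesses at the vocabulary and representation, not the actual code - with formulas as a plain ADT, an evaluator, and a brute-force stand-in for the QuickCheck generation step:

```haskell
-- A sketch of a tiny proposition language over triplets of integers.
-- Constructor and predicate names are illustrative, not the real ones.
data Rule
  = AllP Pred          -- every number in the triplet satisfies the predicate
  | NoneP Pred         -- no number in the triplet satisfies it
  | Ascending          -- the triplet is strictly increasing
  deriving Show

data Pred = Even | Odd | EqualTo Int
  deriving Show

evalPred :: Pred -> Int -> Bool
evalPred Even        n = even n
evalPred Odd         n = odd n
evalPred (EqualTo k) n = n == k

-- Because formulas are data, the Master can both show the vocabulary
-- to the player and run candidate triplets through the rule:
evalRule :: Rule -> [Int] -> Bool
evalRule (AllP p)  xs = all (evalPred p) xs
evalRule (NoneP p) xs = not (any (evalPred p) xs)
evalRule Ascending xs = and (zipWith (<) xs (drop 1 xs))

-- With QuickCheck one would generate random triplets and keep only
-- those satisfying the rule; here, a brute-force stand-in over 1..10:
examplesFor :: Rule -> [[Int]]
examplesFor r = [ [a,b,c] | a <- [1..10], b <- [1..10], c <- [1..10]
                          , evalRule r [a,b,c] ]

main :: IO ()
main = print (take 3 (examplesFor (AllP Even)))
-- prints [[2,2,2],[2,2,4],[2,2,6]]
```

Difficulty tiers then fall out naturally: the "medium" setting just extends the `Pred` type with more constructors (e.g. an `IsPrime`) and widens the number range.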
(Before anyone gets too impressed, I don’t have any running code yet; I got bogged down in figuring out how to use GADTs to turn my data constructors into code. And if you’re wondering why there are no plans to have the computer guess the human’s rule - that’s because it’s a hard problem. It’s much easier to generate a random rule and then run triplets past it than it is to infer rules from a fixed set of triplets.)
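(For readers unfamiliar with the GADT trick being alluded to: the usual approach, sketched below under my own assumed names, is to index the formula type by what it denotes, so that a single total interpreter turns each well-typed formula value into runnable code.)

```haskell
{-# LANGUAGE GADTs #-}

-- An expression indexed by the type it evaluates to, so that
-- ill-typed formulas (e.g. adding a Bool) cannot be constructed.
data Expr a where
  LitI :: Int  -> Expr Int
  LitB :: Bool -> Expr Bool
  Add  :: Expr Int  -> Expr Int  -> Expr Int
  Leq  :: Expr Int  -> Expr Int  -> Expr Bool
  And  :: Expr Bool -> Expr Bool -> Expr Bool

-- The interpreter is total: pattern matching refines the type index,
-- so each constructor "becomes code" with no runtime type checks.
eval :: Expr a -> a
eval (LitI n)  = n
eval (LitB b)  = b
eval (Add x y) = eval x + eval y
eval (Leq x y) = eval x <= eval y
eval (And x y) = eval x && eval y
```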
Release early and release often. I can’t wait to try your app. I’ve been wanting to play Zendo for a long time, but wasn’t willing to pay for all the sets of pieces that it seems are required now that Zendo is out of print.
In terms of generating rules, you might want to look at CopyCat and its algorithms for analogy solving. There is an excellent discussion of the strategies it uses in the author’s recent Complexity: A Guided Tour. You’ll certainly get lots of good ideas for rule generation by looking at the discussion in that book.
As an aside, given all the GEB love around here, it’s worth noting that CopyCat was originally developed by Douglas Hofstadter and a student of his, the latter being the author of the book linked.
Eliezer has mentioned CopyCat many times, so I figured that Mitchell and the relation to the FARGonauts was redundant information. On the topic of Mitchell and books, I also recommend An Introduction to Genetic Algorithms.
I don’t recall seeing CopyCat mentioned on OB. Has he mentioned it elsewhere, perhaps?
Not everyone here is familiar with Eliezer’s stuff outside OB/LW and an awareness that SIAI is his “day job”.
You’re right, I stand corrected. I could have sworn I remembered him mentioning CopyCat in OB before, but I can’t find any now, only in other essays like “General Intelligence and Seed AI” and “The Plan to Singularity”.
CopyCat does look interesting. I note that:
“Since the 1995 FARG book, work on Copycat-like models has continued: as of 2008 the latest models are Phaeaco (a Bongard problem solver), SeqSee (number sequence extrapolation), George (geometric exploration), and Musicat (a melodic expectation model).”
3D Zendo is basically a variant on Bongard problems, and if a program can extrapolate numbers, then it could also test them against the human oracle to see if it’s right.
What’s the difference between one’s mind laws and mindless “natural” laws?
You just pointed it out. The difference between “mind” and “mindless”. If a human is guessing, there are different techniques for determining things thought up by another human than things not thought up at all.
(caveat: anthropic argument)
So it’s not really about the laws themselves (being “mindless” or “mind”) as about the context in which the guessing/researching is done. Guessing a natural law known by a person in front of you is different from discovering it anew by yourself.
Or the chess variant Penultima.