Modern “AI” research programs tend to develop relatively simple “training wheels” tasks with objectively measurable and reproducible performance. That at least gives them the trappings of science. The same can’t be said for most early AI work.
If there really isn’t enough information in his papers to reproduce his results (I have not read them), then Lenat must at least be suspected of painting an overly rosy picture of how awesome his creation was.
If the result is just “this is cool”, then a public binary, web service, or source code release would be welcome.