In your opinion, what might be some methods for discovering truth?
Observing, thinking, having ideas, and communicating with other people doing these things. Nothing surprising there. No-one has yet come up with a general algorithm for discovering new and interesting truths; if they did, it would be an AGI.
Taking a wider view of this, it has been observed that every time some advance is made in the mathematics or technology of information processing, the new development is seized on as a model for how minds work, and, since the invention of computers, for how minds might be made. The ancient Greeks compared the mind to a steam-driven machine. The Victorians compared it to a telephone exchange. Freud and his contemporaries drew on physics for their metaphors of psychic energies and forces. When computers were invented, the mind was a computer. Then holograms were invented and it was a hologram. Perceptrons fizzled because they couldn’t even compute an XOR, neural networks achieved Turing-completeness but no-one ever made a brain out of them, and logic programming is now just another programming style.
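To make the perceptron remark concrete, here is a quick Python sketch (mine, not from the thread; all weights are hand-picked for illustration): a single threshold unit draws one line through the plane, and no single line separates XOR’s positive points (0,1), (1,0) from its negative points (0,0), (1,1) — but two hidden units feeding an output unit compute it immediately.

```python
# A single threshold unit computes step(w1*x1 + w2*x2 + b), i.e. one
# half-plane. No half-plane contains (0,1) and (1,0) while excluding
# (0,0) and (1,1), so one unit cannot compute XOR.
# Two hand-wired units plus an output unit can:

def unit(x1, x2, w1, w2, b):
    """Classic threshold (perceptron) unit."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def xor(x1, x2):
    h1 = unit(x1, x2, 1, 1, -0.5)    # OR of the inputs
    h2 = unit(x1, x2, -1, -1, 1.5)   # NAND of the inputs
    return unit(h1, h2, 1, 1, -1.5)  # AND of the two hidden units

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

The second layer is exactly what Minsky and Papert’s single-layer perceptrons lacked.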
Bayesian inference is just the latest in that long line. It may be the one true way to reason about uncertainty, as predicate calculus is the one true way to reason about truth and falsity, but that does not make it a universal algorithm for thinking.
I didn’t get the impression that Bayesian inference itself was going to produce intelligence; the impression I have is that Bayesian inference is the best possible interface with reality. Attach a hypothesis-generating module to one end and a sensor module to the other and that thing will develop the correctest-possible hypotheses. We just don’t have any feasible hypothesis-generators.
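That attach-a-sensor picture can be sketched in a few lines of Python (an illustration with made-up numbers, not anyone’s actual proposal): a fixed grid of coin-bias hypotheses stands in for the hypothesis generator, a stream of flips for the sensor, and Bayes’ rule does the rest.

```python
import random

random.seed(0)

# "Hypothesis generator": trivially, a fixed grid of candidate coin biases.
# "Sensor": a stream of flips from a coin whose true bias is 0.7.
# Bayes' rule concentrates posterior mass on whichever candidate
# predicts the data best.

hypotheses = [i / 10 for i in range(1, 10)]               # candidate biases
posterior = {h: 1 / len(hypotheses) for h in hypotheses}  # uniform prior

def update(posterior, flip):
    """One Bayesian update: multiply by the likelihood, renormalise."""
    post = {h: p * (h if flip else 1 - h) for h, p in posterior.items()}
    total = sum(post.values())
    return {h: p / total for h, p in post.items()}

for _ in range(500):
    flip = random.random() < 0.7   # sensor reading
    posterior = update(posterior, flip)

best = max(posterior, key=posterior.get)
print(best)  # after 500 flips the mass concentrates near 0.7
```

The update step is optimal; the hard, unsolved part — as the comment says — is generating a hypothesis set worth updating over when the grid isn’t handed to you.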
I didn’t get the impression that Bayesian inference itself was going to produce intelligence

I do get that impression from people who blithely talk of “Bayesian superintelligences”. Example. What work is the word “Bayesian” doing there?
In this example, a Bayesian superintelligence is conceived as having a prior distribution over all possible hypotheses (for example, a complexity-based prior) and using its observations to optimally converge on the right one. You can even make a theoretically optimal learning algorithm that provably converges on the best hypothesis. (I forget the reference for this.) Where this falls down is the exponential explosion of the hypothesis space with complexity. There is no use in a perfect optimiser that takes longer than the age of the universe to do anything useful.
It would be a significant part of an AGI. Maybe even the hardest part. But not enough to be considered an AGI itself.
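The exponential-explosion point is easy to put numbers on. A back-of-the-envelope sketch in Python, assuming a binary description language and a generously fast evaluator (both figures invented for illustration):

```python
# With a binary description language there are 2**n candidate hypotheses
# of description length n. Even at an (assumed, generous) billion
# evaluations per second, brute enumeration is hopeless long before the
# descriptions get interesting.

AGE_OF_UNIVERSE_S = 4.35e17   # ~13.8 billion years, in seconds
RATE = 1e9                    # assumed hypothesis evaluations per second

for n in (30, 60, 90, 120):
    seconds = 2 ** n / RATE
    print(n, seconds, seconds > AGE_OF_UNIVERSE_S)
```

Somewhere between 60 and 90 bits of description, exhaustive search already outlasts the universe — and 90 bits is nowhere near enough to describe anything a superintelligence would care about.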
Thank you, that was very enlightening. I see now where you were coming from.
I still think that some breakthroughs are more ~~equal~~ fundamental and some methods are more correct, that is, more efficient in seeking the truth. Perhaps attempts to first point out some specific interesting features of human consciousness (or intelligence, or the brain) and only then try to analyse and replicate them would meet with more success. In that sense logic and neural networks are successful, while Bayesian inference is not.
I wonder if you are familiar with TRIZ? It strikes me as positively loony, but it is a not-outright-unsuccessful attempt at a general algorithm for discovering new, uh, counterintuitive implications of known natural laws. Not truths per se, but pretty close.
Double tildes mean strike-through.
I’ve read a book on it, as it happens. It seemed quite a useful set of schemas for generating new ideas in industrial design, but of course not a complete algorithm.
I’ve peeked at your profile and the linked page. See, I’m currently enrolled in a linguistics program, and I was considering dedicating some time to The Art of Prolog, so I’ve researched what Prolog software there is and wasn’t especially impressed. Could I maybe ask you for advice as to what kind of side project Prolog is suited for? I’m familiar with Lisp and C and I’ve dabbled with Haskell and Coq, and I would really like to write something at least marginally useful.
I think Prolog, like Lisp, is mainly useful for being a different way of thinking about computation. The only practical industrial uses of Prolog I’ve ever heard of are some niche expert systems, a tool for exploring Unix systems for security vulnerabilities, and an implementation of part of the Universal Plug and Play protocol.