Indeed. Not only could belief prop have been invented in 1960, it was invented around 1960 (published 1962, “Low density parity check codes”, IRE Transactions on Information Theory) by Robert Gallager, as a decoding algorithm for error correcting codes.
I recognized that Gallager’s method was the same as Pearl’s belief propagation in 1996 (MacKay and Neal, “Near Shannon limit performance of low density parity check codes”, Electronics Letters, vol. 33, pp. 457-458).
This says something about the ability of AI to potentially speed up research by simply linking known ideas (even if it’s not really AGI).
Came here to say this, got beaten to it by Radford Neal himself, wow! Well, I’m gonna comment anyway, even though it’s mostly been said.
Gallager proposed belief propagation as an approximate, good-enough method of decoding a certain error-correcting code, but didn’t notice that it worked on all sorts of probability problems. Pearl proposed it as a general mechanism for dealing with probability problems, but wanted perfect mathematical correctness, so confined himself to tree-shaped problems. It was their common generalization that was the real breakthrough: an approximate, good-enough solution to all sorts of problems. Which is what Pearl eventually noticed, so props to him.
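To make the tree-shaped case concrete: here’s a minimal sketch of sum-product message passing on a three-variable chain (the simplest tree), with made-up pairwise potentials chosen purely for illustration. On a tree the messages give exact marginals, which the sketch checks against brute-force enumeration; on loopy graphs the same updates become the approximate, good-enough method described above.

```python
import itertools

# Chain of three binary variables x0 - x1 - x2 with pairwise potentials
# (arbitrary illustrative numbers). On a tree, sum-product message
# passing computes exact marginals.
psi01 = [[1.0, 0.5], [0.5, 2.0]]   # potential on (x0, x1)
psi12 = [[2.0, 1.0], [1.0, 0.5]]   # potential on (x1, x2)

# Messages into x1 from each neighbor: sum out the far variable.
m_from_x0 = [sum(psi01[x0][x1] for x0 in (0, 1)) for x1 in (0, 1)]
m_from_x2 = [sum(psi12[x1][x2] for x2 in (0, 1)) for x1 in (0, 1)]

# Belief at x1 = product of incoming messages, normalized.
b = [m_from_x0[x1] * m_from_x2[x1] for x1 in (0, 1)]
z = sum(b)
belief_x1 = [v / z for v in b]

# Brute-force marginal of x1 for comparison.
raw = [0.0, 0.0]
for x0, x1, x2 in itertools.product((0, 1), repeat=3):
    raw[x1] += psi01[x0][x1] * psi12[x1][x2]
zb = sum(raw)
brute_x1 = [v / zb for v in raw]

assert all(abs(a - c) < 1e-12 for a, c in zip(belief_x1, brute_x1))
```

The trick Pearl formalized is that on a tree the summations factor along the edges, so each variable’s marginal is just the product of messages from its neighbors; on a graph with cycles that factorization no longer holds exactly, which is precisely where Gallager’s “run it anyway” attitude pays off.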
If we’d had AGI in the 1960s, someone with a probability problem could have said “Here’s my problem. For every paper in the literature, spawn an instance to read that paper and tell me if it has anything that helps with my problem.” It would have found Gallager’s paper and said “Maybe you could use this?”