If you really had a method for, say, hypothesis generation, this would actually imply logical omniscience, and would basically allow us to create full AGI, RIGHT NOW.
This is correct. Arguments against Bayesianism ultimately boil down to “it’s not enough for AGI”. And they are stupid, because nobody has ever claimed that it was. But then arguments in favor of Bayesianism boil down to “it’s True”. And they are stupid, because “True” is not quite the same as “useful”. I think this whole debate is pointless, since there is very little the two sides actually disagree on, besides some wordings.
Having said that, I think the question “how to reason well” should be seen as equivalent to “how to build an AGI”, which probably places me on the anti-Bayesian side.
There are some equivalences, but there are also some differences. We already know some uncomputable methods that are optimal among all computable inference methods, and they are perfectly Bayesian. But when it comes down to computable optimality, we are in deep waters.
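A concrete instance of such an uncomputable-but-optimal method (assuming the reference here is to Solomonoff induction, the standard example) can be sketched as follows. The Solomonoff prior weights every program for a universal machine U by its length, and prediction is ordinary Bayesian conditioning on that prior:

```latex
% Solomonoff prior over binary sequences: sum over all programs p
% whose output on universal machine U begins with the string x.
M(x) \;=\! \sum_{p \,:\, U(p) = x*} 2^{-|p|}

% Prediction of the next bit is a plain Bayesian update:
M(x_{n+1} \mid x_{1:n}) \;=\; \frac{M(x_{1:n} x_{n+1})}{M(x_{1:n})}

% Dominance: for every computable distribution \mu,
% M is within a multiplicative constant (depending on \mu's
% Kolmogorov complexity K(\mu)) of \mu, so its prediction errors
% against any computable source are bounded.
M(x) \;\ge\; 2^{-K(\mu)}\, \mu(x)
```

The dominance inequality is what makes M optimal among all computable predictors, while M itself is uncomputable because the sum quantifies over all halting programs.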