As for the last line, I find That Alien Message pretty persuasive. We have plenty of data that we just lack the capacity to process efficiently, either because of bias (e.g. estimating the real-world effectiveness of political policies always turns into a game of selective reporting and analogy) or because of our limited hardware (e.g. protein folding: we have the sequences and the basic interactions, but we can’t currently see the patterns in the data even well enough to pawn off the real work to our computers at any level beyond brute force). A Bayesian AI with superior hardware would have more than enough data already to crack these basic problems.
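To make the brute-force point concrete, here is a toy sketch in Python (my own illustration, not anyone’s actual method) using the standard 2D HP lattice simplification of folding: the only way to find the best-scoring conformation here is to enumerate every self-avoiding walk, and the number of candidates grows exponentially with sequence length. The example sequence is made up.

    # Toy illustration only: brute-force search in the 2D HP lattice model,
    # a standard simplification where each residue is hydrophobic (H) or
    # polar (P) and the score counts H-H contacts between residues that are
    # lattice neighbours but not chain neighbours.
    from itertools import product

    MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

    def score(coords, seq):
        # Count hydrophobic contacts between residues not adjacent in the chain.
        index_at = {c: i for i, c in enumerate(coords)}
        contacts = 0
        for (x, y), i in index_at.items():
            for dx, dy in MOVES.values():
                j = index_at.get((x + dx, y + dy))
                if j is not None and j > i + 1 and seq[i] == seq[j] == "H":
                    contacts += 1
        return contacts

    def best_fold(seq):
        # Enumerate every self-avoiding walk of length len(seq) - 1 on the lattice.
        best = -1
        for moves in product(MOVES.values(), repeat=len(seq) - 1):
            coords = [(0, 0)]
            for dx, dy in moves:
                x, y = coords[-1]
                step = (x + dx, y + dy)
                if step in coords:      # chain collided with itself; discard this walk
                    break
                coords.append(step)
            else:
                best = max(best, score(coords, seq))
        return best

    # 4**(n-1) candidate walks: about 2.6e5 for n = 10, about 1.2e18 for n = 31.
    print(best_fold("HPHPPHHPHH"))   # made-up 10-residue sequence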
Is protein folding a basic problem? People have suggested it is akin to being NP-complete.
With regard to political policies, how will the AGI know which political data is good enough to base political planning on? Also, I don’t think we have enough data for making policy that improves what really matters. We probably have enough to make policies that improve GDP or some such, but enough to make sure that fathers can spend enough time with their kids to provide a decent role model? I’m sceptical.
I’m not qualified to judge protein folding; but it seems extraordinarily likely that among all the problems that currently appear too difficult for us, some can be solved quickly with better Bayesian processing of the mounds of data we currently have.
The great advances we’ve made, we’ve made by narrowing down the search space of hypotheses in an intuitive (quasi-Bayesian) manner; but certain things, like computer code, contain patterns our intuition isn’t optimized to grok. (That’s why programming languages help so much by letting us apply the verbal intuition we already have; but even so, we lose track of what goes where after only a very few steps and recursions.) That very general block to optimization is why I suspect a better Bayesian could vastly outperform the best current intuitive approaches.
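To make the quasi-Bayesian narrowing concrete, here is a minimal sketch; the coin hypotheses, priors, and likelihoods are invented purely for illustration.

    # Minimal sketch of Bayesian narrowing of a hypothesis space; the
    # hypotheses, priors, and likelihoods below are invented for illustration.

    def update(prior, likelihood):
        # One application of Bayes' rule: multiply by the likelihood and renormalise.
        unnorm = {h: prior[h] * likelihood[h] for h in prior}
        total = sum(unnorm.values())
        return {h: p / total for h, p in unnorm.items()}

    # Three toy hypotheses about a coin, starting from a uniform prior.
    posterior = {"fair": 1 / 3, "heads-biased": 1 / 3, "tails-biased": 1 / 3}
    p_heads = {"fair": 0.5, "heads-biased": 0.9, "tails-biased": 0.1}

    # Observe eight heads in a row; each observation sharpens the posterior.
    for _ in range(8):
        posterior = update(posterior, p_heads)

    print({h: round(p, 4) for h, p in posterior.items()})
    # Nearly all the probability mass ends up on "heads-biased": the data
    # itself prunes the rest of the hypothesis space.

The point is only that each observation multiplies in and renormalises, so the data itself prunes the hypothesis space far faster than checking candidates one at a time.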