Why can’t AI researchers formulate and test theories the way high-energy physicists do?
But surely they do? Every proposal for a way of doing AI (I’m reading this as AGI here) is a hypothesis about how an AI could be created, and the proposers’ failure to create an AI is the refutation of that hypothesis. Science as normal. Talk of physics envy is just an excuse for failure. The problem with excusing failure is that it leaves you with failure, when the task is to succeed.
Every proposal for turning lead into gold is a hypothesis about how lead could be turned into gold, but this doesn’t make alchemy science. Good science progresses through small problems conclusively solved, building on each other, not by trying and repeatedly failing to reach the grand goal.
I dunno. I feel like there should be a symmetry between positive results and negative results, like well-designed but failed experiments shouldn’t lose science points just because they failed.
While I wouldn’t go so far as to say that a huge number of grand designs with negative results is not science, it seems to me that they amount to brute-forcing the solution.
Every negative in a brute-force attack eliminates only one key, and doesn’t give much information, since negatives are far more numerous than positives. It is not an efficient way to search the space, and we should try to do a lot better if we can. It is the method of last resort.
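The asymmetry between negatives and positives can be made concrete with a quick information-theoretic sketch (the keyspace size here is a made-up illustration, not anything from the discussion above):

```python
import math

def bits_from_negative(n: int) -> float:
    """Information (in bits) gained by eliminating one of n equally
    likely candidates: the space shrinks from n to n - 1."""
    return math.log2(n / (n - 1))

# Hypothetical keyspace of 2**20 candidate designs.
n = 2 ** 20

# One failed attempt rules out a single candidate: a tiny fraction of a bit.
print(f"{bits_from_negative(n):.8f}")

# By contrast, locating the correct key outright resolves the whole
# space at once: log2(n) bits.
print(math.log2(n))
```

The point the comment makes falls out of the numbers: each negative yields roughly 1/(n ln 2) bits, so a search that learns only from failures needs on the order of n trials, while a method that decomposes the problem can gain whole bits per step.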