Producing and proving/testing all strings up to 1000 bits could be pretty expensive. And yet 1000 bits might not be enough for an intelligent AI.
Therefore, in this universe it is probably not possible to create an intelligent AI by simply enumerating and testing all bitstrings.
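The scale here is worth making concrete. A minimal back-of-the-envelope sketch in Python (the 10^18 tests per second and the age-of-the-universe figure are invented, wildly optimistic assumptions, just to bound the fraction of the space you could ever cover):

```python
# Rough scale check for brute-force enumeration of 1000-bit programs:
# there are 2**1000 distinct 1000-bit strings.
num_programs = 2 ** 1000

# Suppose we could test 10**18 candidates per second (wildly optimistic)
# and had been running since roughly the Big Bang (~4.3e17 seconds).
tests_per_second = 10 ** 18
seconds_available = int(4.3e17)
checked = tests_per_second * seconds_available

fraction = checked / num_programs  # fraction of the space covered
print("decimal digits in 2**1000:", len(str(num_programs)))
print("fraction of the space ever checked: %.3e" % fraction)
```

Even under those assumptions, the covered fraction is vanishingly small, which is the point of the comment above.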
Yes. This shows you would need a better algorithm than brute-force search. But better algorithms are known to exist—to pick a trivial one, you can do random generation in a nontrivial programming language, with a grammar and a type system. This lets you rule out lots of ill-typed or ill-formed programs quickly.
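To illustrate the idea (this is not any actual system, just a toy): type-directed random generation grows expression trees from a grammar, choosing only productions whose result type matches what the context demands, so ill-typed programs are never produced in the first place. Every production and name below is invented for the example:

```python
import random

# A sketch of type-directed random generation over a tiny typed grammar.
# Each production is (result_type, template, argument_types); we only ever
# pick productions whose result type matches what the context demands.
PRODUCTIONS = [
    ("int",  "{0} + {1}", ("int", "int")),
    ("int",  "len({0})",  ("str",)),
    ("bool", "{0} < {1}", ("int", "int")),
    ("int",  "1",         ()),
    ("int",  "2",         ()),
    ("str",  '"ab"',      ()),
    ("bool", "True",      ()),
]

def gen(want_type, depth=3):
    """Generate a random, well-typed expression of the requested type."""
    # Near the depth limit, restrict to leaf productions (no arguments).
    options = [p for p in PRODUCTIONS
               if p[0] == want_type and (depth > 0 or not p[2])]
    _, template, arg_types = random.choice(options)
    return template.format(*(gen(t, depth - 1) for t in arg_types))

expr = gen("int")
print(expr, "=>", eval(expr))  # every generated expression evaluates cleanly
```

Contrast this with enumerating raw bitstrings: here, every output is a syntactically valid, well-typed program by construction, so the search space shrinks enormously.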
A still less trivial example would generate new programs out of bits of existing programs. See the “macho” work at the University of Illinois for an example. They’re able to synthesize small programs (the size of ‘ls’) from the man-page description, plus a big body of example code.
Better algorithms may well exist, but they can’t themselves ultimately be selected by an algorithm; you would then have an infinite regress of picking algorithms.
Generating all possible programs doesn’t solve anything, since we still have to select the program we actually use to solve a particular problem; moreover, the algorithm that generates all possible algorithms cannot itself be algorithmically determined. So what you say doesn’t refute my point at all.
Hrm? Suppose you’re trying to solve some problem X. There is a range of algorithms and heuristics available to you. You try a few of them. At some point—usually a very quick point—one of them is good enough for your practical purpose, and you stop.
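That informal loop can be sketched directly. This is a toy illustration, with an invented problem and invented candidate heuristics:

```python
# Try candidate heuristics in order; stop at the first "good enough" one.
def good_enough(error, tolerance=0.05):
    return error <= tolerance

# Toy task: estimate the minimum of f(x) = (x - 3.3)**2 on [0, 10].
# The true minimum value is 0, attained at x = 3.3.
f = lambda x: (x - 3.3) ** 2

def coarse_grid(f):
    return min(f(x) for x in range(0, 11))          # step size 1

def fine_grid(f):
    return min(f(x / 100) for x in range(0, 1001))  # step size 0.01

candidates = [("coarse grid", coarse_grid), ("fine grid", fine_grid)]

chosen = None
for name, heuristic in candidates:
    error = abs(heuristic(f) - 0.0)  # distance from the true minimum
    if good_enough(error):
        chosen = name
        print(f"settled on '{name}' with error {error:.4f}")
        break
```

The coarse grid misses the tolerance, the fine grid meets it, and the loop stops there: no regress, just "good enough for the purpose at hand".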
We don’t typically go very far in formalizing our purposes. But I don’t see what deep point you’re driving at. For practical purposes, algorithms are chosen by people in order to solve practical problems. Usually there are a few layers of formalized intermediaries—compilers and libraries and suchlike. But not very far down the regress, there’s a human. And humans settle for good enough. And they don’t have a formal model of how they do so.
There isn’t an infinite algorithmic regress. The particular process humans use to choose algorithms is unquestionably not a clean formal algorithm. Nobody ever said it was. The regress stops when you come to a human, who was never designed and isn’t an algorithm-choosing algorithm. But that doesn’t shed any light on whether a formal algorithm exists that could act similarly to a human, or whether there is an algorithm-choice procedure that’s as good or better than a human.
This is fallacious.
Correct conclusion: you would then have a 1-step regress of picking algorithms.
Watch that slippery slope.