Any bitstring up to a certain length N can be produced.
Some of them are algorithms. Some can be proved correct rigorously. Some can be tested statistically.
I don’t see your point as valid.
Producing and proving/testing all strings up to 1000 bits could be pretty expensive. And yet 1000 bits might well not be enough for an intelligent AI.
Therefore, in this universe it is probably not possible to create an intelligent AI by simply enumerating and testing all bitstrings.
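Just to put a rough number on "pretty expensive" (a back-of-the-envelope sketch of my own, not part of the original exchange): the number of bitstrings up to 1000 bits is about 2^1001, i.e. on the order of 10^301.

```python
# Back-of-the-envelope sketch: how many candidate bitstrings a brute-force
# enumeration up to length N would have to produce and test.

def candidates_up_to(n_bits: int) -> int:
    """Count all bitstrings of length 1..n_bits: 2 + 4 + ... + 2^n = 2^(n+1) - 2."""
    return 2 ** (n_bits + 1) - 2

if __name__ == "__main__":
    n = 1000
    total = candidates_up_to(n)
    # Roughly 2.1 * 10^301 candidates, astronomically more than the ~10^80
    # atoms usually estimated for the observable universe.
    print(f"bitstrings up to {n} bits: about 10^{len(str(total)) - 1}")
```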
Yes. This shows you would need a better algorithm than brute-force search. But better algorithms are known to exist—to pick a trivial one, you can do random generation in a nontrivial programming language, with a grammar and a type system. This lets you rule out lots of ill-typed or ill-formed programs quickly.
A still less trivial example would generate new programs out of bits of existing programs. See the “macho” work at the University of Illinois for an example. They’re able to synthesize small programs (the size of ‘ls’) from the man-page description, plus a big body of example code.
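To make the first idea concrete, here's a toy sketch of my own (a minimal illustration, nothing to do with the "macho" system): expressions are generated at random from a tiny grammar, and a simple type checker throws away the ill-typed candidates before anything is ever run.

```python
import random

# Minimal sketch: random generation from a tiny expression grammar, plus a
# type check that discards ill-typed candidates up front.

LITERALS = [("int", 0), ("int", 1), ("bool", True), ("bool", False)]

def random_expr(depth: int = 3):
    """Randomly build an expression tree out of literals, '+', and 'if'."""
    if depth == 0 or random.random() < 0.4:
        return ("lit",) + random.choice(LITERALS)
    if random.random() < 0.5:
        return ("add", random_expr(depth - 1), random_expr(depth - 1))
    return ("if", random_expr(depth - 1), random_expr(depth - 1), random_expr(depth - 1))

def type_of(expr):
    """Return 'int'/'bool' for well-typed expressions, or None if ill-typed."""
    tag = expr[0]
    if tag == "lit":
        return expr[1]
    if tag == "add":
        ok = type_of(expr[1]) == "int" and type_of(expr[2]) == "int"
        return "int" if ok else None
    if tag == "if":
        cond, then_t, else_t = type_of(expr[1]), type_of(expr[2]), type_of(expr[3])
        return then_t if cond == "bool" and then_t == else_t and then_t else None
    return None

def well_typed_samples(k: int):
    """Keep sampling until k well-typed candidate programs have been found."""
    found = []
    while len(found) < k:
        e = random_expr()
        if type_of(e) is not None:
            found.append(e)
    return found

if __name__ == "__main__":
    for e in well_typed_samples(3):
        print(e)
```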
Better algorithms may well exist, but they themselves can't ultimately be selected by an algorithm, since you would then have an infinite regress of picking algorithms.
Generating all possible programs doesn't solve anything, since we still have to select the program we actually use to solve a particular problem; moreover, the algorithm that generates all possible algorithms cannot itself be algorithmically determined. So what you say doesn't refute my point at all.
Hrm? Suppose you're trying to solve some problem X. There is a range of algorithms and heuristics available to you. You try a few of them. At some point, usually a very quick point, one of them is good enough for your practical purpose, and you stop.
We don't typically go very far in formalizing our purposes. But I don't see what the deep point is that you're driving at. For practical purposes, algorithms are chosen by people in order to solve practical problems. Usually there are a few layers of formalized intermediaries, such as compilers and libraries. But not very far down the regress, there's a human. And humans settle for good enough. And they don't have a formal model of how they do so.
There isn’t an infinite algorithmic regress. The particular process humans use to choose algorithms is unquestionably not a clean formal algorithm. Nobody ever said it was. The regress stops when you come to a human, who was never designed and isn’t an algorithm-choosing algorithm. But that doesn’t shed any light on whether a formal algorithm exists that could act similarly to a human, or whether there is an algorithm-choice procedure that’s as good or better than a human.
The infinite-regress claim is fallacious.
Correct conclusion: you would then have a 1-step regress of picking algorithms.
Watch that slippery slope.
You can even write an algorithm that goes through all possible algorithms (universal dovetailer). Yet this algorithm doesn’t solve any problem. So you still have to select an algorithm to solve a problem.
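To picture the dovetailer concretely, here's a minimal sketch of my own (a toy model in which "programs" are Python generators that yield once per step, rather than programs of a real universal machine): at stage n it starts program n and advances every program started so far by one step, so every program eventually gets arbitrarily many steps even though some never halt.

```python
from itertools import count

# Sketch of dovetailing, assuming programs are modelled as generators that
# yield once per "step" and may or may not ever finish.

def dovetail(program_enumeration):
    """At stage n, start program n and advance every started program one step.

    Yields (index, result) whenever some program halts with a result; programs
    that never halt just keep getting interleaved steps forever.
    """
    running = []                              # (index, generator) pairs still running
    for n in count():
        running.append((n, program_enumeration(n)))
        still_running = []
        for idx, prog in running:
            try:
                next(prog)                    # advance by one step
                still_running.append((idx, prog))
            except StopIteration as done:     # program idx halted
                yield idx, done.value
        running = still_running

# Toy enumeration: program i does i steps of work and returns i*i; program 3 loops forever.
def toy_program(i):
    def run():
        if i == 3:
            while True:
                yield                         # never halts
        for _ in range(i):
            yield                             # one unit of work per step
        return i * i
    return run()

if __name__ == "__main__":
    results = dovetail(toy_program)
    for _ in range(5):
        print(next(results))                  # results arrive out of order; program 3 never shows up
```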
The universal dovetailer runs through all possible programs, which is a superset of all algorithms. You can't use it to get access to just the genuine algorithms in any algorithmic fashion; if you could, you could solve the Halting Problem.
I agree. That's why I say "this algorithm doesn't solve any problem": it isn't a problem-solving algorithm in the sense I used in my post. Any "just go through all XYZ" doesn't solve my stated problem, because it doesn't select the actual useful solution.
That's not the issue here. The issue is more subtle. Dovetailing doesn't go through every algorithm but goes through every program. That is, it runs a program whether or not that program will halt.
I’m not completely sure what you mean by a problem solving algorithm. Variations of universal dovetailing that are very close to it can be problem solving algorithms by most reasonable notions of the term. Consider for example the following:
Proposition: If P=NP then there is an explicitly constructible algorithm which gives explicit solutions to 3-SAT in polynomial time.

Proof sketch: For a given 3-SAT instance, dovetail through all programs. Whenever a program gives an output, check whether that output is a solution to the 3-SAT instance. If so, you've found your answer. It isn't too hard to see that this process will terminate in polynomial time if P=NP.
(I'm brushing some issues here under the rug, like what we mean by explicitly constructible, and there's an implicit assumption here of some form of ω-consistency for our axiomatic system.)
This sort of construction works for a large variety of problems, so one can say that, morally speaking, if an algorithm exists to do something in some complexity class, then dovetailing will find an example of such an algorithm.
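To make the construction concrete, here's a toy sketch of the verification loop (my own illustration; the dovetailed programs are stubbed out as a plain stream of candidate outputs, and names like `first_verified` are invented for the example): each output gets a cheap polynomial-time check against the 3-SAT instance, so wrong outputs are simply discarded and a correct one is recognized the moment it appears.

```python
# Toy version of the proposition's construction. In the real argument the
# candidate outputs come from dovetailing over all programs of a universal
# machine; here they are just a stubbed-in stream. The key point survives:
# checking a candidate against the 3-SAT instance is cheap, so bad outputs
# are discarded and a correct one is recognized when it shows up.

from typing import Dict, Iterable, List, Optional

Clause = List[int]            # e.g. [1, -2, 3] means (x1 OR NOT x2 OR x3)
Assignment = Dict[int, bool]

def satisfies(clauses: List[Clause], assignment: Assignment) -> bool:
    """Polynomial-time check: does the assignment satisfy every clause?"""
    def lit_true(lit: int) -> bool:
        val = assignment.get(abs(lit))
        return val is not None and (val if lit > 0 else not val)
    return all(any(lit_true(lit) for lit in clause) for clause in clauses)

def first_verified(clauses: List[Clause],
                   candidate_outputs: Iterable[Assignment]) -> Optional[Assignment]:
    """Take outputs as the dovetailer produces them; return the first one that checks out."""
    for candidate in candidate_outputs:
        if satisfies(clauses, candidate):
            return candidate
    return None               # never reached on a satisfiable instance with a complete enumeration

if __name__ == "__main__":
    # (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3 OR x1)
    cnf = [[1, 2], [-1, 3], [-2, -3, 1]]
    # Stand-in for "whenever a program gives an output": two wrong guesses, then a right one.
    outputs = [{1: False, 2: False, 3: False},
               {1: False, 2: True, 3: True},
               {1: True, 2: False, 3: True}]
    print(first_verified(cnf, outputs))
```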