All existing learning algorithms I know of, and I dare say all that exist, have at least a utility function, and also something that could be interpreted as a decision theory. Consider for example support vector machines, which explicitly try to maximize a margin (that would be the utility function), and any algorithm for computing SVMs can be interpreted as a decision theory. Similar considerations hold for neural networks, genetic algorithms, and even the minimax algorithm.
Thus, I strongly doubt that the notion of a learning algorithm with no utility function makes any sense.
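For reference, the margin criterion being treated here as the “utility function” is, in its standard soft-margin form, the textbook objective (a minimal sketch, nothing specific to this thread):

```latex
% Soft-margin SVM objective: the margin is 2/||w||, so minimizing ||w||
% (plus slack penalties) is exactly the "maximize the margin" criterion.
\min_{w,\,b,\,\xi} \;\; \tfrac{1}{2}\lVert w \rVert^2 \;+\; C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad y_i \,(w \cdot x_i + b) \;\ge\; 1 - \xi_i, \qquad \xi_i \ge 0 .
```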
Those are optimization criteria, but they are not decision algorithms in the sense that we usually talk about them in AI. A support vector machine is just finding the extrema of a cost function via its derivative, not planning a sequence of actions.
The most popular algorithm for SVMs does plan a sequence of actions, complete with heuristics as to which action to take. True, the “actions” are internal: they are changes to some data structure within the computer’s memory, rather than changes to the external world. But that is not so different from e.g. a chess AI, which assigns some heuristic score to chess positions and attempts to maximize it using a decision algorithm (to decide which move to make), even though the chessboard is just a data structure within the computer memory.
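A toy sketch of what that looks like, loosely in the spirit of SMO-style solvers but deliberately simplified (single-coordinate updates on a bias-free dual; the selection heuristic, stopping rule, and function name are illustrative assumptions, not the actual libsvm code):

```python
# Toy dual coordinate ascent for a linear SVM without a bias term.
# Each step is an internal "action" (adjust one alpha_i), chosen by a
# heuristic (pick the coordinate with the largest KKT violation).
import numpy as np

def toy_svm_dual(X, y, C=1.0, steps=200):
    n = len(y)
    K = X @ X.T                                  # linear kernel matrix
    alpha = np.zeros(n)
    for _ in range(steps):
        margins = ((alpha * y) @ K) * y          # y_i * f(x_i) for each point
        grad = 1.0 - margins                     # dual gradient w.r.t. alpha_i
        # Heuristic "action selection": largest violation of the KKT
        # conditions among directions that stay inside the box [0, C].
        viol = np.where(alpha < C, np.maximum(grad, 0.0), 0.0) \
             + np.where(alpha > 0, np.maximum(-grad, 0.0), 0.0)
        i = int(np.argmax(viol))
        if viol[i] < 1e-6:
            break                                # approximately optimal: stop
        # The "action" itself: a clipped Newton step on one dual variable.
        alpha[i] = np.clip(alpha[i] + grad[i] / (K[i, i] + 1e-12), 0.0, C)
    w = (alpha * y) @ X                          # recover the primal weights
    return w, alpha
```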
“Internal” to the “agent” is very different from having an external output to a computational system outside the “agent”. “Actions” that come from an extremely limited, non-Turing-complete “vocabulary” (really: a programming language or computational calculus (those two are identical)) are also categorically different from a Turing-complete calculus of possible actions.
The same distinction applies to the hypothesis class that the learner can learn: if it’s not Turing-complete (or some approximation thereof, like a total calculus with coinductive types and corecursive programs), then it is categorically not general learning or general decision-making.
This is why we all employ primitive classifiers every day without danger, and you need something like Solomonoff’s algorithmic probability in order to build AGI.
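For reference, the algorithmic probability being pointed to is Solomonoff’s universal prior: a mixture over a Turing-complete hypothesis class, written roughly as follows (standard definition, with U a universal prefix machine):

```latex
% Every program q whose output (on U) begins with the observed string x
% contributes weight 2^(-length of q); shorter explanations weigh more.
M(x) \;=\; \sum_{q \;:\; U(q) \,=\, x\ast} 2^{-\ell(q)}
```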
I agree, of course, that none of the examples I gave (“primitive classifiers”) are dangerous. Indeed, the “plans” they are capable of considering are too simple to pose any threat (they are, as you say, not Turing complete).
But that doesn’t seem relevant to the argument at all. You claimed
a very general learning algorithm with some debug output, but no actual decision-theory or utility function whatsoever built in. That would be safe, since it has no capability or desire to do anything.
You claimed that a general learning algorithm without decision-theory or utility function is possible.
I pointed out that all (harmless) practical learning algorithms we know of do in fact have decision theories and utility functions.
What would “a learning algorithm without decision-theory or utility function, something that has no desire to do anything” even look like? Does the concept even make sense? Eliezer writes here
A string of zeroes down an output line to a motorized arm is just as much an output as any other output; there is no privileged null, there is no such thing as ‘no action’ among all possible outputs. To ‘do nothing’ is just another string of English words, that would be interpreted the same as any other English words, with latitude.
You claimed that a general learning algorithm without decision-theory or utility function is possible. I pointed out that all (harmless) practical learning algorithms we know of do in fact have decision theories and utility functions.
/facepalm
There is in fact such a thing as a null output. There is in fact such a thing as a learner with a sub-Turing hypothesis class. Such a learner with such a primitive output as “in the class” or “not in the class” does not engage in world optimization, that is: its actions do not, to its own knowledge, skew any probability distribution over future states of any portion of the world outside itself.
It does not narrow the future.
Now, what we’ve been proposing as an Oracle is even less capable. It would truly have no outputs whatsoever, only input and a debug view. It would, by definition, be incapable of narrowing the future of anything, even its own internal states.
Perhaps I have misused terminology, but that is what I was referring to: inability to narrow the outer world’s future.
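A concrete example of the kind of learner meant here, a plain perceptron, chosen purely for illustration: its hypothesis class is linear thresholds over a fixed, finite feature vector, and its entire output vocabulary is one bit.

```python
# A "primitive classifier": nowhere near Turing-complete, and the only
# possible output is a single bit -- in the class, or not in the class.
import numpy as np

class Perceptron:
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        # The entire "action vocabulary": True ("in the class") or False.
        return bool(self.w @ x + self.b > 0)

    def update(self, x, label):
        # Mistake-driven weight update; nothing here represents, predicts,
        # or steers the future of anything outside its own weight vector.
        error = int(label) - int(self.predict(x))
        self.w += self.lr * error * x
        self.b += self.lr * error
```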
This thing you are proposing, an “oracle” that is incapable of modeling itself and incapable of modeling its environment (either would require Turing-complete hypotheses), what could it possibly be useful for? What could it do that today’s narrow AI can’t?
A) It wasn’t my proposal.
B) The proposed software could model the outer environment, but not act on it.
Physics is Turing-complete, so no, a learner that did not consider Turing-complete hypotheses could not model the outer environment.
You seem to have lost the thread of the conversation. The proposal was to build a learner that can model the environment using Turing-complete models, but which has no power to make decisions or take actions. This would be a Solomonoff Inducer approximation, not an AIXI approximation.
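Stated roughly, using the universal mixture M from above: Solomonoff induction only predicts, while AIXI wraps an expectimax over actions around the same kind of mixture; the proposal is to approximate only the first part (standard formulations, sketched from the literature):

```latex
% Solomonoff induction: prediction only -- no actions anywhere.
M(x_{t+1} \mid x_{1:t}) \;=\; \frac{M(x_{1:t}\, x_{t+1})}{M(x_{1:t})}

% AIXI: the same kind of mixture, plus an expectimax over actions a_t --
% the decision-theory layer the proposed Inducer would simply not have.
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots\; \max_{a_m} \sum_{o_m r_m}
\big[\, r_t + \cdots + r_m \,\big]
\sum_{q \;:\; U(q,\, a_{1:m}) \,=\, o_{1:m} r_{1:m}} 2^{-\ell(q)}
```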
You said
There is in fact such a thing as a learner with a sub-Turing hypothesis class. Such a learner with such a primitive output as “in the class” or “not in the class” does not engage in world optimization, that is: its actions do not, to its own knowledge, skew any probability distribution over future states of any portion of the world outside itself.
…
Now, what we’ve been proposing as an Oracle is even less capable.
which led me to think you were talking about an oracle even less capable than a learner with a sub-Turing hypothesis class.
It would truly have no outputs whatsoever, only input and a debug view. It would, by definition, be incapable of narrowing the future of anything, even its own internal states.
If the hypotheses it considers are Turing-complete, then, given enough information (and someone would give it enough information, otherwise they couldn’t do anything useful with it), it could model itself, its environment, the relation between its internal states and what shows up on the debug view, and the reactions of its operators to the information they learn from that debug view. Its (internal) actions very much would, to its own knowledge, skew the probability distribution over future states of the outer world.