I find the ideas you discuss interesting, but they leave me with more questions. I agree that we are moving toward a more generic AI that we can use for all kinds of tasks.
I have trouble understanding the goal-completeness concept, and I’d reiterate @Razied’s point. You mention “steers the future very slowly”, so there is an implicit concept of “speed of steering”. I don’t find the Turing machine analogy helpful for inferring an analogous conclusion, because I don’t know what that conclusion is.
You’re making a qualitative distinction between humans as goal-complete agents and other animals as non-goal-complete agents, but I don’t understand what that distinction amounts to. I find the idea of goal-completeness interesting to explore, but it seems quite fuzzy at this point.
Unlike other animals, humans can represent any goal in a large domain such as the physical universe, and in a large fraction of cases they can think of useful actions that steer the universe toward that goal to an appreciable degree.
Some goals are more difficult than others, or require giving the human control over more resources than others, and measurements of optimization power are hard to define. Still, this definition takes a step toward formalizing the claim that humans are more of a “general intelligence” than animals. Presumably you agree with that claim?
It seems the crux of our disagreement comes down to whether Eliezer’s Optimization Power post is pointing at a sufficiently coherent concept.