I think it would be nice to state explicitly that the AI systems here are learned.
Good point, fixed.
Having coherent goals (in the sense of goals that don’t change) is a very dangerous case, but I’m not convinced that goal-directed agents must have one goal forever.
We should categorise a system as a goal-directed agent if it scores highly on most of these criteria, not just if it scores perfectly on all of them. So I agree that you don’t need one goal forever, but you do need it for more than a few minutes. And internal unification also means that the whole system is working towards this goal.
As for the examples of lacking 2, I feel like the ones you’re giving could be goal-directed.
Same here: lacking this doesn’t guarantee a lack of goal-directedness, but it’s one contributing factor. As another example, we might say that humans often plan in a restricted way: only doing things they’ve seen other people do before. And this definitely makes us less goal-directed.
It’s not clear what “sensitive” means for an objective. Do you mean that the objective values long-term plans more? That it doesn’t discount with the length of plans? Or instead something more like the expanding moral circle, where the AI has an objective that treats near-future and far-future (and near and far things) equally?
By “sensitive” I merely mean that differences in expected long-term or large-scale outcomes sometimes lead to differences in current choices.
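A toy sketch of what I take this to mean (hypothetical code, not from the post: the reward streams, discount factors, and `choose` function are all made up for illustration). "Sensitive" cashes out as: changing only the long-term outcomes of the options can change the agent's current choice.

```python
# Toy illustration (hypothetical): "sensitive" means that differences in
# expected long-term outcomes sometimes lead to differences in current choices.

def choose(options, discount):
    """Pick the option whose reward stream has the highest discounted sum."""
    def value(rewards):
        return sum(r * discount ** t for t, r in enumerate(rewards))
    return max(options, key=lambda name: value(options[name]))

# Each option is a stream of rewards over time: (now, later, much later).
options_a = {"short": [10, 0, 0], "long": [0, 0, 9]}
options_b = {"short": [10, 0, 0], "long": [0, 0, 30]}  # only the far future differs

myopic = 0.1       # heavily discounts the future
farsighted = 0.99  # barely discounts the future

# The myopic agent's current choice is insensitive to the long-term difference...
assert choose(options_a, myopic) == choose(options_b, myopic) == "short"
# ...while the farsighted agent's current choice changes with it.
assert choose(options_a, farsighted) == "short"
assert choose(options_b, farsighted) == "long"
```

Note that this already suggests the "scale" reading below: sensitivity comes in degrees, depending on how often long-term differences actually flip the choice.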
The change with mesa-optimizers (or simply optimizers) is that you treat the ingredients of optimization separately, but it still has the same problem of needing an objective it can use.
Yeah, I think there’s still much more to be done to make this clearer. I guess my criticism of mesa-optimisers was that they talked about an explicit representation of the objective function (whatever that means), whereas I think my definition relies more on the values of choices being represented. I don’t know how much of an improvement this is.
Your definition of goals looks like a more constrained utility function, defined on equivalence classes of states/outcomes as abstracted by the agent’s internal concepts. Is that correct?
I don’t really know what it means for something to be a utility function. I assume you could interpret it that way, but my definition of goals also includes deontological goals, which would make that interpretation harder. I like the “equivalence classes” thing more, but I’m not confident enough about the space of all possible internal concepts to claim that it’s always a good fit.
Do you have an idea of what specific properties such utility functions could have as a consequence?
I expect that asking “what properties do these utility functions have” will generally be more misleading than asking “what properties do these goals have”, because the former gives you an illusion of mathematical transparency. My tentative answer to the latter question is that, due to Moravec’s paradox, they will have the properties of high-level human thought more than those of low-level human thought. But I’m still pretty confused about this.
We should categorise things as goal-directed agents if it scores highly on most of these criteria, not just if it scores perfectly on all of them. So I agree that you don’t need one goal forever, but you do need it for more than a few minutes. And internal unification also means that the whole system is working towards this.
If coherence is about having the same goal for a “long enough” period of time, then it makes sense to me.
By “sensitive” I merely mean that differences in expected long-term or large-scale outcomes sometimes lead to differences in current choices.
So the thing that judges outcomes in the goal-directed agent is “not always privileging short-term outcomes”? Then I guess it’s also a scale, because there’s a big difference between a system with a single case where it privileges long-term outcomes over short-term ones and a system that focuses on long-term outcomes.
Yeah, I think there’s still much more to be done to make this clearer. I guess my criticism of mesa-optimisers was that they talked about explicit representation of the objective function (whatever that means). Whereas I think my definition relies more on the values of choices being represented. Idk how much of an improvement this is.
I agree that the explicit representation of the objective is weird. But on the other hand, it’s an explicit and obvious weirdness that calls for either clarification or changes. Whereas in your criteria, I feel that essentially the same idea is made implicit/less weird, without actually bringing a better solution. Your approach might be better in the long run, possibly because rephrasing the question in these terms lets us find a non-weird way to define this objective.
I just wanted to point out that, in our current state of knowledge, I feel like there are drawbacks to “hiding” the weirdness like you do.
I don’t really know what it means for something to be a utility function. I assume you could interpret it that way, but my definition of goals also includes deontological goals, which would make that interpretation harder. I like the “equivalence classes” thing more, but I’m not confident enough about the space of all possible internal concepts to claim that it’s always a good fit.
One idea I had is to define goals as temporal logic properties (for example in LTL) on states. That lets you express things like “I want to reach one of these states” or “I never want to reach this state”; the latter looks like a deontological property to me. Thinking some more about this led me to see two issues:
First, it doesn’t let you encode preferences of some states over others. That might be solvable by adding a partial order with nice properties, like Stuart Armstrong’s partial preferences.
Second, the system doesn’t have access to the states of the world; it has access to its abstractions of those states. Here we come back to the equivalence classes idea. Maybe a way to cash out your internal abstractions and Paul’s ascriptions of beliefs is through an equivalence relation on the states of the world, such that the goal of the system is defined on the equivalence classes of this relation.
I expect that asking “what properties do these utility functions have” will be generally more misleading than asking “what properties do these goals have”, because the former gives you an illusion of mathematical transparency. My tentative answer to the latter question is that, due to Moravec’s paradox, they will have the properties of high-level human thought more than they have the properties of low-level human thought. But I’m still pretty confused about this.
Agreed that the first step should be the properties of goals. I just also believe that if you get some nice properties of goals, you might know what constraints to add to utility functions to make them more “goal-like”.
Your last sentence seems contradictory with what you wrote about Dennett. I understand it as you saying “goals would be like high-level human goals”, while your criticism of Dennett was that the intentional stance doesn’t necessarily work on NNs because they don’t have to have the same kind of goals as us. Am I wrong about one of those opinions?