I was specifically talking about the conclusion that we shouldn’t talk about objectives/goals.
Yeah, sorry, I ninja-edited my comment before you replied because I realized I misunderstood you.
To be clear, I think there are times when people say “Alice is clearly trying to do X” and my response is “what do you predict Alice would do in future situation Y” and it is not in fact X, so I do think it is not crazy to say that even for humans you should focus more on predictions of behavior and the reasons for making those predictions. But I agree you wouldn’t want to avoid talking about objectives / goals entirely.
Or would you say “We don’t have past experience of goal-talk being useful for understanding these creatures, and we also shouldn’t expect introspection to work well for predicting them, therefore let’s avoid saying that these aliens/octopuses have goals/intentions/objectives/etc, and instead talk directly about generalization behavior in novel situations”?
Yup!
Though in the octopus case you could have lots of empirical experience, just as we will likely have lots of empirical experience with future AI systems.
I do think it’s quite plausible that in these settings we’ll say “well they’ve done X, we know nothing else about them, so probably we should predict they’ll continue to do X”, which looks pretty similar to saying they have a goal of X. I think the main difference is that I’d be way more uncertain about that than it sounds like you would be.