Glad they helped! That's the first time I've used this feature, and we debated whether to add more or remove them completely, so thanks for the feedback. :)
I think depending on what position you take, there are differences in how much one thinks there's "room for a lot of work in this sphere." The more you treat goal-directedness as important because it's a useful category in our map for predicting certain systems, the less important it is to be precise about it. On the other hand, if you want to treat goal-directedness in a human-independent way, or otherwise care about it "for its own sake" for some reason, then it's a different story.
If I get you correctly, you're arguing that there's less work to do on goal-directedness if we try to use it concretely (for discussing AI risk) than if we study it for its own sake? I think I agree with that, but I still believe that we need a pretty concrete definition to use goal-directedness in practice, and that we're far from there. There is less pressure to deal with all the philosophical nitpicks, but we should at least get the big intuitions (of the type mentioned in this lit review) right, or explain why they're wrong.