A few quick thoughts on reasons for confusion:
I think maybe one thing going on is that I already took the coherence arguments to apply only to the step from weakly having goals to strongly having goals, so when you argued against their applicability, I assumed you were talking about that weak-to-strong step. (I’m not sure what arguments people use to get from 1 to 2 though, so maybe you are right that it is also something to do with coherence, at least implicitly.)
It also seems natural to think of ‘weakly has goals’ as something other than ‘goal-directed’, and of ‘goal-directed’ as referring only to ‘strongly has goals’. On that reading, ‘coherence arguments do not imply goal-directed behavior’ (combined with expecting coherence arguments to do their work in the weak-to-strong part of the argument) sounds like ‘coherence arguments do not get you from weakly having goals to strongly having goals’.
I also think separating out the step from no goal direction to weak goal direction, and the step from weak to strong, might help with clarity. It sounded to me like you were considering an argument from ‘any kind of agent’ to ‘strongly goal-directed’ and finding it lacking, and my reaction was ‘but any kind of agent includes a mix of those this force will work on and those it won’t, so shouldn’t it be a partial/probabilistic move toward goal direction?’, whereas you were just meaning to talk about what fraction of existing things are weakly goal-directed.
Thanks, that’s helpful. I’ll think about how to clarify this in the original post.
Maybe changing the title would make people less likely to land on the wrong interpretation? E.g., to ‘Coherence arguments require that the system care about something’.
Even just ‘Coherence arguments do not entail goal-directed behavior’ might help, since the colloquial ‘imply’ tends to be read probabilistically, whereas you mean ‘imply’ in the mathematical/logical sense. Or ‘Coherence theorems do not entail goal-directed behavior on their own’.