The organizations one seems like an obvious collider: you got the list by selecting for something like “notability,” to which both intelligence and coherence contribute, so within the selected sample it makes sense that they’re anticorrelated.
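For concreteness, here is a minimal simulation of that selection effect (Berkson's paradox); the variable names and numbers are made up for illustration, and "notability" is just modeled as a noisy sum of the two traits:

import numpy as np

rng = np.random.default_rng(0)

# Toy population: intelligence and coherence are independent traits.
n = 100_000
intelligence = rng.normal(size=n)
coherence = rng.normal(size=n)

# "Notability" is contributed to by both traits (plus noise), and we only
# observe the organizations notable enough to make the list (top 1%).
notability = intelligence + coherence + rng.normal(scale=0.5, size=n)
selected = notability > np.quantile(notability, 0.99)

print("correlation in the full population:",
      np.corrcoef(intelligence, coherence)[0, 1])
print("correlation among the selected:",
      np.corrcoef(intelligence[selected], coherence[selected])[0, 1])

The full-population correlation comes out near zero, while the correlation among the selected organizations is clearly negative, which is exactly the collider pattern described above.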
But I think the rankings for animals/plants aren’t like that. Instead, it really seems to trade on what people mean by “coherence”—here I agree with Unnamed, it seems like “coherence” is getting interpreted as “simplicity of models that work pretty well to describe the thing,” even if those models don’t look like utility maximizers. Put an oak tree in a box with a lever that dispenses water, and it won’t pull the lever when it’s thirsty, but because the overall model that describes an oak tree is simpler than the model that describes a rat, it feels “coherent.” This is a fine way to use the word, but it’s not quite what’s relevant to arguments about AI.
Put an oak tree in a box with a lever that dispenses water, and it won’t pull the lever when it’s thirsty
I actually thought this was a super interesting question, just for general world modelling. The tree won’t pull a lever because it barely has the physical capability to do so and has no prior that pulling it might work, but it could, like, control a water dispenser via sap distribution to a particular branch. In that case, will the tree learn to use it?
Ended up finding an article on attempts to demonstrate learned behavioural responses to stimuli in plants, On the Conditioning of Plants: A Review of Experimental Evidence—it turns out there have been some positive results, but they seem not to have replicated, along with lots of negative results. So my guess is that no, even given direct control, the tree won’t learn to manage its own water supply. More generally, this agrees with the idea that plants lack the information-processing systems to use their tools coherently.
Experiments are mostly done with M. pudica (the “sensitive plant,” Mimosa pudica) because it shows (fairly) rapid movement, closing up its leaves when shaken.
Huh, really neat.