The AI doesn’t have a model of what it should do; the AI is the model.
This of course generalizes: beings don’t have models; they are the models, in the sense that the model exists as a post hoc reification of what the being is.
I think you are tackling an important problem of intuitions here. Since the votes suggest a mixed reception (9 votes before mine, for a total score of 5) and there are no other comments, I’m left to speculate about what people might dislike about this post.
My guesses are:
generally “booing” things that veer too far into Continental philosophy territory (via talk of intentionality)
disagreement that intentionality matters in AI
disagreement with the embodied/embedded approach that breaks down neat abstractions
some other dislike of how you wrote this
I want to encourage you to keep at it. I generally agree with what you’ve written in this vein so far. I also continue to see broad approval of approaches to AI that treat their abstractions as absolute, in ways that will break down dangerously because those abstractions are leaky. So I view writing like yours as an important corrective, shifting intuitions so that they better reflect the embeddedness and interconnectedness of agents.