This makes me think of Daniel Dennett’s reasons. He argues that many systems have reasons for what they do that come from design, where design can be either explicit design by humans or implicit design by something like evolution. And the big idea, going back to Darwin, is that you can have reasons without comprehension: a system (for example an animal, or the rabbit mold) can have reasons for specific behaviors that were designed (by evolution or by humans), without the ability to understand those reasons and adapt its behavior accordingly.
Yet this hard-coding of reasons seems to have limits. Or more generally, for systems that need to adapt to a wide enough range of tasks, adding understanding (comprehension) to the mix is far more efficient. So I do expect something like transformative AI to have some modicum of flexible intelligence, if only because this will make it better at the really complex tasks (like language) than the competition.
Yes, that’s a great intuition pump of his – the ‘intentional stance’, i.e. many systems act as if they had reasons or purpose.
Actually, I would say that this is more about the design stance than the intentional stance. For example, the rabbit mold becomes easier to understand by going from the physical stance to the design stance, but not by going from the design stance to the intentional stance.
That being said, Dennett is pretty liberal with his definition of intentional systems, which can encompass pretty much anything that can be predicted through the intentional stance (whether it’s useful compared to the design stance or not).
But to go back to the topic of the post, even Dennett kind of agrees that systems with no comprehension, or only limited comprehension, of their reasons are less intentional than systems that understand their own reasons.
I didn’t understand (or remember) Dennett’s distinction between the design and intentional stances. I was thinking of design as a feature or part of intentional systems, e.g. a rabbit mold or the intricate structure of a (living) rabbit’s leg. Both seem to be for some purpose.
After skimming the Wikipedia article on the intentional stance I realized I was thinking of ‘design stance’ as you correctly pointed out.
Maybe I was conflating the two because of the idea that a sufficiently complicated design might seem (or even be usefully modeled as) intentional? Like thinking of Nature as an intentional system designing rabbits (and people who then design rabbit molds).