Thanks for the proposed idea!
Yet I find myself lost when trying to find more information about this concept of care. It is mentioned in both the chapter on Heidegger in The History of Philosophy and the section on care in the SEP article on Heidegger, but I don’t get a single thing written there. I think the ideas of “thrownness” and “disposedness” are related?
Do you have specific pointers to deeper discussions of this concept? Specifically, I’m interested in new intuitions for how a goal is revealed by actions.
Okay, so here’s a more adequate follow-up.
In this seminal cybernetics essay, a way of thinking about this is laid out.
First, they consider systems that have observable behavior, i.e., systems that take inputs and produce outputs. Such systems can be either active, in that the system itself is the source of energy that produces the outputs, or passive, in that some outside source supplies the energy to power the mechanism. Compare an active plant or animal to something passive like a rock, though obviously whether something counts as active or passive depends a lot on where you draw the boundary between its inside and its outside.
Active behavior is subdivided into two classes: purposeful and purposeless. They say that purposeful behavior is that which can be interpreted as directed to attaining a goal; purposeless behavior is that which cannot. They spend some time in the paper defending the idea of purposefulness, and I think it doesn’t go well. So I’d instead propose we think of these terms differently: I prefer to think of purposeful behavior as that which creates a reduction in entropy within the system and its outputs, and purposeless behavior as that which does not.
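To make that alternative definition slightly more concrete, here’s a minimal sketch of one way it could be operationalized (this is my own illustration, not anything from the paper; the “gathering” system and the choice of Shannon entropy over locations are assumptions): compare the entropy of the distribution of stuff the system acts on before and after it acts.

```python
import math
from collections import Counter

def shannon_entropy(locations):
    """Shannon entropy (in bits) of the distribution of items over locations."""
    counts = Counter(locations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical "purposeful" system: an agent that gathers scattered items
# into one place, concentrating the distribution.
scattered = ["bin_a", "bin_b", "bin_c", "bin_d", "bin_a", "bin_c"]  # before it acts
gathered = ["bin_a"] * len(scattered)                               # after it acts

print(shannon_entropy(scattered))  # ~1.92 bits: items spread out
print(shannon_entropy(gathered))   # ~0 bits: everything in one place

# On this reading the behavior counts as purposeful because entropy went down;
# a purposeless process (say, the items diffusing at random) would tend to
# leave it the same or push it up.
```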
They then go on to divide purposeful behavior into teleological and non-teleological behavior, by which they simply mean behavior that is the result of feedback (and they specify negative feedback) versus behavior that is not. In LessWrong terms, I’d say this is like the difference between optimizers (“fitness maximizers”) and adaptation executors.
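This is my own toy example rather than anything from the paper, but a minimal negative-feedback loop (a thermostat-style proportional controller in Python; the setpoint, gain, and temperatures are made up) shows the sense in which a goal gets revealed by actions: the goal never appears in any single output, but the trajectory of corrections keeps bending toward it.

```python
def thermostat_step(temperature, setpoint, gain=0.5):
    """One step of a negative-feedback (teleological) mechanism:
    the action is proportional to the error and opposes it."""
    error = setpoint - temperature
    heating = gain * error  # output fed back so as to reduce the error
    return temperature + heating

# The goal (the setpoint) is never emitted directly, but it is revealed by
# the behavior: the sequence of outputs keeps converging toward 20.
temperature = 10.0
for _ in range(10):
    temperature = thermostat_step(temperature, setpoint=20.0)
    print(round(temperature, 2))
```

A purposeful but non-teleological system, by contrast, would act toward its goal in one shot, without correcting itself based on feedback from its own error.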
They then go on to make a few additional distinctions that are not relevant to the present topic, though they do have some relevance to AI alignment via the predictability of systems.
I’d say then that systems with active, purposeful, teleological behavior are the ones that “care”, and the teleological mechanism is the aspect of the system by which it is made to care.
Doing a little digging, I realized that the idea of “teleological mechanism” from cybernetics is probably a better handle for this, and it provides a more accessible presentation. Some decent references:
https://www.jstor.org/stable/184878
https://www.jstor.org/stable/2103479
https://nyaspubs.onlinelibrary.wiley.com/toc/17496632/50/4
I don’t know of anywhere that presents the idea quite how I think of it, though. If you read Dreyfus on Heidegger you might manage to pick this out. Similarly, I think this idea underlies Sartre’s talk about freedom, but I can’t recall that he explicitly makes the connection in the way I would. To the best of my knowledge, philosophers have unfortunately not said enough about this topic: it’s omnipresent in humans, so it tends to come up incidentally while considering other things, but it is rarely explored deeply for its own sake except when people are confused (cf. Hegel on teleology).