Should I have used a different word? Probably! But I will now proceed to a complex justification of my word choice anyway!
A lot of philosophy seems to be coming up with explicit definitions that fit our implicit mental categories—see Luke’s post on conceptual analysis (which I might be misunderstanding). Part of this project is the hope that our implicit mental categories are genuinely based off, or correspond to, an explicit algorithmizable definition. For example, one facet of utilitarianism is the hope that the principle of utility is a legitimate algorithmization of our fuzzy mental concept of “moral”.
This kind of philosophy usually ends up in a give-and-take, where for example Plato defines Man as a featherless biped, and Diogenes says that a plucked chicken meets the definition. Part of what Diogenes is doing is saying that if Plato’s definition were identical to our implicit mental category, we would implicitly common-sensically identify a chicken as human. But we implicitly common-sensically recognize a chicken is not human, therefore our minds cannot be working off the definition “featherless biped”.
This is the link between defining and predicting. Plato has proposed a theory: that when the mind evaluates humanity, it uses a featherless-biped detector. Diogenes is pointing out that Plato’s theory makes a false prediction: that people would implicitly recognize a plucked chicken as human. Since they don’t, Plato’s theory is disproved, and so the definition is wrong.
I suppose this must be my mental concept of what we’re doing when defining a term like “self”, which is what impels me to use “define” and “predict” in similar ways.
Was the irony intentional? If not, that is just priceless!
Humans being what they are, when they define things it will inevitably tend to influence what predictions they make. Where a boundedly rational agent prescribed a terrible definition would merely be less efficient, a human will also end up with biased predictions when reasoning from that definition. Also, as you note, declaring a definition can sometimes imply a prediction: that the definition matches the mental concept while also carving reality effectively at its joints.
The above being the case, definitions can and should be dismissed as wrong. This is definitely related to the predictions that accompany them. This is approximately a representation of the non-verbal reasoning that flashed through my mind, prompting my own rejection of the ‘self as future folks you care about and can influence’ definition. It is also what flashes through my mind when I consider why I must reject any definition of ‘define’ and ‘predict’ which doesn’t keep the two words distinct. Just because ‘human’ is closely related to ‘featherless biped’ doesn’t mean they are the same thing!
I suppose this must be my mental concept of what we’re doing when defining a term like “self”, which is what impels me to use “define” and “predict” in similar ways.
Just so long as you don’t mind if you mislabel a whole lot of plucked chickens.
Understanding the various relationships between definitions and predictions is critical for anyone trying to engage in useful philosophy. But it isn’t helpful to just mush the two concepts together. Instead we can let our understanding of the predictions involved govern how we go about proposing and using definitions.
I don’t agree that the definition is terrible; I agree it’s incomplete. My point boils down to this: we should include “caring about” in our intuitive definition of future selves, rather than using some other definition and wondering whether we can deduce caring from it. Humans do generally care about their future selves, so if we omit that from the definition, we’re talking about something else.
Predictions seem to be a different kind of thing from definitions. The definition is terrible, but it, well, by definition doesn’t make a prediction.