So people pretend to care about others because this might cause others to actually try to help them? It’s a plausible theory of human behavior, but seems awfully complicated to describe the mental processes of people we are all but explicitly told are too stupid to consistently implement their preferences.
In other words, there could actually be a reason that people think caring is the right thing to do, but trivial inconveniences and other errors of thinking prevent them from actually doing what they really think is right. This seems like a better description of most folks’ mental processes than “doesn’t care, and knows it”—which is the implication I get from the response sentence.
I would say the likeliest explanation is that people do care, but only insofar as it enables them to signal that they care. Caring much further than that is pretty much pointless, from an evolutionary perspective, and probably actively detrimental.
Unless, of course, the machinery for caring is much simpler when it’s simply “care” vs “not care”. Pretending to care could be a much more complicated neurological adaptation that would be more wasteful than just implementing a nice “Sympathy” subsystem.
I mean, the way humans model each other’s behavior is by imagining ourselves in other people’s scenarios, and then making minor adjustments for accuracy’s sake, since they think a little differently. Why would you invent an entire subsystem just for understanding other people? That’s insane! YOU HAVE AN ENTIRE BRAIN ALREADY, AND EVERYBODY’S BRAIN IS REALLY DAMN SIMILAR, RIGHT?
Now, once you have this “self modeling and adjustment” system in place, actual caring makes a lot of sense. oh, pretending to care about your family and tribe is useful? here, i’ll just slap on an extra module here. we’ll call it Sympathy. It kinda works like this: you’ve got that model of your brother you use to predict his behavior, right? It’s running on the same hardware YOUR mind is, and you care about YOUR mind, right? so we’ll just switch that “care” knob to the on position for your bro, alright? There. ha. now he’ll respond by viewing you as a valuable resource!
That “Really Intelligent Sociopath Who Accurately Predicts Every Scenario In Which Pretending To Care Is Useful And Rarely Makes Wasteful Mistakes Regarding That Decision” module seems a little bit costly to implement. Honestly, it’s just easier to actually freaking care about other people, albeit in a silly, ignorantly applied human way.
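The cost contrast being argued here can be sketched as a toy program (all names here are my own illustration, not anything from the thread): the Sympathy route just reuses the existing person-model and flips one flag, while the pretend-to-care route needs a whole separate decision procedure that evaluates every scenario for profitability.

```python
class PersonModel:
    """Stand-in for the reused self-model: same prediction machinery,
    plus a single 'care' knob."""
    def __init__(self, name, care=False):
        self.name = name
        self.care = care

def sympathy(model):
    # The cheap adaptation: flip one flag on the model you already run.
    model.care = True
    return model

def pretend_to_care(scenario_payoffs):
    # The costly alternative: a separate module that must evaluate each
    # scenario and fake concern only where it pays off.
    return {scenario: payoff > 0 for scenario, payoff in scenario_payoffs.items()}

# One-knob caring: always on, no per-scenario computation.
brother = sympathy(PersonModel("brother"))
print(brother.care)

# Selective faking: needs payoff estimates (a hard problem in itself).
acts = pretend_to_care({"public praise": 2, "private sacrifice": -1})
print(acts)
```

The point the sketch makes is just the one above: the sociopath module carries a per-scenario computation (and the much harder, unmodeled problem of estimating those payoffs accurately), where genuine caring is a constant.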
(I side with Harry on this one, in case you couldn’t tell.)
Right, but how would they even know that caring is the thing they’re supposed to pretend to do?
Because if you care about someone else (i.e. put a value on protecting and aiding that person), you become a resource worth preserving to that person.
I dunno.