The posters in this thread aren’t making a good-faith attempt to examine these topics in an unbiased way. This looks to me like a clear example of a group in interaction, sharing some common biases (Lenin bad, us better, “personal moral responsibility” must be defended as existing for us to make these status constructions), working overtime to hide the bias — I suspect as much from yourselves as from any third party.
I’d recommend a simpler approach. (1) We may or may not have individual agency. (2) We may or may not be capable of making choices, even though we may experience what feels like making choices, anguishing over choices, and so on. Kids playing videogames on autoplay seem to experience what feels like making choices, too. (3) Let’s try to work together not to die (as Tim Russert just did) in the next 100 years, and onward. Let’s not try to save everyone alive, everyone who ever lived, or everyone who will be born. Let’s focus on working together with those of us who want to persist and have something to contribute to the rest, and do our best to make it happen.
As for “moral responsibility”: with regard to how smart people treat each other, it’s just a layer of Straussian inefficiency; with regard to how smart people treat everybody else, it’s a costly status game smart people play with each other. Let’s award status directly, based on what a given person is doing to maximize persistence odds for the rest of us.