If millions of trolleys are about and millions of people self-sacrifice to fix them, then suicidal fixing can be a valid policy. Baneling ants exist and are selected for.
The impulse to value self-sacrifice might come from the default assumption that people are very good at looking after their own interests. So, at a coarse level, any “self-detrimental” effect is likely to come from complicated or abstract moral reasoning. But then there is the identity-blind kind of reasoning. If you think that people who help others should not be tired all the time, then if person A helps others and is tired, you should arrange for their relaxation. This remains true if person A is yourself. The basic instinct is to favour giving yourself a break because it is hedonistically pleasing, but the reasoning that persons in your position should arrange their affairs in a certain way is a “cold” basis for possibly the same outcome.
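A toy way to see the identity-blindness (the rule and its fields are invented purely for illustration): the verdict is a function of role and state only, so it cannot special-case the self.

```python
def should_get_rest(person):
    # The rule mentions only properties (helps others, is tired),
    # never whose properties they are.
    return person["helps_others"] and person["tired"]

someone_else = {"helps_others": True, "tired": True}
myself = {"helps_others": True, "tired": True}

# Identity-blind by construction: the same verdict whether the
# argument happens to be another person or yourself.
assert should_get_rest(someone_else) == should_get_rest(myself)
```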
A policy that good people should commit suicide just because they are good is a very terrible policy. But the flip side is that some bad people will pay unspeakable costs to gain real percentage points of survival. People have a right to life, even in an extended “smaller things than life-and-death” way. But life can be overvalued, and most real actions carry a slight chance of death.
Then there is the issue of private matters versus public matters. If there is a community of 1000 people that has one shared issue involving the life and death of 100 people, and each person has a private matter involving 1 different person, then by one logic everybody sticking to their own business saves 1000 people vs 100, and by another logic a person doing public work over private work saves 100 people vs 1 person. However, if 100 persons do public work at the cost of their private work, then it is a choice between 100 and 100 people. Each of those can think they are being super-efficient 100:1 heroes. And those that choose a select few close ones can seem like super-inefficient 1:100 ones.
Your last paragraph doesn’t make much sense to me. I think you need to specify how much needs to be done in order to resolve that one shared issue. If it requires the same investment from all 1000 people as they’d have put into saving those single individual lives, then it’s 1000 people versus 100 people and they should do the individual thing. If it requires just one person to do it, then (provided there’s some way of selecting that person) it’s 1 person versus 100 people and someone should do the shared thing. If it requires 100 people to do it, then as you say it’s a choice of 100 versus 100 and other considerations besides “how many people saved?” will dominate. But none of this is really about private versus public, and whether someone’s being efficient or inefficient in making a particular choice depends completely on that how-much-needs-to-be-done question that you left unspecified.
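A quick sketch of those three cases, treating the shared issue as all-or-nothing and using the numbers above (the function and its parameters are just illustrative):

```python
def total_lives_saved(required, volunteers, population=1000, shared_stakes=100):
    # All-or-nothing shared issue: resolved only if at least
    # `required` people work on it; everyone else saves one person.
    public = shared_stakes if volunteers >= required else 0
    private = population - volunteers
    return public + private

# Requires all 1000: individual lives win, 1000 vs 100.
print(total_lives_saved(required=1000, volunteers=0))     # 1000
print(total_lives_saved(required=1000, volunteers=1000))  # 100

# Requires just 1: someone should do the shared thing, 1099 vs 1000.
print(total_lives_saved(required=1, volunteers=1))        # 1099

# Requires 100: a wash, and other considerations dominate.
print(total_lives_saved(required=100, volunteers=100))    # 1000
```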
(There are public-versus-private issues, and once you nail down how much public effort it takes to resolve the shared issue then they become relevant. Coordination is hard! Public work is more visible and may motivate others! People care more about people close to them! Etc., etc.)
Why is it mandatory? What happens if I don’t specify?
I wrote it as weighing the importance, but I had an inkling it is more a question of how much gets done. If one has access to accurate effort information, then utilitarian calculus is easy. However, sometimes there are uncertainties about it, and some logics do not require or access this information. For example, you know exactly how cool it would be to be on the moon, but you have no idea whether it is expensive or super duper expensive, and you would need to undertake a research program during which the costs become clear. Or you could improve healthcare, or increase the even-handedness of justice. So does that mean that because costs are harder to estimate in one field than in others, predictable costs get selected over more nebulous ones? Decisions under big cost uncertainty, with difficulty comparing values, are not super rare. But still, a principle of “if you use a lot of resources for something, it had better be laudable in some sense” survives.
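As a made-up illustration of predictable costs getting selected: two projects of equal known value, one with a point-estimate cost and one whose cost spans orders of magnitude. A planner who scores by the pessimistic end of the cost range picks the predictable one, even though the nebulous one might turn out far cheaper (all numbers invented for the sketch).

```python
VALUE = 100.0  # how cool it would be, known precisely in both cases

# (best case, worst case) costs -- all invented.
predictable_cost = (10.0, 10.0)   # a field where costs are well understood
nebulous_cost = (1.0, 1000.0)     # "expensive or super duper expensive"

def pessimistic_score(value, cost_range):
    # Value per unit cost at the worst end of the range.
    return value / cost_range[1]

print(pessimistic_score(VALUE, predictable_cost))  # 10.0
print(pessimistic_score(VALUE, nebulous_cost))     # 0.1 -- loses under caution,
# even though its best case (value/cost = 100) beats the predictable project.
```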
For example, in the case that an effective selection mechanism is not found, there is a danger that 1 person actually does the job, 1 tries to help but is only half effective, and 98 people stand and watch as the two struggle. In the other direction, a high probability of being a useless bystander might mean that 0 people attempt the job. If everybody just treated jobs as jobs, without distinction by how many others might try them, the jobs with the most “visibility” would likely be overcrowded, or at least overcrowded relative to their actual importance. In a way, what has sometimes been described as a “bias”, dilution of responsibility, can be seen as a hack / heuristic to solve this situation. It balances things so that in a typical-size crowd the expected number of people taking action is a small finite number, by raising the bar to action according to how big a crowd you are in. It is a primitive kind of coordination, but even that helps a lot.
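A minimal sketch of that heuristic (the target of about 2 helpers is an invented parameter): scaling each person’s willingness to act down with crowd size keeps the expected number of helpers roughly constant, where a fixed willingness would swamp visible jobs.

```python
def expected_helpers(crowd_size, target_helpers=2):
    # Raise the bar to action with crowd size: act with
    # probability target_helpers / crowd_size (capped at 1).
    p_act = min(1.0, target_helpers / crowd_size)
    return crowd_size * p_act

def expected_helpers_naive(crowd_size, p_act=0.5):
    # No dilution: everyone keeps the same threshold regardless of crowd.
    return crowd_size * p_act

for n in (2, 10, 100, 1000):
    print(n, expected_helpers(n), expected_helpers_naive(n))
# Diluted: ~2 helpers at every crowd size. Naive: 1, 5, 50, 500.
```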
Overly sacrificial behaviour could be analysed as giving far too much importance to other people’s worries, that is, removing the dilution of responsibility without replacing it with anything more advanced. Somebody who tries to help everybody in a village will, as a small detail, spend a lot of time canvassing across the village, and the transit time alone might cut into efficiency even before considering factors like greater epistemological distance (you spend a lot of time interviewing people about whether they are fine or not) and not being fit for every kind of need (you might be good at carpentry but this one requires masonry). Taking these somewhat arbitrary effects into account, you could limit yourself to a small geographical area (less travelling), do things only upon request (people know their own needs), or only do things you know how to do (do the carpentry for the whole country but no masonry for anyone). All of these move in the direction of some need going unaddressed by you personally.
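Those self-limiting heuristics can be read as filters over needs; in this invented example each filter cuts a real overhead, but together they guarantee some needs go unmet by you personally.

```python
# Invented village needs: (distance_km, help_was_requested, skill_needed).
needs = [
    (0.5, True,  "carpentry"),
    (4.0, True,  "masonry"),     # not my skill
    (0.2, False, "carpentry"),   # never asked
    (9.0, True,  "carpentry"),   # too far
]

MY_SKILL = "carpentry"
MAX_DISTANCE_KM = 1.0  # assumed cutoff for a "small geographical area"

def i_address(need):
    distance, requested, skill = need
    # Each clause saves a real cost: travel, interviewing, skill mismatch.
    return distance <= MAX_DISTANCE_KM and requested and skill == MY_SKILL

print(sum(i_address(n) for n in needs))       # 1 need addressed by me
print(sum(not i_address(n) for n in needs))   # 3 go unaddressed by me
```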
Mandatory? It’s not mandatory. But if you don’t specify then you’re making an argument with vital bits missing.
I agree that utilitarian decision making (or indeed any decision making) is harder when you don’t have all the information about e.g. how much effort something takes.
I also agree that in practice we likely get more efficiency if people care more about themselves and others near to them than about random people further away.
Well, the specification would be “jobs of roughly equal effort”, which I guess I left implicit in a bad way.
I think you are arguing that the conclusion will depend on the efficiency ratios, but I think the shared vs. not-shared property will overwhelm efficiency considerations. Say job efficiency varies between 0.1 and 10, populations are around 10,000 to 100,000, and the shared issue is worth 1,000 lives. Then to an individual, the public option even at bad efficiency (1,000 × 0.1 = 100 lives) seems to beat a private life saved at good efficiency (1 × 10 = 10 lives). Yet at the population level, everyone doing their private option even at bad efficiency (10,000 × 0.1 = 1,000 lives) would be comparable to getting the public option done at all. Thus any issue affecting the “whole” community will seem to overwhelm any private option.
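Roughly the arithmetic I have in mind (the stakes and the efficiency range are assumptions, nothing more):

```python
POPULATION = 10_000
PUBLIC_STAKES = 1_000            # lives at stake in the shared issue
EFF_WORST, EFF_BEST = 0.1, 10.0  # efficiency spans only two orders of magnitude

# Individual view: the public issue wins even on an unfair comparison.
public_at_worst = PUBLIC_STAKES * EFF_WORST   # 100
private_at_best = 1 * EFF_BEST                # 10

# Population view: everyone doing private work at *bad* efficiency
# already matches getting the public option done at all.
all_private_at_worst = POPULATION * 1 * EFF_WORST  # 1000 == PUBLIC_STAKES

print(public_at_worst, private_at_best, all_private_at_worst)
```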
It is crucial that the public task is finite and shared. If you could start up independent “benefit all” extra projects (and get them done alone), the calculus would be right. One could also try to point out the error via the “marginal result”: yes, it is an issue of 1000 lives, but if your participation doesn’t make or break the project then it has zero impact, so one should be indifferent rather than thinking it is of the utmost importance. If the project can partially succeed, then your impact is the increase in success, not the total success. Yet when you think about something like “hungry people in Africa”, your mind probably refers to the total issue/success.
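The marginal-result point as a quick sketch, assuming a project whose success scales with participants up to some required number (all numbers invented):

```python
def success_fraction(participants, required):
    # Partial success: each participant up to `required` adds 1/required.
    return min(participants, required) / required

def marginal_impact(participants, required, stakes=1000):
    # Lives attributable to you = total with you minus total without you.
    with_me = stakes * success_fraction(participants + 1, required)
    without_me = stakes * success_fraction(participants, required)
    return with_me - without_me

print(marginal_impact(participants=50, required=100))   # 10.0 lives
print(marginal_impact(participants=150, required=100))  # 0.0 -- the project
# is already carried; your joining neither makes nor breaks it.
```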
If I ask what the circumference of a circle is, a lot of people would accept pi as the answer. Somebody could insist that I give the radius as essential information for determining how long the circumference is. Efficiency is similarly not essential to the phenomenon I am trying to point out.
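That is, pi answers the scale-free version of the question; the radius is only needed for an absolute length:

```latex
% Pi is "circumference per unit of diameter", no radius required:
C = \pi d \quad\Longrightarrow\quad \frac{C}{d} = \pi \quad \text{for every } d
```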