To me, the solution to this problem is not to rely too much on raw consequentialism for dealing with real-life situations: I know my model of the world is imperfect, I lack the computing power to track all the consequences of an action and evaluate their utility, and I don’t even know my own utility function precisely.
So I’m trying to devise ethical rules that derive partly from consequentialism, but that also take into account lessons learned from history, both my own personal experience and humanity’s history. Those rules say, for example, that I should not kill someone even if I think it’ll save 10 lives, because usually when you do that, either you kill the person and fail to save the 10 others, or you failed to think of a way to save the 10 without killing anyone, or you create far-reaching consequences that in the end cost more than the 10 saved lives (for example, by breaking the “don’t kill” taboo and leading people to follow your example even in cases where they’ll fail to save the 10). That’s less optimal than using consequentialism wisely, but it’s also much less error-prone, at least for me, than trying to wield a dangerous tool I’m not smart or competent enough to wield.
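To make that concrete, here is a toy expected-value sketch in Python. Every probability and the “taboo cost” figure are invented purely for illustration, not claims about real-world frequencies; the sketch only shows how the naive calculation can flip sign once model error and norm erosion are priced in.

```python
# Toy expected-value sketch of the "kill one to save ten" dilemma.
# Every number below is invented purely for illustration.

p_plan_works = 0.5      # chance you actually save the 10 after killing the 1
p_better_option = 0.3   # chance a non-lethal alternative existed that you missed
taboo_cost = 8.0        # expected lives lost later from eroding the
                        # "don't kill" norm (others copy you and fail)

# Acting: you always lose 1 life, save 10 only if the plan works,
# and pay the norm-erosion cost.
ev_act = p_plan_works * 10 - 1 - taboo_cost

# Rule-following: you never kill; the 10 are still saved in the cases
# where the overlooked non-lethal option existed and gets found.
ev_rule = p_better_option * 10

print(f"acting:         {ev_act:+.1f} expected lives")   # -4.0
print(f"rule-following: {ev_rule:+.1f} expected lives")  # +3.0
```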
That’s quite similar to the way we can’t use QM and GR to design planes: we use simpler, “higher-level” laws instead, which are less precise but much more computable, and good enough for our needs. I acknowledge that the core of physics is QM and GR and that the rest are just approximations, but we use the approximations because we can’t wield raw QM and GR for most daily-life problems. Likewise, I acknowledge consequentialism to be the core of ethics, but I think we need approximations there too, because we can’t wield consequentialism directly in real life either.
More precisely, the core of our current best available (but still known to be flawed) physics is QM and GR, and we do not even have a consistent model fully incorporating both.
Furthermore, we can’t model anything more complicated than a hydrogen atom with QM without resorting to approximations, and by the time you get to something as complicated as bulk matter or the atomic nuclei of heavy elements, we can’t even verify that the predictions of QM match what we in fact observe.
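For a concrete sense of how quickly exactness runs out, here is the standard textbook first-order perturbation estimate of helium’s ground-state energy (using the usual Ry ≈ 13.6 eV). Even this canonical “first step beyond hydrogen” misses the measured value by about 5%:

```python
# First-order perturbation estimate of the helium ground-state energy,
# the classic example of QM needing approximations beyond hydrogen.

RY = 13.6  # hydrogen ground-state binding energy, eV
Z = 2      # helium nuclear charge

# Zeroth order: two independent electrons in a hydrogen-like potential.
e_zeroth = 2 * (-Z**2 * RY)          # = -108.8 eV

# First-order correction from electron-electron repulsion: (5/4) * Z * Ry.
e_repulsion = (5 / 4) * Z * RY       # = +34.0 eV

e_estimate = e_zeroth + e_repulsion  # = -74.8 eV
e_measured = -79.0                   # eV, experimental value

print(f"estimate: {e_estimate:.1f} eV, measured: {e_measured:.1f} eV")
# Off by roughly 5% for the second-simplest atom in existence;
# exact closed-form solutions stop at hydrogen.
```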
Very true, but we can test at least some multiple-particle predictions by attempting to build a small quantum computer.
From what I understand, we have more than one. We just don’t know which, if any, is correct.
We have some plans (including a few radically different from anything we are used to, which is good) for how to build such a model. I wouldn’t call these plans models of anything yet: QM and GR can help us predict the behaviour of the precise tools we use, whereas these plans are not yet concrete enough to allow useful modelling.
And some of them have so damn many free parameters that it would be hard to rule them out, yet they have hardly any predictive power.
I think the biggest problem with killing someone is that you’re likely to get arrested, which prevents you from saving hundreds of lives.
In general, the best way to get something done is to pay someone who’s better at it than you. As such, you can fairly accurately simplify the problem into thinking about how to earn money, and thinking about where to donate that money. These are things you can generally think about once, rather than on a case-by-case basis.
That specialization of labor helps a lot doesn’t mean that extreme specialization helps just as much. There are so many issues involved with letting someone else do something for you (finding/choosing the person, trust, explaining what you need done, getting them to the right place, schedule/calendar issues, negotiating the price, legal issues, …) that for many things it’s less efficient to pay someone to do it than to do it yourself, even if, for the core of the task, a specialized person would be more efficient.
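A toy break-even calculation, with invented wages and hours, shows how that overhead can swamp the specialist’s advantage:

```python
# Toy break-even check: is it worth paying a specialist once overhead
# (finding, briefing, scheduling, negotiating, ...) is counted?
# All numbers are invented for illustration.

my_wage = 25.0            # $/h you could earn instead
my_hours = 5.0            # hours the task takes you
specialist_rate = 60.0    # $/h the specialist charges
specialist_hours = 2.0    # hours the core task takes them
overhead_hours = 2.5      # your hours lost to finding/briefing/scheduling

cost_diy = my_hours * my_wage
cost_outsource = (specialist_hours * specialist_rate
                  + overhead_hours * my_wage)

print(f"DIY:       ${cost_diy:.2f}")        # $125.00
print(f"Outsource: ${cost_outsource:.2f}")  # $182.50
# The specialist is 2.5x faster at the core task, yet the overhead
# makes doing it yourself cheaper here.
```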
Also, you have to consider willpower/akrasia/enjoyment-related issues. For example, many people feel much more motivated fixing their own house than fixing someone else’s house, so even if you need more time to do it than a professional would, you could still feel better doing it yourself than working (even fewer) extra hours at your job and paying someone to do it. Oversimplifying things like that just doesn’t work in real life.
And finally, you have to consider emergency situations. Trolley-like situations are emergency situations, like seeing someone being mugged, or someone drowning, or whatever; in those, you just don’t have the option of paying someone to act for you.
There are so many issues involved with letting someone else do something for you (finding/choosing the person, trust, explaining what you need done, getting them to the right place, schedule/calendar issues, negotiating the price, legal issues, …) that for many things it’s less efficient to pay someone to do it than to do it yourself

There would be vastly more things like this if specialization wasn’t normal. That’s what I meant when I said that it works better the more it’s done. There are things it’s better to do yourself, but most things aren’t like that.
And finally, you have to consider emergency situations.

The benefits from a situation not covered by the rule I gave earlier are very small. If you’re in a situation where acting would be remotely dangerous, don’t. If acting would be perfectly safe, go ahead. If it would be very slightly dangerous, then you’re likely to be better off doing a Fermi calculation.
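Here is the sort of one-minute Fermi calculation meant here, sketched in Python with made-up numbers:

```python
# Fermi-style estimate: should you intervene in a "very slightly
# dangerous" emergency? All numbers are made up for illustration.

p_save = 0.7         # chance your intervention saves the victim
p_die = 0.001        # chance you die intervening
future_impact = 3.0  # lives' worth of good you expect to do if you live
                     # (e.g. via future earnings donated effectively)

ev_intervene = p_save * 1.0 - p_die * (1.0 + future_impact)
print(f"expected lives from intervening: {ev_intervene:+.3f}")
# +0.696 here, so intervene; crank p_die up to roughly 0.18 and the
# sign flips. The point is only that a quick estimate beats both
# reflexive heroism and reflexive caution in the borderline cases.
```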
In general, the best way to get something done is to pay someone who’s better at it than you. As such, you can fairly accurately simplify the problem into thinking about how to earn money

I’m not sure this generalizes well: would this work if everybody were doing it? (It might.)
Without specialization of labor, the world simply would not support this many people. Billions would die.
It generalizes well. In fact, the more people do it, the better it works.
But I wonder whether thinking “How can I earn money?” produces specialization of labor as well as, or better than, thinking “What’s interesting to me?”
It might result in people trying and failing to do things that pay a lot, rather than trying and succeeding at things they’re well-suited for.
If you know that will be a problem, I think you’re smart enough to figure out that you have to do something you’re interested in. If not, you’re not going to come up with this as a guideline in the first place.