I think the following two propositions are different. 1. “Quick decisions are not made by something ‘self-like’”. 2. “Quick decisions are made in a way that has nothing to do with your ethics.” #1 is probably true, at least if “quick” is quick enough and “self-like” is narrow enough. But #2 doesn’t follow from it. In the sort of situation you describe, for sure you won’t be pondering the ethics of the situation—but that doesn’t mean that the lower-level systems that decide your actions have nothing to do with your ethics.
I don’t know for sure whether they do. (I wonder whether there’s been research on this.) But the following (related) things seem at least plausible to me. I am not claiming that any specific one is true, only that they all seem plausible, and that to whatever extent they’re true we should expect rapid decisions and ethics to be related.
Part of what “being good” is is having the bits of your brain that generate plans be less inclined to generate plans that involve harming people.
Part of what “being good” is is having what happens to other people be more salient, relative to what happens to you, in your internal plan-generating-and-assessing machinery.
[clarification: I don’t mean that being good means having other people matter more than yourself, I mean that since almost all of us notice our own interests much more readily than others, noticing others’ interests more generally goes along with being more “good”.]
If you try to make yourself a better person, part of what you’re trying to do is to retrain your mental planning machinery to pay more attention to other people.
It is in fact possible to do this: higher-level bits of you have some ability to reshape lower-level bits. (A trained artist sees the world differently from muggles.)
Even if in fact it’s not possible to do this, people for whom what happens to other people is more salient tend to become people who are “good” because when they reflect on what they care about, that’s what they see.
The more you care about other people’s welfare, the more the (somewhat conscious) process of learning how to drive (or how to do other things in which you might sometimes have quick decisions to make that affect others) will be directed towards not harming others.
As a separate but related point, avoiding accidents in which other people get hurt is not just a matter of what you do at the last moment. It’s also about what situations you, er, steer yourself towards on longer timescales. For instance, caring about the welfare of the driver in front of you will make you less inclined to drive very close behind them (because doing so will make them uncomfortable and may put them at more risk if something unexpected happens), which will bias those split-second decisions away from ones where more of the options involve colliding with the vehicle in front of you. And that is absolutely the sort of thing that autopilot systems can have different “opinions” about.
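(To make that last point a bit more concrete, here's a rough back-of-the-envelope sketch. None of it is from the comment above; the speed, reaction time, and equal-braking assumption are purely illustrative. The idea is that if you and the car ahead can brake about equally hard, the separation you lose is roughly the distance you travel during your own reaction delay, so a following gap shorter than that delay leaves essentially no non-collision options for the split-second decision to choose among.)

```python
# Back-of-the-envelope sketch (assumed numbers, not measurements):
# if the car ahead brakes hard and you can brake about as hard,
# you lose roughly speed * reaction_time of separation before your
# own braking even starts.

speed = 30.0         # m/s, roughly motorway speed (assumption)
reaction_time = 1.5  # s, rough human reaction delay (assumption)

distance_lost = speed * reaction_time  # metres travelled before you start braking

for gap_seconds in (0.5, 1.0, 2.0):    # following distance, in seconds of travel
    gap_metres = speed * gap_seconds
    margin = gap_metres - distance_lost
    verdict = "some room left to act" if margin > 0 else "almost no non-collision options"
    print(f"{gap_seconds:.1f} s gap ({gap_metres:.0f} m): margin {margin:+.0f} m -> {verdict}")
```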
There was actually an example of that just recently. Some bit of Tesla’s automated driving stuff (I’m not sure whether it’s the ill-named “full self-driving”, or something that’s present on all their cars; I think the former) has three settings called something like “cautious”, “normal”, and “assertive”. If you select “assertive”, then when approaching a stop sign the car will not necessarily attempt to stop; rather, if it doesn’t detect other vehicles nearby it will slow down but keep going past the sign. It turns out that this is illegal (in the US; probably in other jurisdictions too, but the US is the one I heard about) and Tesla have just announced a recall[1] of tens of thousands of vehicles to make it not happen any more. Anyway, since Tesla’s ability to detect other vehicles is unfortunately less than perfect, this is a “strategic” choice that when made makes “tactical” emergency decisions more likely and more likely to involve harm to other people.
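(For what it's worth, here is a toy sketch of how a "strategic" setting like that can shape "tactical" behaviour. The mode names come from the reporting above, but the function, its logic, and the decision rule are purely hypothetical, not Tesla's actual implementation.)

```python
# Hypothetical sketch: a mode-dependent stop-sign policy.
# The point is that the "assertive" setting only rolls the stop when the
# perception system reports no nearby vehicles, so imperfect detection
# turns a strategic preference into extra tactical risk for other people.

def approach_stop_sign(mode: str, vehicles_detected_nearby: bool) -> str:
    """Return the planned action when approaching a stop sign (illustrative only)."""
    if mode == "assertive" and not vehicles_detected_nearby:
        return "slow down and roll past the sign"
    return "come to a complete stop"

# "No vehicles detected" is not the same as "no vehicles present":
print(approach_stop_sign("assertive", vehicles_detected_nearby=False))  # rolls through
print(approach_stop_sign("cautious", vehicles_detected_nearby=False))   # stops anyway
```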
[1] Although this is the word they use, I think all it means in practice is that they’re notifying people with those cars and doing an over-the-air update to disable this behaviour.
This is beside the point, but your example about Tesla rolling back their autopilot assertiveness to comply with the law made me realize a hidden risk of automation: it makes laws actually enforceable. Perhaps the most important civilizational defense against bureaucracy clogging up its arteries, common-sense non-compliance, is being removed. This is a terrible precedent, and it has suddenly made self-driving technology much less appealing to me.
There are some senses here that tend to be entangled and can be hard to tear apart. This might be shooting off on a tangent.
In Magic: The Gathering's color system it's easy to have positive associations with the color white, but white does not mean good. The properties and skills described in the parent post are prosocial, and it is sensible to have a system that places great value on these things. But white can also be evil, and the things that white calls evil are not necessarily so.
In Dungeons and Dragons one might play a character whose alignment is Evil. But every character is the hero of their own story, and as the player one has to wonder what kind of psychological principles go into how the character chooses. To me, an evil character is essentially one who lives for themselves: if their win condition is also another being's lose condition, they choose to win.
In Babylon 5 the shadier side has a theme of asking their negotiating partners “What do you want?” and then either pointing out a course of action that gets them that or offering a deal that gets them that. On its face this seems neutral, even like a definition of moral contemplation. However, it takes on antagonistic shades in that the cost of the deal is often great destruction or betrayal. And when they are not offering deals but merely pointing out a way to get the outcome, acting that way has great externalities for other actors. The logic goes something like: “The thing that you want is possible and in your power”, “You do not choose to receive that outcome”, “So do you actually want the thing or not?”, “Probably not, because you turned it down”. This can bait the universe's occupants into forming a narrower will than they otherwise would: “Yes, I actually do want the thing and will bite whatever bullet.”
In this kind of “black morality” there is a corresponding skill of “being effective”: being more aware of whether your actions are furthering your interests, as opposed to what others ask for and care about. If you know you will get what you want and don't know what your effect on others is, Black is perfectly happy to be effective. In contrast, if White knows that others are not harmed and doesn't know what it wants out of life, White is perfectly happy to be safe and inoffensive. Of course, with increased awareness fewer details are left to ignorance and more come under the influence of conscious choice.
In Upload they live in a world where automobiles have a setting of “protect occupant” or “protect pedestrian”. I think having to make this choice is good, but I don't know whether one option or the other can be condemned. In particular, I am not sure it is proper to try to make people choose “protect others”. Forbidding self-preservation is not a good idea. People should be able to trade their preservation against other goods, but it should be their choice.
But yeah, the point was that “good citizen” is separate from “good person”, and moral progress can look like deconstructing the bits where you are prosocial by ignorance or accident. Or rather, instead of being a balance between self and others, caring-about-self and caring-about-others can be skills that are strong together. But suppressing or dismissing caring-about-self is seldom productive. It's more that the opportunity cost of skipping out on growing self-awareness is usually a sensible price for furthering the rarer caring-about-others.