So individual autonomy is more important? I just don’t get that. It’s what’s behind the wheels of the autonomous individuals that matters. It’s a hedonic equation. The risk that unaltered humans pose to the happiness and progress of all other individuals might just work out to “way too fracking high”.
It’s everyone’s happiness and progress that matters. If you can raise the floor for everyone, so that we’re all just better, what’s not to like about giving everybody that treatment?
> If you can raise the floor for everyone, so that we’re all just better, what’s not to like about giving everybody that treatment?
The same that’s not to like about forcing anything on someone against their will because despite their protestations you believe it’s in their own best interests. You can justify an awful lot of evil with that line of argument.
Part of the problem is that reality tends not to be as simple as most thought experiments. The premise here is that you have some magic treatment that everyone can be 100% certain is safe and effective. That kind of situation does not arise in the real world. It takes a generally unjustifiable certainty in the correctness of your own beliefs to force something on someone else against their wishes because you think it is in their best interests.
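The "hedonic equation" framing above is ultimately an expected-value calculation under uncertain beliefs, and the certainty objection can be made concrete with a toy sketch (every number here is invented purely for illustration, and `expected_gain` is a made-up helper, not anyone's actual model):

```python
# Toy expected-value model of forcing a treatment on someone when your
# belief that it benefits them might be wrong. All numbers hypothetical.

def expected_gain(p_correct: float, gain_if_right: float,
                  harm_if_wrong: float, autonomy_cost: float) -> float:
    """Expected per-person gain from coercion.

    p_correct     : probability your belief ("it's beneficial") is true
    gain_if_right : benefit per person if you are right
    harm_if_wrong : harm per person if you are wrong
    autonomy_cost : disutility an unwilling person assigns to being coerced
    """
    return (p_correct * gain_if_right
            - (1 - p_correct) * harm_if_wrong
            - autonomy_cost)

# With near-certainty, coercion looks positive on this toy model...
assert expected_gain(0.99, 10.0, 50.0, 2.0) > 0
# ...but modest doubt flips the sign, which is the point about
# "generally unjustifiable certainty".
assert expected_gain(0.80, 10.0, 50.0, 2.0) < 0
```

The sign flip with only a small drop in confidence is the whole argument in miniature: the calculation is dominated by how sure you are entitled to be, not by the size of the promised benefit.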
On the other hand, if you look around at the real world it’s also pretty obvious that most people frequently do make choices not in their own best interests, or even in line with their own stated goals.
Forcing people to not do stupid things is indeed an easy road to very questionable practices, but a stance that supports leaving people to make objectively bad choices for confused or irrational reasons doesn’t really seem much better. “Sure, he may not be aware of the cliff he’s about to walk off of, but he chose to walk that way and we shouldn’t force him not to against his will.” Yeah, that’s not evil at all.
Not to mention that, in reality, a lot of stupid decisions negatively impact people other than just the person making them. I’m willing to grant letting people make their own mistakes but I have to draw the line when they start screwing things up for me.
> On the other hand, if you look around at the real world it’s also pretty obvious that most people frequently do make choices not in their own best interests, or even in line with their own stated goals.
I find it interesting that you make a distinction between people making choices that are not in their own best interests and choices not in line with their own stated goals. The implication is that some people’s stated goals are not in line with their own ‘best interests’. While that may be true, presuming that you (or anyone else) are qualified to make that call and override their stated goals in favour of what you judge to be their best interest is a tendency that I consider extremely pernicious.
> Forcing people to not do stupid things is indeed an easy road to very questionable practices, but a stance that supports leaving people to make objectively bad choices for confused or irrational reasons doesn’t really seem much better. “Sure, he may not be aware of the cliff he’s about to walk off of, but he chose to walk that way and we shouldn’t force him not to against his will.” Yeah, that’s not evil at all.
There’s a world of difference between informing someone of a perceived danger that you suspect they are unaware of (a cliff they’re about to walk off) and forcibly preventing them from taking some action once they have been made aware of your concerns. There is also a world of difference between offering assistance and forcing something on someone to ‘help’ them against their will.
Incidentally I don’t believe there is a general moral obligation to warn someone away from taking an action that you believe may harm them. It may be morally praiseworthy to go out of your way to warn them but it is not ‘evil’ to refrain from doing so in my opinion.
> Not to mention that, in reality, a lot of stupid decisions negatively impact people other than just the person making them. I’m willing to grant letting people make their own mistakes but I have to draw the line when they start screwing things up for me.
In general this is in a different category from the kinds of issues we’ve been talking about (forcing ‘help’ on someone who doesn’t want it). I have no problem with not allowing people to drive while intoxicated for example to prevent them causing harm to other road users. In most such cases you are not really imposing your will on them, rather you are withholding their access to some resource (public roads in this case) based on certain criteria designed to reduce negative externalities imposed on others.
Where this issue does get a little complicated is when the negative externalities you are trying to prevent cannot be eliminated without forcing something upon others. The current vaccination debate is an example—there should be no problem allowing people to refuse vaccines if they only harmed themselves but they may pose risks to the very old and the very young (who cannot be vaccinated for medical reasons) through their choices. In theory you could resolve this dilemma by denying access to public spaces for people who refused to be vaccinated but there are obvious practical implementation difficulties with that approach.
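For what it’s worth, the standard back-of-envelope for the risk posed to those who cannot be vaccinated is the herd-immunity threshold: sustained transmission stops once roughly 1 - 1/R0 of the population is immune, where R0 is the basic reproduction number. A sketch (this is the textbook simplification, assuming homogeneous mixing and a perfect vaccine; the R0 values are rough illustrative ranges, not precise figures):

```python
# Classic herd-immunity threshold: the fraction of a population that must
# be immune to prevent sustained transmission is roughly 1 - 1/R0.
# Textbook simplification: homogeneous mixing, fully effective vaccine.

def herd_immunity_threshold(r0: float) -> float:
    if r0 <= 1:
        return 0.0  # an outbreak dies out on its own
    return 1 - 1 / r0

# Rough textbook R0 ranges, for illustration only:
for disease, r0 in [("seasonal flu", 1.5), ("polio", 6.0), ("measles", 15.0)]:
    print(f"{disease}: ~{herd_immunity_threshold(r0):.0%} immunity needed")
```

The relevant point for the debate above: for highly contagious diseases the threshold sits in the 90%+ range, so each refusal eats directly into the margin that protects those who medically cannot be vaccinated.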
> I find it interesting that you make a distinction between people making choices that are not in their own best interests and choices not in line with their own stated goals.
Generally what I had in mind there is selecting concrete goals without regard for likely consequences, or with incorrect weighting due to, e.g. extreme hyperbolic discounting, or being cognitively impaired. In other words, when someone’s expectations about a stated goal are wrong and the actual outcome will be something they personally consider undesirable.
If they really do know what they’re getting into and are okay with it, then fine, not my problem.
If it helps, I also have no problem with someone valuing self-determination so highly that they’d rather suffer severe negative consequences than be deprived of choice, since in that case interfering would lead to an outcome they’d like even less, which misses the entire point. I strongly doubt that applies to more than a tiny minority of people, though.
> There’s a world of difference between informing someone of a perceived danger that you suspect they are unaware of (a cliff they’re about to walk off) and forcibly preventing them from taking some action once they have been made aware of your concerns.
Actually making someone aware of a danger they’re approaching is often easier said than done. People have a habit of disregarding things they don’t want to listen to. What’s that Douglas Adams quote? Something like, “Humans are remarkable among species both for having the ability to learn from others’ mistakes, and for their consistent disinclination to do so.”
> Incidentally I don’t believe there is a general moral obligation to warn someone away from taking an action that you believe may harm them. It may be morally praiseworthy to go out of your way to warn them but it is not ‘evil’ to refrain from doing so in my opinion.
I strenuously disagree that inaction is ever morally neutral. Given an opportunity to intervene, choosing to do nothing is still a choice to allow the situation to continue. Passivity is no excuse to dodge moral responsibility for one’s choices.
I begin to suspect that may be the root of our actual disagreement here.
> In general this is in a different category from the kinds of issues we’ve been talking about (forcing ‘help’ on someone who doesn’t want it).
It’s a completely different issue, actually.
...but there’s a huge amount of overlap. Simply by virtue of living in society, almost any choice an individual makes imposes some sort of externality on others, positive or negative. The externalities may be tiny, or diffuse, but still there.
Tying back to the “helping people against their will” issue, for instance: Consider an otherwise successful individual who, after a romantic relationship fails, has an emotional collapse and goes out and gets extremely drunk. Upon returning home, in a fit of rage, he destroys and throws out a variety of items that were gifts from the ex-lover. Badly hung over, he doesn’t show up to work the next day and is fired from his job. He eventually finds a new, lower-paid and less-skilled job, but is now unable to make mortgage payments and loses his house.
On the surface, his actions have harmed only himself. However, consider what society as a whole has lost:
1) The economic value of his work for the period where he was unemployed
2) The greater economic value of a skilled, better-paid worker
3) The wealth represented by the destroyed gifts
4) The transaction costs and economic inefficiency resulting from the foreclosure, job search, &c.
5) The value of any other economic activity he would have participated in, had these events not occurred. [0]
A very serious loss? Not really. Certainly, it would be extremely dubious to say the least for some authority to intervene. But the loss remains, and imposes a very real, if small, negative impact on every other individual.
Now, multiply the essence of that scenario by countless individuals; the cumulative foolishness of the masses, reckless and irrational, the costs of their mistakes borne by everyone alike. Justification for micromanaging everyone’s lives? No—if only because that doesn’t generally work out very well. Yet, lacking a solution doesn’t make the problem any less real.
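The “multiply by countless individuals” step is simple arithmetic; here is a sketch with entirely made-up figures standing in for the loss categories listed above:

```python
# Back-of-envelope for aggregating small individual losses across a society.
# Every figure below is invented purely for illustration.

losses_per_incident = {
    "lost wages while unemployed":        8_000,
    "skill premium forgone (first year)": 5_000,
    "destroyed property":                 1_000,
    "foreclosure / job-search friction":  4_000,
    "other forgone economic activity":    2_000,
}

per_incident = sum(losses_per_incident.values())       # 20,000 per episode

population = 10_000_000   # hypothetical society
incident_rate = 0.001     # 0.1% of people per year, also hypothetical

aggregate = per_incident * population * incident_rate  # total yearly loss
per_capita = aggregate / population                    # what "everyone alike" bears

print(f"aggregate yearly loss: {aggregate:,.0f}")   # 200,000,000
print(f"cost borne per person: {per_capita:,.2f}")  # 20.00
```

Note how the sketch reproduces both halves of the argument: the aggregate figure is large, while the per-capita share is small and diffuse.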
So, to return to the original discussion, with a hypothetical medical procedure to make people smarter and more sensible, or whatever; if it would reduce the losses from minor foolishness, then not forcing people to accept it is equivalent to forcing people to continue paying the costs incurred by those mistakes.
Not to say I wouldn’t also be suspicious of such a proposition, but don’t pretend that opposing the idea is free. It’s not, so long as we’re all sharing this society.
Maybe you’re happy to pay the costs of allowing other people to make mistakes, but I’m not. It may very well be that the alternatives are worse, but that doesn’t make the situation any more pleasant.
> Where this issue does get a little complicated is when the negative externalities you are trying to prevent cannot be eliminated without forcing something upon others. The current vaccination debate is an example—there should be no problem allowing people to refuse vaccines if they only harmed themselves but they may pose risks to the very old and the very young (who cannot be vaccinated for medical reasons) through their choices. In theory you could resolve this dilemma by denying access to public spaces for people who refused to be vaccinated but there are obvious practical implementation difficulties with that approach.
Complicated? That’s clear as day. People can either accept the vaccine or find another society to live in. Freeloading off of everyone else and objectively endangering those who are truly unable to participate is irresponsible, intolerable, reckless idiocy of staggering proportion.
[0] One might be tempted to argue that many of these aren’t really a loss, because someone else will derive value from selling the house, the destroyed items will increase demand for items of that type, &c. This is the mistake of treating wealth as zero-sum, isomorphic to the Broken Window Fallacy, wherein the whole economy takes a net loss even though some individuals may profit.
> In other words, when someone’s expectations about a stated goal are wrong and the actual outcome will be something they personally consider undesirable.
Explaining to them why you believe they’re making a mistake is justified. Interfering if they choose to continue anyway, not.
> I strenuously disagree that inaction is ever morally neutral. Given an opportunity to intervene, choosing to do nothing is still a choice to allow the situation to continue. Passivity is no excuse to dodge moral responsibility for one’s choices.
> I begin to suspect that may be the root of our actual disagreement here.
I don’t recognize a moral responsibility to take action to help others, only a moral responsibility not to take action to harm others. That may indeed be the root of our disagreement.
This is tangential to the original debate though, which is about forcing something on others against their will because you perceive it to be for the good of the collective.
> Badly hung over, he doesn’t show up to work the next day and is fired from his job.
I don’t want to nitpick but if you are free to create a hypothetical example to support your case you should be able to do better than this. What kind of idiot employer would fire someone for missing one day of work? I understand you are trying to make a point that an individual’s choices have impacts beyond himself but the weakness of your argument is reflected in the weakness of your example.
This probably ties back again to the root of our disagreement you identified earlier. Your hypothetical individual is not depriving society as a whole of anything because he doesn’t owe them anything. People make many suboptimal choices but the benefits we accrue from the wise choices of others are not our god-given right. If we receive a boon due to the actions of others that is to be welcomed. It does not mean that we have a right to demand they labour for the good of the collective at all times.
> Complicated? That’s clear as day. People can either accept the vaccine or find another society to live in. Freeloading off of everyone else and objectively endangering those who are truly unable to participate is irresponsible, intolerable, reckless idiocy of staggering proportion.
I chose this example because I can recognize a somewhat coherent case for enforcing vaccinations. I still don’t think the case is strong enough to justify compulsion. It’s not something I have a great deal of interest in however so I haven’t looked for a detailed breakdown of the actual risks imposed on those who are not able to be vaccinated. There would be a level at which I could be persuaded but I suspect the actual risk is far below that level. I’m somewhat agnostic on the related issue of whether parents should be allowed to make this decision for their children—I lean that way only because the alternative of allowing the government to make the decision is less palatable. A side benefit is that allowing parents to make the decision probably improves the gene pool to some extent.
> I might be wrong in my beliefs about their best interests, but that is a separate issue.
Given the assumption that undergoing the treatment is in everyone’s best interests, wouldn’t it be rational to forgo autonomous choice? Can we agree that it would be?
> I might be wrong in my beliefs about their best interests, but that is a separate issue.
It’s not a separate issue, it’s the issue.
You want me to take as given the assumption that undergoing the treatment is in everyone’s best interests but we’re debating whether that makes it legitimate to force the treatment on people who are refusing it. Most of them are presumably refusing the treatment because they don’t believe it is in their best interests. That fact should make you question your original assumption that the treatment is in everyone’s best interests, or you have to bite the bullet and say that you are right, they are wrong and as a result their opinions on the matter can just be ignored.
Just out of curiosity, are you for or against the Friendly AI project? I tend to think that it might go against the expressed beforehand will of a lot of people, who would rather watch Simpsons and have sex than have their lives radically transformed by some oversized toaster.
I think that AI with greater than human intelligence will happen sooner or later and I’d prefer it to be friendly rather than not, so yes, I’m for the Friendly AI project.
In general I don’t support attempting to restrict progress or change simply because some people are not comfortable with it. I don’t put that in the same category as imposing compulsory intelligence enhancement on someone who doesn’t want it.
Well, the AI would “presume to know” what’s in everyone’s best interests. How is that different? It’s smarter than us, that’s it. Self-governance isn’t holy.
An AI that forced anything on humans ‘for their own good’ against their will would not count as friendly by my definition. A ‘friendly AI’ project that would be happy building such an AI would actually be an unfriendly AI project in my judgement and I would oppose it. I don’t think that the SIAI is working towards such an AI but I am a little wary of the tendency to utilitarian thinking amongst SIAI staff and supporters as I have serious concerns that an AI built on utilitarian moral principles would be decidedly unfriendly by my standards.
I definitely seem to have a tendency to utilitarian thinking. Could you give me a reading tip on the ethical philosophy you subscribe to, so that I can evaluate it more in-depth?
The closest named ethical philosophy I’ve found to mine is something like Ethical Egoism. It’s not close enough to what I believe that I’m comfortable self-identifying as an ethical egoist, however. I’ve posted quite a bit here in the past on the topic—a search for my user name and ‘ethics’ using the custom search will turn up quite a few posts. I’ve been thinking about writing up a more complete summary at some point but haven’t done so yet.
The category “actions forced on humans ‘for their own good’ against their will” is not binary. There’s actually a large gray area. I’d appreciate it if you would detail where you draw the line. A couple examples near the line: things someone would object to if they knew about them, but which are by no reasonable standard things that are worth them knowing about (largely these would be things people only weakly object to); an AI lobbying a government to implement a broadly supported policy that is opposed by special interests. I suppose the first trades on the grayness in “against their will” and the second in “forced”.
> I tend to think that it might go against the expressed beforehand will of a lot of people, who would rather watch Simpsons and have sex than have their lives radically transformed by some oversized toaster.
It doesn’t have to radically transform their lives, if they wouldn’t want it to upon reflection. FAI ≠ enforced transhumanity.