I didn’t watch the full video, but does he actually propose how human beings should be made more docile and intelligent? I don’t mean a technical method, but rather a political method of ensuring that most of humanity gets these augmentations. This is borderline impossible in a liberal democracy. I think this explains why programming an AI is a more practical approach.

Consider how many people are furious because they believe that fluoridated water turns people into docile consumers, or that vaccines give kids autism. Now imagine actually trying to convince people that the government should be allowed to mess around with their brains.

And if the government doesn’t mandate it, then the most aggressive and dangerous people will simply opt out.
In the Q&A at 15:30, he opines that it will take the first technologically enabled act of mass terrorism to persuade people. I agree: I don’t think anything will get done on x-risks until there’s a gigadeath event.
Even in such a scenario, some rotten eggs would probably refuse the smart drug treatment or the gene therapy injection—perhaps exactly those who would be the instigators of extinction events? Or at least the two groups would overlap somewhat, I fear.
I’m starting to think it would be rational to disperse our world-saving drug of choice by means of an engineered virus of our own, or something equally radically effective. But don’t quote me on that. Or whatever, go ahead.
Not just “rotten eggs” either. If there is one thing that I could nearly guarantee would bring on serious opposition from independent and extremely intelligent people, one thing that would convince people with brains to become “criminals”, it is mandating government meddling with their brains. I, for example, don’t use alcohol or any other recreational drug, I don’t use any painkiller stronger than ibuprofen without excruciating (shingles or major abscess level) pain, most of the more intelligent people I know feel to some extent the same, and I am a libertarian; do you really think I would let people I despise mess around with my mind?
On the topic of shingles: it is associated with depression. Should I ask my GP about the vaccine for prevention, given that I live in Australia, have had chickenpox, but haven’t had shingles?
You don’t have to trust the government, you just have to trust the scientists who developed the drug or gene therapy. They are the ones who would be responsible for the drug working as advertised and having negligible side-effects.
But yes, I sympathize with you; I’m actually just like that myself. Some people wouldn’t be able to appreciate the usefulness of the drug, no matter how hard you tried to explain to them that it’s safe, helpful and actually globally risk-alleviating. Those who were memetically sealed off from believing that, or just weren’t capable of grasping it, would oppose it strongly, possibly enough to go to war with the rest of the world over it.
It would also take time to reach the whole population with a governmentally mandated treatment. There isn’t even a world government right now. We are weak and slow. And one comparatively insane man on the run is one too many.
Assuming an efficient treatment for human stupidity could be developed (and assuming that would be a rational solution to our predicament), then the right thing to do would be to deliver it in the manner causing the least social upheaval and opposition. That would most definitely be a covert dispersal. A globally coordinated release of a weaponized retrovirus, for example.
We still have some time before even that could be accomplished, though. And once that tech gets here, we face a hugely increased risk of bioterrorism, or just accidental catastrophes at the hands of some clumsy research assistant, before we have a chance to even properly prototype and test our perfect smart drug.
If I were convinced of the safety and efficacy of an intelligence-enhancing treatment, I would be inclined to take it and use my enhanced intelligence to combat any government attempts to mandate such treatment.
It might only be a small enhancement. +30 IQ points across the board would save the world; +30 to just you would not make much difference.
I find that claim highly dubious.
I certainly haven’t supported it; that was the kind of scenario I had in mind, though. Whether cognitive enhancement alone is enough is another debate entirely.
Thirty additional IQ points for everyone could mean that AI gets developed sooner, and therefore that there is less time for FAI research.
The same goes for biological research that might lead to biological weapons.
My personal suspicion, and what motivates me to think that IA is a good idea, is that the human race is facing a massive commons problem with respect to AGI. Realizing that there is a problem requires a lot of intelligence. If no one, or very few, realize that something is wrong, then it is unlikely that anything will be done about it. If this is the case, it doesn’t matter how much time we have: if there’s little support for the project of managing the future, little money and little manpower, then even a century or a millennium is not long enough.
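To make the commons problem concrete, here is a minimal sketch of the threshold dynamic I have in mind (every number is invented, and the binomial model is just for illustration): nothing gets funded until awareness clears a critical mass, so a small across-the-board shift in the odds that any one person notices the problem matters far more than how much time we have.

```python
# Toy model: the project of managing the future goes ahead only if at least
# `threshold` of `n_people` independently recognize the problem.
# All numbers are invented for illustration.
import math

def success_probability(n_people: int, p_aware: float, threshold: int) -> float:
    """P(at least `threshold` people are aware), under a binomial model."""
    return sum(
        math.comb(n_people, k) * p_aware**k * (1 - p_aware)**(n_people - k)
        for k in range(threshold, n_people + 1)
    )

# When awareness is rare, the critical mass essentially never assembles,
# no matter how long we wait.
print(success_probability(1000, 0.001, 10))  # ~1e-7
# A modest across-the-board boost changes the picture entirely.
print(success_probability(1000, 0.02, 10))   # ~0.995
```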
The notion that higher IQ means that more money will be allocated to solving FAI is idealistic. Reality is complex, and the reasons for which money gets allocated are often political in nature and depend on whether institutions function properly. Even if individuals have a high IQ, that doesn’t mean they won’t fall into the groupthink of their institution.
Real-world feedback, however, helps people to see problems regardless of their intelligence. Real-world feedback provides truth, whereas a high IQ can just mean that you are better at stacking ideas on top of each other.
Christian, FAI is hard because it doesn’t necessarily provide any feedback. There are lots of scenarios where the first failed FAI just kills us all.
That’s why I am advocating IA as a way to up the odds of the human race producing FAI before uFAI.
But really, the more I think about it, the more I think that we would do better to avoid AGI altogether and build brain emulations. Editing the mental states of ems and watching the results will provide feedback, and will allow us to “look before we jump”.
Some sub-ideas of an FAI theory might be put to the test in artificial intelligences that aren’t smart enough to improve themselves.
“Editing the mental states of ems” sounds ominous. We would (at some point) be dealing with conscious beings, and performing virtual brain surgery on them has ethical implications.
Moreover, it’s not clear that controlled experiments on ems, assuming we get past the ethical issues, will yield radical insight on the structure of intelligence, compared to current brain science.
It’s a little like being able to observe a program by running it under a debugger, versus examining its binary code (plus manual testing). Yes, this is a much better situation, but it’s still far more cumbersome than looking at the source code; and that in turn is vastly inferior to constructing a theory of how to write similar programs.
When you say you advocate intelligence augmentation (this really needs a more searchable acronym), do you mean only through genetic means or also through technological “add-ons”? (By that I mean devices plugging you into Wikipedia or giving you access to advanced math skills in the same way that a calculator boosts your arithmetic.)
Hopefully volunteers could be found; but in any case, the stakes here are the end of the world, and the end justifies the means.
To whoever downvoted Roko’s comment—check out the distinction between these ideas:
One Life Against the World
Ends Don’t Justify Means (Among Humans)
Your first link seems to be broken.
I’d volunteer and I’m sure I’m not the only one here.
Heroes of the future sign up in this thread ;-)
You’re not, though I’m not sure I’d be an especially useful data source.
I’ve met at least one person who would like a synesthesia on-off switch for their brain—that would make your data useful right there.
Looks to me like that’d be one of the more complicated things to pull off, unfortunately. Too bad; I know a few people who’d like that, too.
Please expand on what “the end” means in this case. What do you expect we would gain from perfecting whole-brain emulation, I assume of humans? How does that get us out of our current mess, exactly?
I think that WBE stands a greater chance of precipitating a friendly singularity.
It doesn’t have to; working ems would be good enough to lift us out of the problematic situation we’re in at the moment.
I worry these modified ems won’t share our values to a sufficient extent.
It is a valid worry. But under the right conditions, where we take care not to let evolutionary dynamics take hold, this might give us a better shot at a friendly singularity than any other approach.
Possibly. But I’d rather use selected human geniuses with the right ideas copied and sped up, and wait for them to crack FAI before going further (even if FAI doesn’t give a powerful intelligence explosion—then FAI is simply formalization and preservation of preference, rather than power to enact this preference).
That’s correct. So why do I think it would help? What does the risk landscape look like as a function of population-average intelligence?
So individual autonomy is more important? I just don’t get that. It’s what’s behind the wheels of the autonomous individuals that matters. It’s a hedonic equation. The risk that unaltered humans pose to the happiness and progress of all other individuals might just work out to “way too fracking high”.
It’s everyone’s happiness and progress that matters. If you can raise the floor for everyone, so that we’re all just better, what’s not to like about giving everybody that treatment?
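To spell that hedonic equation out as a toy calculation (every quantity below is made up; only the structure of the comparison is the point):

```python
# Toy expected-value comparison: per-person value of autonomy versus the
# expected per-person cost of catastrophes enabled by unaltered humans.
# All numbers are invented for illustration.
autonomy_value = 1.0       # utility of retaining free choice, per person
p_catastrophe = 0.01       # chance unaltered humans trigger an extinction event
catastrophe_cost = 1000.0  # per-person utility lost if that happens

expected_risk_cost = p_catastrophe * catastrophe_cost  # 10.0 per person

# With these made-up numbers, the risk term dominates the autonomy term.
print(expected_risk_cost > autonomy_value)  # True
```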
The same that’s not to like about forcing anything on someone against their will because despite their protestations you believe it’s in their own best interests. You can justify an awful lot of evil with that line of argument.
Part of the problem is that reality tends not to be as simple as most thought experiments. The premise here is that you have some magic treatment that everyone can be 100% certain is safe and effective. That kind of situation does not arise in the real world. It takes a generally unjustifiable certainty in the correctness of your own beliefs to force something on someone else against their wishes because you think it is in their best interests.
On the other hand, if you look around at the real world it’s also pretty obvious that most people frequently do make choices not in their own best interests, or even in line with their own stated goals.
Forcing people to not do stupid things is indeed an easy road to very questionable practices, but a stance that supports leaving people to make objectively bad choices for confused or irrational reasons doesn’t really seem much better. “Sure, he may not be aware of the cliff he’s about to walk off of, but he chose to walk that way and we shouldn’t force him not to against his will.” Yeah, that’s not evil at all.
Not to mention that, in reality, a lot of stupid decisions negatively impact people other than just the person making them. I’m willing to grant letting people make their own mistakes but I have to draw the line when they start screwing things up for me.
I find it interesting that you make a distinction between people making choices that are not in their own best interests and choices not in line with their own stated goals. The implication is that some people’s stated goals are not in line with their own ‘best interests’. While that may be true, presuming that you (or anyone else) are qualified to make that call and override their stated goals in favour of what you judge to be their best interest is a tendency that I consider extremely pernicious.
There’s a world of difference between informing someone of a perceived danger that you suspect they are unaware of (a cliff they’re about to walk off) and forcibly preventing them from taking some action once they have been made aware of your concerns. There is also a world of difference between offering assistance and forcing something on someone to ‘help’ them against their will.
Incidentally I don’t believe there is a general moral obligation to warn someone away from taking an action that you believe may harm them. It may be morally praiseworthy to go out of your way to warn them but it is not ‘evil’ to refrain from doing so in my opinion.
In general this is in a different category from the kinds of issues we’ve been talking about (forcing ‘help’ on someone who doesn’t want it). I have no problem with not allowing people to drive while intoxicated, for example, to prevent them from causing harm to other road users. In most such cases you are not really imposing your will on them; rather, you are withholding their access to some resource (public roads in this case) based on certain criteria designed to reduce negative externalities imposed on others.
Where this issue does get a little complicated is when the negative externalities you are trying to prevent cannot be eliminated without forcing something upon others. The current vaccination debate is an example: there would be no problem allowing people to refuse vaccines if doing so harmed only themselves, but their choices may pose risks to the very old and the very young (who cannot be vaccinated for medical reasons). In theory you could resolve this dilemma by denying access to public spaces to people who refused to be vaccinated, but there are obvious practical implementation difficulties with that approach.
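(For a rough sense of the scale of that externality, the usual back-of-envelope is the herd-immunity threshold from the basic SIR model; the numbers below are illustrative, not a real risk assessment.)

```python
# Herd-immunity threshold from the basic SIR model: an outbreak stops
# spreading once the immune fraction exceeds 1 - 1/R0.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

r0_measles = 15.0  # measles R0 is commonly quoted in the 12-18 range
print(f"{herd_immunity_threshold(r0_measles):.0%}")  # 93% must be immune

# If ~5% of people cannot be vaccinated for medical reasons, refusals beyond
# roughly 2% of the population cost that group its herd protection.
```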
Generally what I had in mind there is selecting concrete goals without regard for likely consequences, or with incorrect weighting due to, e.g., extreme hyperbolic discounting or cognitive impairment. In other words, when someone’s expectations about a stated goal are wrong and the actual outcome will be something they personally consider undesirable.
If they really do know what they’re getting into and are okay with it, then fine, not my problem.
If it helps, I also have no problem with someone valuing self-determination so highly that they’d rather suffer severe negative consequences than be deprived of choice, since in that case interfering would lead to an outcome they’d like even less, which misses the entire point. I strongly doubt that applies to more than a tiny minority of people, though.
Actually making someone aware of a danger they’re approaching is often easier said than done. People have a habit of disregarding things they don’t want to listen to. What’s that Douglas Adams quote? Something like, “Humans are remarkable among species both for having the ability to learn from others’ mistakes, and for their consistent disinclination to do so.”
I strenuously disagree that inaction is ever morally neutral. Given an opportunity to intervene, choosing to do nothing is still a choice to allow the situation to continue. Passivity is no excuse to dodge moral responsibility for one’s choices.
I begin to suspect that may be the root of our actual disagreement here.
It’s a completely different issue, actually.
...but there’s a huge amount of overlap. Simply by virtue of living in society, almost any choice an individual makes imposes some sort of externality on others, positive or negative. The externalities may be tiny, or diffuse, but still there.
Tying back to the “helping people against their will” issue, for instance: consider an otherwise successful individual who one day has an emotional collapse after a romantic relationship fails, and goes out and gets extremely drunk. Upon returning home, in a fit of rage, he destroys and throws out a variety of items that were gifts from the ex-lover. Badly hung over, he doesn’t show up to work the next day and is fired from his job. He eventually finds a new, lower-paid and less-skilled job, but is now unable to make his mortgage payments and loses his house.
On the surface, his actions have harmed only himself. However, consider what society as a whole has lost:
1) The economic value of his work for the period when he was unemployed.
2) The greater economic value of a skilled, better-paid worker.
3) The wealth represented by the destroyed gifts.
4) The transaction costs and economic inefficiency resulting from the foreclosure, job search, &c.
5) The value of any other economic activity he would have participated in, had these events not occurred. [0]
A very serious loss? Not really. Certainly, it would be extremely dubious, to say the least, for some authority to intervene. But the loss remains, and imposes a very real, if small, negative impact on every other individual.
Now, multiply the essence of that scenario by countless individuals; the cumulative foolishness of the masses, reckless and irrational, the costs of their mistakes borne by everyone alike. Justification for micromanaging everyone’s lives? No—if only because that doesn’t generally work out very well. Yet, lacking a solution doesn’t make the problem any less real.
So, to return to the original discussion: given a hypothetical medical procedure to make people smarter and more sensible, or whatever, if it would reduce the losses from minor foolishness, then not forcing people to accept it is equivalent to forcing people to continue paying the costs incurred by those mistakes.
Not to say I wouldn’t also be suspicious of such a proposition, but don’t pretend that opposing the idea is free. It’s not, so long as we’re all sharing this society.
Maybe you’re happy to pay the costs of allowing other people to make mistakes, but I’m not. It may very well be that the alternatives are worse, but that doesn’t make the situation any more pleasant.
Complicated? That’s clear as day. People can either accept the vaccine or find another society to live in. Freeloading off of everyone else and objectively endangering those who are truly unable to participate is irresponsible, intolerable, reckless idiocy of staggering proportion.
[0] One might be tempted to argue that many of these aren’t really a loss, because someone else will derive value from selling the house, the destroyed items will increase demand for items of that type, &c. This is the mistake of treating wealth as zero-sum, isomorphic to the Broken Window Fallacy, wherein the whole economy takes a net loss even though some individuals may profit.
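To make the footnote’s accounting explicit (a minimal sketch with invented prices): the money flows are identical in both worlds, since someone spends ten and someone earns ten either way, so the difference shows up entirely in the stock of real goods.

```python
# Broken Window Fallacy in miniature: the glazier's gain is real, but the
# economy as a whole still ends up one good poorer. Prices are invented.

# World A: the window survives, and the shopkeeper spends his 10 on shoes.
goods_without_breakage = {"original window": 10, "new shoes": 10}

# World B: the window is destroyed, and the same 10 buys its replacement.
goods_with_breakage = {"replacement window": 10}

net_loss = sum(goods_without_breakage.values()) - sum(goods_with_breakage.values())
print(net_loss)  # 10: the net loss, despite the glazier's individual profit
```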
Explaining to them why you believe they’re making a mistake is justified. Interfering if they choose to continue anyway, not.
I don’t recognize a moral responsibility to take action to help others, only a moral responsibility not to take action to harm others. That may indeed be the root of our disagreement.
This is tangential to the original debate though, which is about forcing something on others against their will because you perceive it to be for the good of the collective.
I don’t want to nitpick but if you are free to create a hypothetical example to support your case you should be able to do better than this. What kind of idiot employer would fire someone for missing one day of work? I understand you are trying to make a point that an individual’s choices have impacts beyond himself but the weakness of your argument is reflected in the weakness of your example.
This probably ties back again to the root of our disagreement you identified earlier. Your hypothetical individual is not depriving society as a whole of anything because he doesn’t owe them anything. People make many suboptimal choices but the benefits we accrue from the wise choices of others are not our god-given right. If we receive a boon due to the actions of others that is to be welcomed. It does not mean that we have a right to demand they labour for the good of the collective at all times.
I chose this example because I can recognize a somewhat coherent case for enforcing vaccinations. I still don’t think the case is strong enough to justify compulsion. It’s not something I have a great deal of interest in however so I haven’t looked for a detailed breakdown of the actual risks imposed on those who are not able to be vaccinated. There would be a level at which I could be persuaded but I suspect the actual risk is far below that level. I’m somewhat agnostic on the related issue of whether parents should be allowed to make this decision for their children—I lean that way only because the alternative of allowing the government to make the decision is less palatable. A side benefit is that allowing parents to make the decision probably improves the gene pool to some extent.
I might be wrong in my beliefs about their best interests, but that is a separate issue.
Given the assumption that undergoing the treatment is in everyone’s best interests, wouldn’t it be rational to forgo autonomous choice? Can we agree that it would be?
It’s not a separate issue, it’s the issue.
You want me to take as given the assumption that undergoing the treatment is in everyone’s best interests but we’re debating whether that makes it legitimate to force the treatment on people who are refusing it. Most of them are presumably refusing the treatment because they don’t believe it is in their best interests. That fact should make you question your original assumption that the treatment is in everyone’s best interests, or you have to bite the bullet and say that you are right, they are wrong and as a result their opinions on the matter can just be ignored.
Just out of curiosity, are you for or against the Friendly AI project? I tend to think that it might go against the previously expressed will of a lot of people, who would rather watch The Simpsons and have sex than have their lives radically transformed by some oversized toaster.
I think that AI with greater-than-human intelligence will happen sooner or later, and I’d prefer it to be friendly rather than not, so yes, I’m for the Friendly AI project.
In general I don’t support attempting to restrict progress or change simply because some people are not comfortable with it. I don’t put that in the same category as imposing compulsory intelligence enhancement on someone who doesn’t want it.
Well, the AI would “presume to know” what’s in everyone’s best interests. How is that different? It’s smarter than us, that’s it. Self-governance isn’t holy.
An AI that forced anything on humans ‘for their own good’ against their will would not count as friendly by my definition. A ‘friendly AI’ project that would be happy building such an AI would actually be an unfriendly AI project in my judgement, and I would oppose it. I don’t think that the SIAI is working towards such an AI, but I am a little wary of the tendency to utilitarian thinking amongst SIAI staff and supporters, as I have serious concerns that an AI built on utilitarian moral principles would be decidedly unfriendly by my standards.
I definitely seem to have a tendency to utilitarian thinking. Could you give me a reading tip on the ethical philosophy you subscribe to, so that I can evaluate it more in-depth?
The closest named ethical philosophy I’ve found to mine is something like Ethical Egoism. It’s not close enough to what I believe that I’m comfortable self identifying as an ethical egoist however. I’ve posted quite a bit here in the past on the topic—a search for my user name and ‘ethics’ using the custom search will turn up quite a few posts. I’ve been thinking about writing up a more complete summary at some point but haven’t done so yet.
The category “actions forced on humans ‘for their own good’ against their will” is not binary. There’s actually a large gray area. I’d appreciate it if you would detail where you draw the line. A couple examples near the line: things someone would object to if they knew about them, but which are by no reasonable standard things that are worth them knowing about (largely these would be things people only weakly object to); an AI lobbying a government to implement a broadly supported policy that is opposed by special interests. I suppose the first trades on the grayness in “against their will” and the second in “forced”.
It doesn’t have to radically transform their lives, if they wouldn’t want it to upon reflection. FAI ≠ enforced transhumanity.
Gene therapy of the type we do at the moment typically works through an engineered virus. But as the technology progresses, you no longer have to be a nation-state to do genetic engineering. A small group of super-empowered individuals might be able to do it.
Right… I might have my chance then to save the world. The problem is, everyone will get access to the technology at roughly the same time, I imagine. What if the military gets there first? This has probably been discussed elsewhere here on LW, though...
I suspect that once most people have had themselves or their children cognitively enhanced, you are in much better shape for dealing with the 10% of sticklers in a firm but fair way.
I’m not sure quite what you’re advocating here but ‘dealing with the 10% of sticklers in a firm but fair way’ has very ominous overtones to me.
Those people won’t get the jobs or university education they would need to make use of dangerous knowledge, such as how to manufacture artificial viruses, because they aren’t smart enough to compete with the rest.
Well, presumably Roko means we would be restricting the freedom of the irrational sticklers—possibly very efficiently due to our superior intelligence—rather than overriding their will entirely (or rather, making informed guesses as to what is in their ultimate interests, and then acting on that).
Presumably you refer to the violation of individuals’ rights here—forcing people to undergo some kind of cognitive modification in order to participate in society sounds creepy?
But how would you feel if the first people to undergo the treatments were politicians? They might be enhanced so that they were incapable of lying. Think of the good that could do.
I think I’d feel bad about the resulting fallout in the politicians’ home lives.
lol… OK, maybe you’d have to couple this with marriage counseling or whatever.
My feeling is that if you rendered politicians incapable of lying it would be hard to distinguish from rendering them incapable of speaking.
If to become a politician you had to undergo some kind of process to enhance intelligence or honesty I wouldn’t necessarily object. Becoming a politician is a voluntary choice however and so that’s a very different proposition from forcing some kind of treatment on every member of society.
Simply using a lie detector on politicians might be a much better idea. It’s also much easier. Of course, a lie detector doesn’t really detect whether someone is lying, but the same goes for any cognitive enhancement.
Out of curiosity, what do you have in mind here as “participate in society”?
That is, if someone wants to reject this hypothetical, make-you-smarter-and-nicer cognitive modification, what kind of consequences might they face, and what would they miss out on?
The ethical issues of simply forcing people to accept it are obvious, but most of the alternatives that occur to me don’t actually seem that much better. Hence your point about “the people who do get made smarter can figure it out”, I guess.