Merely having the inability to regret an occurrence doesn’t make the occurrence coincide with one’s preferences. I couldn’t regret an unexpected, instantaneous death from which I was never revived, either; I emphatically don’t prefer one.
But wire-heading is not death. It is the opposite—the most fulfilling experience possible, to which everything else pales in comparison.
It seems you think paternalism is okay if it is pure in intent and flawless in execution.
Suppose it has been shown that vulnerability to smoking addiction is due to a certain gene, and that we could create a virus that would silently spread through the human population and fix this gene in everyone, willing or not. Suppose our intent is pure, and we know that this virus would operate flawlessly, affecting only this gene and having no other effects.
Would you be in favor of releasing this virus?
...”fulfilling”? Wire-heading only fulfills “make me happy”—it doesn’t fulfill any other goal that a person may have.
“Fulfilling”, in the sense of “To accomplish or carry into effect, as an intention, promise, or prophecy, a desire, prayer, or requirement, etc.; to complete by performance; to answer the requisitions of; to bring to pass, as a purpose or design; to effectuate” (Webster 1913), is precisely what wire-heading cannot do.
Your other goals are immaterial and pointless to the outside world.
Nevertheless, suppose the FAI respects such a desire. This is questionable, because in the FAI’s mind it is tantamount to letting a depressed patient stay depressed simply because a neurotransmitter imbalance causes them to want to stay depressed. But suppose it respects this tendency anyway.
In that case, the cheapest way to satisfy your desire, in terms of consumption of resources, is to create a simulation where you feel like you are thinking, learning and exploring, though in reality your brain is in a vat.
You’d probably be better off just being happy and sharing in the FAI’s infinite wisdom.
Would you do me a favor and refer to this hypothesized agent as a DAI (Denis Artificial Intelligence)? Such an entity is nothing I would call Friendly, and, given the widespread disagreement on what is Friendly, I believe any rhetorical candidates should be referred to by other names. In the meantime:
I reject this point. Let me give a concrete example.
Recently I have been playing a lot of Forza Motorsport 2 on the Xbox 360. I have made some gaming buddies who are more experienced in the game than I am, both better at driving and better at tuning cars. (Like Magic: The Gathering, Forza 2 is explicitly played on both the preparation and performance levels, although tilted more towards the latter.) I admire the skills they have developed in creating and controlling their vehicles and, wishing to be able to admire myself in a similar fashion, I wish to develop my own skills to a similar degree.
What is the DAI response to this?
An FAI-enhanced World of Warcraft?
You can still interact with others even though you’re in a vat.
Though as I commented elsewhere, chances are that FAI could fabricate more engaging companions for you than mere human beings.
And chances are that all this is inferior to being the ultimate wirehead.
That could be fairly awesome.
If it comes to that, I could see making the compromise.
This relates to subjects discussed in the other thread—I’ll let that conversation stand in for my reply to it.
Well...
Suppose you want to explore and learn and build ad infinitum. Progress in your activities requires you to control increasing amounts of matter and consume increasing amounts of energy, until the point where you conflict with others who also want to build and explore. When that point is reached, the only way the FAI can make you all happy is to intervene while you all sleep, put each of you in a separate vat, and from then on let each of you explore an instance of the universe that it simulates for you.
Should it let you wage Star Wars on each other instead? And how would that be different from no AI to begin with?
You seem to be engaging in all-or-nothing thinking. Because I want more X does not mean that I want to maximize X to the exclusion of all other possibilities. I want to explore and learn and build, but I also want to act fairly toward my fellow sapients/sentients. And I want to be happy, and I want my happiness to stem causally from exploring, learning, building, and fairness. And I want a thousand other things I’m not aware of.
An AI which examines my field of desires and maximizes one to the exclusion of all others is actively inimical to my current desires, and to all extrapolations of my current desires I can see.
But everything you do is temporary. All the results you get from it are temporary.
If you seek quality of experience, then the AI can wirehead you and give you that, with minimal consumption of resources. Even if you do not want a constant ultimate experience, all the thousands of your needs are more efficiently fulfilled in a simulation than by letting you directly manipulate matter. Allowing you to waste real resources is inimical to the length of your life and to everyone else’s.
If you seek personal growth, then the AI already is everything you can aspire to be. Your best bet at personal growth is interfacing or merging with its consciousness. And everyone can do that, as opposed to isolated growth of individual beings, which would consume resources that need to be available for others and for the AI.
Why would I build an AI which would steal everything I want to do and leave me with nothing worth doing? That doesn’t sound like the kind of future I want to build.
Edit:
That just adds a constraint to what I may accomplish—it doesn’t change my preferences.
Because only one creature can be maximized, and it’s better that it be an AI than a person.
Even if we don’t necessarily want the AI to maximize itself immediately, it will always need to be more powerful than any possible threat, and therefore more powerful than any other creature.
If you want the ultimate protector, it has to be the ultimate thing.
I don’t want it maximized, I want it satisficed—and I, at least, am willing to exchange a small existential risk for a better world. “They who can give up essential liberty to obtain a little temporary safety” &c.
If the AI can search the universe and determine that it is adequately secure from existential threats, I don’t want it expanding very quickly beyond that. Leave some room for us!
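[A minimal sketch of the maximize-versus-satisfice distinction being drawn above; the options, utilities, and threshold are made up purely for illustration and come from nothing in the thread itself.]

```python
# Illustrative contrast between a maximizer and a satisficer over the same
# hypothetical set of options (all names and numbers are arbitrary).

options = {"plan_a": 3.2, "plan_b": 7.9, "plan_c": 9.6, "plan_d": 9.9}

def maximize(utilities):
    """Pick the single highest-utility option, no matter how small the gain."""
    return max(utilities, key=utilities.get)

def satisfice(utilities, threshold):
    """Pick the first option that clears the threshold; stop searching there."""
    for name, utility in utilities.items():
        if utility >= threshold:
            return name
    return None  # nothing is good enough

print(maximize(options))        # plan_d: squeezes out the last 0.3 of utility
print(satisfice(options, 9.0))  # plan_c: good enough, search stops early
```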
But the AI has to plan for a maximized outcome until the end of the universe. In order to maximize the benefit from energy before thermal death, resource efficiency matters as much right now as it will when resources are scarcest.
This is unless the AI discovers that thermal death can be overcome, in which case, great! But what we know so far indicates that the universe will eventually die, even if many billions of years in the future. So conservative resource management is important from day 1.
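[A toy sketch of the fixed-budget reasoning above, under the assumed simplification that achievable value is roughly linear in usefully spent energy; all numbers are arbitrary illustrations, not anything claimed in the thread.]

```python
# Toy model of a fixed energy budget: if total usable energy is fixed, a unit
# wasted early reduces the final total exactly as much as a unit wasted late.

E_TOTAL = 1_000.0  # hypothetical total usable energy before thermal death

def value_achieved(waste_by_epoch):
    """Value left over is the budget minus all waste, regardless of when it occurs."""
    return E_TOTAL - sum(waste_by_epoch)

early_waste = value_achieved([10.0, 0.0, 0.0])  # wasteful now, frugal later
late_waste = value_achieved([0.0, 0.0, 10.0])   # frugal now, wasteful later

assert early_waste == late_waste  # both leave 990.0: the timing of waste doesn't matter
```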
There are things I could say in reply, but I suspect we are simply talking past each other. I may reply later if I have some new insight into the nature of our disagreement.
The way I understand our disagreement is this: you see FAI as a limited-functionality add-on that makes a few aspects of our lives easier for us, while I see it as an unstoppable force, with great implications for everything in its causal future, which just can’t not revolutionize everything, including how we feel, how we think, what we do. I believe I’m following the chain of reasoning to the end, whereas you appear to think we can stop after the first couple of steps.
You also keep asserting that you know in which particular way FAI is going to change things. Instead of repeating the same statements, you should recognise the disagreement and address it directly, rather than continuing to profess the original assertions.
I don’t think that’s the source of our disagreement—as I mentioned in another thread, if prudence demanded that the population (or some large fraction thereof) be uploaded in software to free up the material substance for other purposes, I would not object. I could even accept major changes to social norms (such as legalization of nonconsensual sex, to use Eliezer Yudkowsky’s example). Our confirmed point of disagreement is not your thesis that “a human population which acquired an FAI would become immensely different from today’s”, it is your thesis that “a human population which acquired an FAI would become wireheads”. Super Happy People, maybe—not wireheads.
One quality that’s relevant to Friendly AI is that it does stop, when appropriate. It’s entirely plausible (according to Eliezer; last time I checked) that a FAI would never do anything that wasn’t a response to an existential threat (i.e. something that could wipe out or severely alter humanity), if that was the course of action most in keeping with our CEV.
Whoa whoa whoa wait what? No. Not under a blanket description like that, at any rate. If you want to wirehead, and that’s your considered and stable desire, I say go for it. Have a blast. Just don’t drag us into it.
No. I’d be in favor of making it available in a controlled non-contagious form to individuals who were interested, though.
Apologies, Alicorn—I was confusing you with Adelene. I was paying all my attention to the content and not enough to who the author was.
Only the first paragraph (“but wire-heading is not death”) is directed at your comment. The rest is actually directed at Adelene.
My point was that you used “you won’t regret it” as a point in favor of wireheading, whereas it does not serve as a point in favor of death.
Can you check the thread of this comment:
http://lesswrong.com/lw/1o9/welcome_to_heaven/1iia?context=3#comments
and let me know what your response to that thread is?
I would save the drunk friend (unless I had some kind of special knowledge, such as that the friend got drunk in order to enable him or herself to go through with a plan to indulge a considered and stable sober desire for death). In the case of the depressed friend, I’d want to refer to my best available knowledge of what that friend would have said about the situation prior to acquiring the neurotransmitter imbalance, and act accordingly.
You’re twisting my words. I said that FAI paternalism would be different—which it would be, qualitatively and quantitatively. “Pure in intent and flawless in execution” are very fuzzy words, prone to being interpreted differently by different people, and only a very specific set of interpretations of those words would describe FAI.
I’m with Alicorn on this one: If it can be made into a contagious virus, it can almost certainly be made into a non-contagious one, and that would be the ethical thing to do. However, if it can’t be made into a non-contagious virus, I would personally not release it, and I’m going to refrain from predicting what a FAI would do in that case; part of the point of building a FAI is to be able to give those kinds of decisions to a mind that’s able to make unbiased (or much less biased, if you prefer; there’s a lot of room for improvement in any case) decisions that affect groups of people too large for humans to effectively model.
I understand. That makes some sense. Though smokers’ judgement is impaired by their addiction, one can imagine that at least they will have periods of sanity when they can choose to fix the addiction gene themselves.
We do appear to differ in the case where an infectious virus is the only option to help smokers fix that gene. I would release the virus in that case. I have no qualms about taking that decision and absorbing the responsibility.
This seems to contradict your earlier claims about wireheading. Say that some smokers get a lot of pleasure from smoking, don’t want to stop, and in fact would experience more pleasure in their lives if they kept the addiction. You’d still release the virus?