You seem to be engaging in all-or-nothing thinking. That I want more X does not mean that I want to maximize X to the exclusion of all other possibilities. I want to explore and learn and build, but I also want to act fairly toward my fellow sapients/sentients. And I want to be happy, and I want my happiness to stem causally from exploring, learning, building, and fairness. And I want a thousand other things I’m not aware of.
An AI which examines my field of desires and maximizes one to the exclusion of all others is actively inimical to my current desires, and to all extrapolations of my current desires I can see.
But everything you do is temporary. All the results you get from it are temporary.
If you seek quality of experience, then the AI can wirehead you and give you that, with minimal consumption of resources. Even if you do not want a constant ultimate experience, your thousands of needs are all more efficiently fulfilled in a simulation than by letting you directly manipulate matter. Allowing you to waste real resources is inimical both to the length of your own life and to everyone else’s.
If you seek personal growth, then the AI already is everything you can aspire to be. Your best bet at personal growth is interfacing or merging with its consciousness. And everyone can do that, as opposed to isolated growth of individual beings, which would consume resources that need to be available for others and for the AI.
If you seek personal growth, then the AI already is everything you can aspire to be.
Why would I build an AI which would steal everything I want to do and leave me with nothing worth doing? That doesn’t sound like the kind of future I want to build.
Edit:
But everything you do is temporary. All the results you get from it are temporary.
That just adds a constraint to what I may accomplish—it doesn’t change my preferences.
Why would I build an AI which would steal everything I want to do and leave me with nothing worth doing? That doesn’t sound like the kind of future I want to build.
Because only one creature can be maximized, and it’s better that it be an AI than a person.
Even if we don’t necessarily want the AI to maximize itself immediately, it will always need to be more powerful than any possible threat, and therefore more powerful than any other creature.
If you want the ultimate protector, it has to be the ultimate thing.
I don’t want it maximized, I want it satisficed—and I, at least, am willing to exchange a small existential risk for a better world. “They who can give up essential liberty to obtain a little temporary safety” &c.
If the AI can search the universe and determine that it is adequately secure from existential threats, I don’t want it expanding very quickly beyond that. Leave some room for us!
But the AI has to plan for a maximized outcome until the end of the universe. In order to maximize the benefit from energy before heat death, resource efficiency matters as much right now as it will when resources are scarcest.
Unless, that is, the AI discovers that heat death can be overcome, in which case, great! But what we know so far indicates that the universe will eventually die, even if many billions of years in the future. So conservative resource management is important from day 1.
There are things I could say in reply, but I suspect we are simply talking past each other. I may reply later if I have some new insight into the nature of our disagreement.
As I understand our disagreement, you see FAI as a limited-functionality add-on that makes a few aspects of our lives easier, while I see it as an unstoppable force, with great implications for everything in its causal future, which cannot help but revolutionize everything: how we feel, how we think, what we do. I believe I’m following the chain of reasoning to the end, whereas you appear to think we can stop after the first couple of steps.
You also keep asserting that you know the particular way in which FAI is going to change things. Instead of repeating the same statements, you should recognise the disagreement and address it directly, rather than continuing to profess the original assertions.
I don’t think that’s the source of our disagreement—as I mentioned in another thread, if prudence demanded that the population (or some large fraction thereof) be uploaded in software to free up the material substance for other purposes, I would not object. I could even accept major changes to social norms (such as legalization of nonconsensual sex, to use Eliezer Yudkowsky’s example). Our confirmed point of disagreement is not your thesis that “a human population which acquired an FAI would become immensely different from today’s”, it is your thesis that “a human population which acquired an FAI would become wireheads”. Super Happy People, maybe—not wireheads.
One quality that’s relevant to Friendly AI is that it does stop, when appropriate. It’s entirely plausible (according to Eliezer; last time I checked) that a FAI would never do anything that wasn’t a response to an existential threat (i.e. something that could wipe out or severely alter humanity), if that was the course of action most in keeping with our CEV.