I don’t want it maximized, I want it satisficed—and I, at least, am willing to exchange a small existential risk for a better world. “They who can give up essential liberty to obtain a little temporary safety” &c.
If the AI can search the universe and determine that it is adequately secure from existential threats, I don’t want it expanding very quickly beyond that. Leave some room for us!
But the AI has to plan for a maximized outcome until the end of the universe. In order to maximize the benefit from energy before the heat death of the universe, resource efficiency right now is as important as it will be when resources are scarcest.
That is, unless the AI discovers that heat death can be overcome, in which case, great! But what we know so far indicates that the universe will eventually die, even if many billions of years in the future. So conservative resource management is important from day 1.
There are things I could say in reply, but I suspect we are simply talking past each other. I may reply later if I have some new insight into the nature of our disagreement.
The way I understand our disagreement is this: you see FAI as a limited-functionality add-on that makes a few aspects of our lives easier, while I see it as an unstoppable force, with great implications for everything in its causal future, which can’t help but revolutionize everything, including how we feel, how we think, and what we do. I believe I’m following the chain of reasoning to the end, whereas you appear to think we can stop after the first couple of steps.
You also keep claiming to know in which particular way FAI is going to change things. Instead of repeating the same statements, you should recognise the disagreement and address it directly, rather than continuing to profess the original assertions.
I don’t think that’s the source of our disagreement. As I mentioned in another thread, if prudence demanded that the population (or some large fraction thereof) be uploaded into software to free up the material substrate for other purposes, I would not object. I could even accept major changes to social norms (such as the legalization of nonconsensual sex, to use Eliezer Yudkowsky’s example). Our confirmed point of disagreement is not your thesis that “a human population which acquired an FAI would become immensely different from today’s”; it is your thesis that “a human population which acquired an FAI would become wireheads”. Super Happy People, maybe, but not wireheads.
One quality that’s relevant to Friendly AI is that it does stop, when appropriate. It’s entirely plausible (according to Eliezer, last time I checked) that an FAI would never do anything that wasn’t a response to an existential threat (i.e. something that could wipe out or severely alter humanity), if that were the course of action most in keeping with our CEV.