There are things I could say in reply, but I suspect we are simply talking past each other. I may reply later if I have some new insight into the nature of our disagreement.
As I understand our disagreement, you see FAI as a limited-functionality add-on that makes a few aspects of our lives easier, while I see it as an unstoppable force, with great implications for everything in its causal future, which cannot help but revolutionize everything, including how we feel, how we think, and what we do. I believe I'm following the chain of reasoning to its end, whereas you appear to think we can stop after the first couple of steps.
You also keep claiming to know in which particular way FAI is going to change things. Rather than repeating the same statements, you should recognise the disagreement and address it directly, instead of continuing to profess the original assertions.
I don’t think that’s the source of our disagreement. As I mentioned in another thread, if prudence demanded that the population (or some large fraction thereof) be uploaded into software to free up the material substrate for other purposes, I would not object. I could even accept major changes to social norms (such as legalization of nonconsensual sex, to use Eliezer Yudkowsky’s example). Our confirmed point of disagreement is not your thesis that “a human population which acquired an FAI would become immensely different from today’s”; it is your thesis that “a human population which acquired an FAI would become wireheads”. Super Happy People, maybe, but not wireheads.
One quality that’s relevant to Friendly AI is that it does stop when appropriate. It’s entirely plausible (according to Eliezer, last time I checked) that an FAI would never do anything that wasn’t a response to an existential threat (i.e. something that could wipe out or severely alter humanity), if that were the course of action most in keeping with our CEV.