Eliezer, you may not feel ready to be a father to a sentient AI, but do you agree that many humans are sufficiently ready to be fathers and mothers to ordinary human kids? Or do you think humans should stop procreating, for the sake of not creating beings that can suffer? Why care more about the future suffering of a not-yet-existing AI than about the future suffering of not-yet-existing human kids?
From a utilitarian perspective, allowing suffering to occur for some initial years in an AI that we build is a low price to pay for making possible the utopia that a future AI may then become able to build.
Eliezer, at some points you talk of our ethical obligations to AI as if you believed in rights, but elsewhere you have said you consider yourself an average utilitarian. Which is it? If you believe in rights only in a utilitarianism-derived sense, don't you think the "rights" of some initial AI can, for a time, be rightfully sacrificed for the utilitarian sake of minimising Existential Risk, provided that doing so would indeed minimise Existential Risk? (Much as the risk of collateral damage, in the form of killed and wounded civilians, should be accepted in some wars; for example, "the good guys" had to kill some innocents in the process of fighting Hitler.)
Isn't the fundamentally more important question rather this: which can be expected to minimise Existential Risk more, creating sentient AI or creating nonsentient AI?