Do you have the link for that, or at least the keywords? I assume Bostrom called it something else.
See this 1998 discussion between Eliezer and Nick. Some relevant quotes from the thread:
Nick: For example, if it is morally preferred that the people who are currently alive get the chance to survive into the postsingularity world, then we would have to take this desideratum into account when deciding when and how hard to push for the singularity.
Eliezer: Not at all! If that is really and truly and objectively the moral thing to do, then we can rely on the Post-Singularity Entities to be bound by the same reasoning. If the reasoning is wrong, the PSEs won’t be bound by it. If the PSEs aren’t bound by morality, we have a REAL problem, but I don’t see any way of finding this out short of trying it.
Nick: Indeed. And this is another point where I seem to disagree with you. I am not at all certain that being superintelligent implies being moral. Certainly there are very intelligent humans that are also very wicked; I don’t see why once you pass a certain threshold of intelligence then it is no longer possible to be morally bad. What I might agree with, is that once you are sufficiently intelligent then you should be able to recognize what’s good and what’s bad. But whether you are motivated to act in accordance with these moral convictions is a different question.
Eliezer: Do you really know all the logical consequences of placing a large value on human survival? Would you care to define “human” for me? Oops! Thanks to your overly rigid definition, you will live for billions and trillions and googolplexes of years, prohibited from uploading, prohibited even from ameliorating your own boredom, endlessly screaming, until the soul burns out of your mind, after which you will continue to scream.
Nick: I think the risk of this happening is pretty slim and it can be made smaller through building smart safeguards into the moral system. For example, rather than rigidly prescribing a certain treatment for humans, we could add a clause allowing for democratic decisions by humans or human descendants to overrule other laws. I bet you could think of some good safety-measures if you put your mind to it.
Nick: How to control a superintelligence? An interesting topic. I hope to write a paper on that during the Christmas holiday. [Unfortunately it looks like this paper was never written?]
He used “control”, which is apparently still his preferred word for the problem today, as in “AI control”.
This is fascinating, thank you! It feels like, while Nick is pointing in the right direction and Eliezer in the wrong direction here, this is from a time before either of them had the insights that bring us to seeing the problem in anything like the way we see it today. Large strides had been made by the time CFAI was published three years later, but as Eliezer tells it in his “coming of age” story, his “naturalistic awakening” isn’t until another couple of years after that.
Also, remember Eliezer was only 20 years old at this time. I am the same age and had just started college then in ’98. Bostrom was 25.
I find this interesting in particular:
For example, rather than rigidly prescribing a certain treatment for humans, we could add a clause allowing for democratic decisions by humans or human descendants to overrule other laws. I bet you could think of some good safety-measures if you put your mind to it.
They could be talking about a new government, rather than an AI.
Actually 19!
For those who haven’t been around as long as Wei Dai…
Eliezer tells the story of coming around to a more Bostromian view, circa 2003, in his coming of age sequence.
Nick, in turn, very regularly and explicitly credits the role that Eliezer’s work and discussions with Eliezer have played in his own research and thinking over the course of FHI’s work on AI safety.