The model is this: assume that if an AI is created, it’s because one researcher, chosen at random from the pool of all researchers, has the key insight; and humanity survives if and only if that researcher is careful and takes safety seriously.
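To make that model concrete: if the researcher with the key insight is drawn uniformly at random, then the probability that humanity survives is simply the fraction of researchers who are careful. A minimal sketch, with made-up figures that are illustrative only:

```python
import random

def survival_probability(num_researchers: int, num_careful: int) -> float:
    """Exact probability under the uniform-random-researcher model."""
    return num_careful / num_researchers

def simulate(num_researchers: int, num_careful: int, trials: int = 100_000) -> float:
    """Monte Carlo check: draw the key-insight researcher uniformly at random."""
    survived = sum(
        random.randrange(num_researchers) < num_careful  # researchers 0..num_careful-1 are the careful ones
        for _ in range(trials)
    )
    return survived / trials

if __name__ == "__main__":
    # Hypothetical figures, purely illustrative: 10,000 researchers, 300 of whom take safety seriously.
    print(survival_probability(10_000, 300))  # 0.03
    print(simulate(10_000, 300))              # roughly 0.03
```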
I contest this use of the term “safety”. If your goal is for humanity to survive, say that your goal is for humanity to survive. Not to “promote safety”.
“Safety” means avoiding certain bad outcomes. By using the word “safety”, you’re trying to sneak past us the assumption “humans remaining the dominant lifeform = good, humans not remaining dominant = bad”.
The argument should be over what humans have that is valuable, and how we can contribute that to the future. Not over how humans can survive.
What is value? What things are valuable, and what are not?
Everything that we know about value, everything that we can know, is encoded within the current state of humanity.
As long as that knowledge remains, there is hope for the Best Possible Future. It may be a future that includes no humans, but it will be a future based on that knowledge.
If that knowledge is destroyed, or loses its influence because it is no longer carried by the dominant life form, then the future will be, morally, chaos: as likely to eat babies as to love them.
To figure out how we can contribute to the future, what should replace us, and so on, takes time. Time we do not have if we do not focus on safety first.
Well, our distant descendants, whether uploads or cyborgs or other life-forms, could be considered part of “generalized humanity”, as long as they retain what humans have that is valuable.
And regardless, we certainly want current humanity (that is, all the people alive now) to survive, in the sense of not being killed by the AI.
My point being, it’s not necessarily right to take “the survival of humanity” to mean that we have to retain this physical form, and I don’t think the OP was using the words in that sense.
“Safety” means avoiding certain bad outcomes. By using the word “safety”, you’re trying to sneak past us the assumption “humans remaining the dominant lifeform = good, humans not remaining dominant = bad”.
The argument should be over what humans have that is valuable, and how we can contribute that to the future. Not over how humans can survive.
Agreed. People seem to get hold of the idea that humans are good and machines are bad, and then slide into an us-versus-them mindset. Surely all the best possible futures involve an engineered world, one where the agony of being a meat-brained human cobbled together by natural selection is mostly a distant memory.
But we have to keep humans around until they are capable of engineering that world carefully, without screwing it up. If we don't engineer it, who will?
Right. There are pretty good instrumental reasons for all the parties concerned to do that. Humans may also be useful for a while for rebooting the system if there is a major setback: they have successfully booted things up once already, and other backup systems are likely to be less well tested.