Taking Clones Seriously

What this is not: an exhortation to dig up Von Neumann and clone him
What this is: an exhortation to think seriously about how to decide whether to dig up Von Neumann and clone him
Having tried to clarify that I am not presently advocating for cloning, and am far more concerned with the meta level than the object level, let me now sink to the object level.
The argument for cloning
Suppose everything Eliezer Yudkowsky said in his recent discussions on Artificial General Intelligence (AGI) is entirely literally true, and we take it seriously.
Then right now we are a few decades (or perhaps years) away from the development of an unaligned AGI, which will swiftly conclude that humans are an unnecessary risk for whatever it wants.
The odds are not great, and it will take a miracle scientific breakthrough to save us. The best we can do is push as hard as we can to increase the probability of that miracle breakthrough and our ability to make use of it.
The claim I really want to draw attention to is this particular chestnut:
Paul Christiano is trying to have real foundational ideas, and they’re all wrong, but he’s one of the few people trying to have foundational ideas at all; if we had another 10 of him, something might go right.
Well, why not? 10 of him, 10 Von Neumanns, and an extra Jaynes or two just in case. If we have a few decades, we have just enough time.
Leave aside all the obvious ways this could go horribly wrong, the risks and the costs: if what Eliezer says is true, it is conceivable that cloning could save the world.
The argument for the argument about clones
I am not saying this is a good idea, but I am saying it could be a good idea. And if it is a good idea, it would be really great to know that.
There are obvious problems, but I don’t feel well-placed to judge how high the benefits might be, or how easily the problems might be resolved. And I am sufficiently uncertain in this regard that I think it might be valuable to become more certain.
A modest proposal?
I think OpenPhil, or somebody else in that sphere, should fund or conduct an investigation into cloning. Most basically, how much would it actually cost to clone somebody, and how much impact could that potentially have?
It may become immediately clear that this is prohibitively expensive relative to simply hiring more of the very smartest people in the world. But if the limiting factor is not money, then I think it would become worthwhile to ask: what actually are the ethical ramifications of cloning? Could it be done at all?
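The cost-versus-impact question above is, at bottom, an expected-value calculation. Here is a minimal sketch of that framing in Python; every probability and figure below is a made-up placeholder for illustration, not a real estimate of anything:

```python
# Hypothetical Fermi-style framing of "is the investigation worth funding?"
# All numeric inputs are placeholder assumptions, not real figures.

def expected_value_of_investigation(
    p_viable: float,        # chance the study finds cloning feasible and permissible
    p_high_impact: float,   # chance a viable program meaningfully improves our odds
    impact_value: float,    # value of that improvement, in arbitrary units
    study_cost: float,      # cost of the preliminary study, same units
) -> float:
    """Expected net value of funding the preliminary investigation."""
    upside = p_viable * p_high_impact * impact_value
    return upside - study_cost

# Purely illustrative inputs: even small probabilities can justify a cheap study
# when the stakes are large, which is the argument's core structure.
ev = expected_value_of_investigation(0.05, 0.10, 1_000_000, 500)
print(ev)
```

The point of the sketch is only structural: the case for the investigation does not require believing cloning is likely to work, just that the study is cheap relative to the stakes times the (small) probabilities.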
I would like to think that sufficiently neutral and exploratory research could be done in public, but it is conceivable that this must be conducted in secret.
As I see it, the most obvious types of arguments against this are:
somehow it is just obviously the case that, no matter what, this couldn’t conceivably turn out to be high impact (which I find quite unlikely)
the opportunity cost of even investigating this is too high (which would surprise me but I would begrudgingly accept)
such a study already exists, in secret (in which case I apologise)
such a study already exists, in public (in which case I apologise even more, and would love to be directed to it)