If I can find humans who are properly motivated, then I can produce uploads who are also motivated to work on the design of FAI.
It might be much easier to clone Yudkowsky a hundred times within the next 10 years, make them all read the Sequences at some point, and make each one focus on a different FAI-related problem. By ~2040 we could have a hundred Yudkowskys working on FAI.
Why that route might be better than uploading:
It is feasible with current technology.
We already know that Yudkowsky is friendly and works.
There are no direct existential risks associated with cloning humans (only indirect ones).
Overall I like this idea: it’s at the very least amusing, and it’s much harder to think of ways in which it’s dangerous.
“We already know that Yudkowsky is friendly and works.”

We know that the Yudkowsky who experienced the life he went through is friendly; I’m not so sure about 100 Yudkowskys raised differently. Never having met him, I won’t assume that there’s no possible way he could go wrong.
Election promises.
Provably friendly?
Probably too late, IMO. Eyeballing my graph, I have maybe 15% probability mass out there.