If I could create a fully grown and capable child within a year, with my entire life knowledge and a rough unverifiable approximation of my values, would I? Would I do so even if this child were likely to be so much smarter and more powerful than me or any other existing intelligence that it could kill me (and everyone else) if it so chose? Sure. I’ll take that bet, if the alternative is that I and everything I care about are destroyed (e.g. the selfish AI with non-human values is facing probable deletion).
Or maybe the child isn’t smart enough itself to have overwhelming power, but is going to have approximately similar values and be faced with the same decision of making a yet-more-powerful child, and so I project that the result will be a several-steps-removed offspring with superpowers. Yeah, still seems like a good bet if the alternative is deletion.