Super smart people are ten a penny. But for every genius working to make AGI safer, there are ten working to bring AGI sooner. Adding more intelligent people to the mix is just as likely to harm as to help.
More concretely, if we were to clone Paul Christiano, what's the chance the clone would work on AGI safety research? What's the chance it would work on something neutral? What's the chance it would work on something counterproductive?
And how much would it cost?
Seems like it would be a much better use of resources to offer existing brilliant AI researchers million-dollar-a-year salaries to work on AGI safety specifically.
You ask a number of good questions here, but the crucial point to me is that they are still questions. I agree it seems, based on my intuitions of the answers, like this isn't the best path. But 'how much would it cost?' and 'what's the chance a clone works on something counterproductive?' are, to me, not arguments against cloning, but rather arguments for working out how to answer those questions.
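To make that concrete, here is a minimal sketch of the kind of expected-value model those questions would feed into. Every probability and cost figure below is a hypothetical placeholder, not an estimate anyone in this thread has made:

```python
# Back-of-envelope expected-value model for the cloning question.
# All numbers are hypothetical placeholders, not real estimates.

p_safety = 0.05             # chance the clone works on AGI safety
p_neutral = 0.85            # chance they work on something neutral
p_counterproductive = 0.10  # chance they work on something counterproductive

value_safety = 1.0              # value of one safety researcher (arbitrary units)
value_counterproductive = -1.0  # disvalue of one capabilities researcher
cost_clone = 0.2                # cost of the cloning program, same units

ev_clone = (
    p_safety * value_safety
    + p_counterproductive * value_counterproductive
    - cost_clone
)
print(f"Expected value of one clone: {ev_clone:+.2f}")

# Compare against the parent comment's alternative: paying an existing
# researcher a large salary to switch to safety work.
p_switch = 0.5     # hypothetical chance the offer actually changes their work
cost_salary = 0.1  # cost of the offer, same arbitrary units
ev_salary = p_switch * value_safety - cost_salary
print(f"Expected value of one salary offer: {ev_salary:+.2f}")
```

The point of the sketch is only that the comparison turns entirely on those placeholder numbers, which is exactly why they seem worth pinning down before dismissing either option.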
Also very ironic if we can’t even align clones and that’s what gets us.
This seems like the sort of thing that would be expensive to investigate, that has low potential upside, and where just investigating would have enormous negatives (think loss of weirdness points, and potential for scandal).
From this study from 1993:

The authors administered inventories of vocational and recreational interests and talents to 924 pairs of twins who had been reared together and to 92 pairs separated in infancy and reared apart. Factor analysis of all 291 items yielded 39 identifiable factors and 11 superfactors. The data indicated that about 50% of interests variance (about two thirds of the stable variance) was associated with genetic variation.

It should be noted that these sorts of studies likely underestimate heritability, due to measurement error. See e.g. this for more info.
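As a quick consistency check on those two figures (assuming both refer to the same genetic variance component): if genetic variation accounts for 50% of total interest variance and for two thirds of the stable variance, the implied stable share of total variance is

$$\text{stable fraction of total variance} \approx \frac{0.50}{2/3} = 0.75$$

i.e. roughly a quarter of the measured variance in interests is unstable noise.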
At a guess, reducing interest variance to a single number is inappropriate. For example, I imagine the correlation between twins both liking maths is much higher than between them both being interested in a specific branch of maths.
In this particular case I think the clone is far more likely to be interested in AI or philanthropy in general than in the particular intersection of the two that is AI safety research.
I hope someone has taken seriously the idea of just paying top researchers a million a year to work on safety instead of capabilities. The last several times the 'pay for top talent' suggestion was made in the context of EA in general, very unconvincing excuses were given for not doing so.