I’m less convinced the results will be good compared to obvious alternatives like not instantiating anyone as an em.
Not building an AI at all is not seen by MIRI as an obvious alternative. That seems like an uneven playing field.
Why would you expect not to see value drift in the face of a radical change in environment, available power, and thinking speed?
I don’t require that the only acceptable level of value drift be zero, since I am not proposing giving an em absolute power. I am talking about giving human-level (or incrementally smarter) ems human-style (ditto) jobs. That being the case, human-style levels of drift will not make things worse.
Once you entrust the em with large but less than absolute power, how do you plan to keep its power less than absolute?
We have ways of removing humans from office. Why would that be a novel, qualitatively different problem in the case of an em that is 10% or 5% or 1% smarter than a smart human?