In none of those cases can (or should) the power differential be removed.
I agree—in any situation where a higher-power individual feels that they have a duty to care for the wellbeing of a lower-power individual, “removing the power differential” ends up meaning abandoning that duty.
However, on the question of consent specifically, I think it’s reasonable for a higher-power individual to create the best model they can of the lower-power individual, and to update that model diligently whenever new information shows it has predicted the subject imperfectly. Having the more-powerful party consider what they’d want if they were in the exact situation of the less-powerful party (including having all the same preferences, experiences, etc.) creates what I’d consider a maximally fair negotiation.
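To make the “model and update on prediction error” idea a bit more concrete, here is a minimal Python sketch. Everything in it is invented for illustration, assuming a toy representation where the subject’s preferences are just numbers per topic; it is not a claim about how such modeling would actually be done.

```python
# A minimal sketch of "model the subject, update on prediction error".
# SubjectModel, learning_rate, and the preference scores are all hypothetical.

from dataclasses import dataclass, field

@dataclass
class SubjectModel:
    """The higher-power party's best-effort model of the lower-power party."""
    # Estimated strength of each preference, e.g. {"autonomy": 0.8}.
    preferences: dict[str, float] = field(default_factory=dict)
    learning_rate: float = 0.3

    def predict(self, topic: str) -> float:
        """Predict how strongly the subject cares about this topic."""
        return self.preferences.get(topic, 0.0)

    def update(self, topic: str, observed: float) -> None:
        """Nudge the model toward what the subject actually revealed."""
        predicted = self.predict(topic)
        error = observed - predicted
        self.preferences[topic] = predicted + self.learning_rate * error

# Each time the subject's behavior reveals a preference the model mispredicted,
# fold that information back in.
model = SubjectModel()
model.update("autonomy", observed=0.9)   # model had predicted 0.0
model.update("autonomy", observed=0.9)   # prediction error shrinks each time
print(model.predict("autonomy"))         # ~0.46 after two updates
```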
when another entity is smarter and more powerful than me, how do I want it to think of “for my own good”?
I would want a superintelligence to imagine that it was me, as accurately as it could, and update that model of me whenever my behavior deviates from the model. I’d then like it to run that model at an equivalent scale and power to itself (or a model of itself, if we’re doing this on the cheap) and let us negotiate as equals. To me, equality feels like a good-faith conversation of “here’s what I want, what do you want, how can we get as close as possible to maximizing both?”, and I want the chance to propose ways of accomplishing the superintelligence’s goals that are maximally compatible with me also accomplishing my own.
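As a toy illustration of “how can we get as close as possible to maximizing both”, here is one way the negotiation step could be formalized, assuming the parties can score candidate plans. The plans, utility numbers, and the Nash-product scoring rule are all invented for the sketch; nothing above commits to any particular aggregation rule.

```python
# A toy sketch of "negotiate as equals": score candidate plans by how well
# they serve both parties and pick the one that does best jointly.
# The plan names and utility values are made up.

candidate_plans = {
    "plan_a": {"superintelligence": 0.9, "me": 0.1},
    "plan_b": {"superintelligence": 0.7, "me": 0.6},
    "plan_c": {"superintelligence": 0.4, "me": 0.9},
}

def joint_score(utilities: dict[str, float]) -> float:
    """Nash-bargaining-style product: high only when *both* parties do well."""
    score = 1.0
    for u in utilities.values():
        score *= u
    return score

best = max(candidate_plans, key=lambda name: joint_score(candidate_plans[name]))
print(best)  # plan_b: 0.42 beats plan_a's 0.09 and plan_c's 0.36
```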
Then again, the concept of a superintelligence focusing solely on what’s for my individual good kind of grosses me out. I prefer the idea of it optimizing for a lot of simultaneous goods—the universe, the species, the neighborhood, the individual—and explaining who else’s good won and why if I inquire about why my individual good wasn’t the top priority in a given situation.
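A rough sketch of what “optimizing for a lot of simultaneous goods, and being able to say whose good won” could look like, with made-up levels, weights, and scores:

```python
# Hypothetical illustration only: the levels, weights, and action scores
# are placeholders, not a proposal for how such trade-offs should be set.

def evaluate(action_scores: dict[str, float], weights: dict[str, float]):
    """Return the weighted total plus the level that contributed most."""
    contributions = {
        level: weights[level] * score for level, score in action_scores.items()
    }
    winner = max(contributions, key=contributions.get)
    return sum(contributions.values()), winner

# How much a proposed action helps each level of "good" (scores in -1 .. 1).
action = {"universe": 0.1, "species": 0.8, "neighborhood": 0.2, "individual": -0.3}
weights = {"universe": 0.4, "species": 0.3, "neighborhood": 0.2, "individual": 0.1}

total, winner = evaluate(action, weights)
print(f"total={total:.2f}, the '{winner}' good dominated this decision")
```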
I think it’s reasonable for a higher-power individual to create the best model they can of the lower-power individual, and to update that model diligently whenever new information shows it has predicted the subject imperfectly
I think that’s reasonable too, but for moral/legal discussions, “reasonable” is a difficult standard to apply. The majority of humans are unreasonable on at least some dimensions, and a lot of humans are incapable of modeling others particularly well. And there are a lot of humans who are VERY hard to model, because they really aren’t motivated the way we expect they “should” be, and “what they want” is highly indeterminate. Young children very often fall into this category.
What’s the minimum amount of fidelity a model should have before abandonment is preferred? I don’t know.
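For what it’s worth, here is one hypothetical way to make the fidelity question measurable: track how often the model’s recent predictions matched the subject’s actual choices and decline to act paternalistically below some cutoff. The window size and threshold are arbitrary placeholders; this sketch makes the question concrete but doesn’t answer where the cutoff should sit.

```python
# A sketch that operationalizes "minimum fidelity" as recent prediction
# accuracy. The 0.7 threshold is arbitrary and is exactly the open question.

from collections import deque

class FidelityTracker:
    def __init__(self, window: int = 20, threshold: float = 0.7):
        self.outcomes = deque(maxlen=window)  # 1 = prediction matched, 0 = missed
        self.threshold = threshold

    def record(self, prediction_matched: bool) -> None:
        self.outcomes.append(1 if prediction_matched else 0)

    def fidelity(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def may_act_for_subjects_good(self) -> bool:
        return self.fidelity() >= self.threshold

tracker = FidelityTracker()
for matched in [True, True, False, True]:
    tracker.record(matched)
print(tracker.fidelity(), tracker.may_act_for_subjects_good())  # 0.75 True
```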