Ah, sorry—I missed the acausal assumption in the post. I generally ignore such explorations, as I don’t think “decision” is the right word without causality and conditional probability.
I think you’re right that cognitive transparency is a crux. I strongly doubt it can be mutual, or that it’s possible between agents near each other in cognitive power. It may be possible for a hyperintelligence to understand/predict a human-level intelligence, but in that case the human is so outclassed that “trade” is the wrong word, and “manipulation” or “slavery” (or maybe “absorption”) is a better model.
You don’t have to be able to simulate something to trust it in a given respect. E.g., the specification of AlphaZero is much simpler than its final weights, and knowing its training process, without knowing its weights, you can still trust that it will never, say, take a bribe to throw a match. Even if it comprehended bribery, we know from its specification that it’s solely interested in winning whatever match it’s currently playing, and no sum would be enough.
To generalize: if we know something’s utility function and we know its design is robust, then even knowing nothing else about its history, we know what it’ll do.
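To make that concrete, here’s a minimal toy sketch (all names invented; nothing here reflects AlphaZero’s actual code): if an agent is a robust maximizer of a known utility function, its choice follows from the specification alone, without ever inspecting its internals.

```python
# Minimal sketch: predicting a robust maximizer from its known utility
# function, without looking at its "weights". All names are hypothetical.

def utility(outcome):
    """Hypothetical utility: cares only about winning the current match."""
    return 1.0 if outcome["wins_match"] else 0.0

def choose(actions, predict_outcome):
    """A robust maximizer: picks the action whose predicted outcome
    scores highest under the known utility function."""
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# Toy decision: play to win, or throw the match for a bribe.
actions = ["play_to_win", "throw_match_for_bribe"]

def predict_outcome(action):
    if action == "play_to_win":
        return {"wins_match": True, "money": 0}
    else:
        return {"wins_match": False, "money": 10**9}

# The bribe never enters the utility function, so the choice is
# predictable from the specification alone:
assert choose(actions, predict_outcome) == "play_to_win"
```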
A promise-keeping capacity is a property utility functions can have.
Yeah, definitely cruxy. It may be a property that utility functions can have, but it’s not a property that any necessarily do have. Moreover, we have zero examples of robustly designed agents with known utility functions, so it’s extremely unclear whether that will become the norm, let alone the universal assumption.