How will the AI behave while it is still gathering information and computing the CEV (or any other meta-level solution)? For example, in the case of CEV, won’t it pick the most efficient method, rather than the most ethical one, to scan brains, compute the CEV, and so on?
Do we know (or need to know) what mechanism or knowledge the AI would require to approximate ethical behavior while it still doesn’t know exactly what friendliness means?
An excellent point.