Which is exactly “refining the AI’s values through interactions with human decision makers, who answer questions about edge cases and examples and serve as ‘learned judges’ for the AI’s concepts”.
Hmm, I read the original post rather quickly, so I actually missed that the analogy was supposed to map to value loading. I mistakenly assumed it was about how to ban or regulate AGI while still allowing narrower AI.