It’s all a matter of risk aversion, which no matter how I slice it feels kind of like a terminal value to me. An agent that accepted exactly zero risk would be paralysed. An agent that accepts some risk can make mistakes; the less risk averse it is, the bigger the potential mistakes. Part of aligning an AI is determining how risk averse it should be.