It’s a minus if you’re trying to convince someone more results-oriented to keep giving you R&D funding. Imagine the budget meeting:
"The EPA is breathing down our necks about venting a billion dollars' worth of antimatter, you've learned literally nothing, and you consider that a good outcome?"
If the AI is indifferent to future outcomes, what stops it from manipulating those outcomes in whatever way is convenient for its other goals?
That’s a plus, not a minus.
We can also use utility indifference (or something analogous) to get some useful info out.
Indifference means that the AI cannot value any change to that particular outcome, so manipulating it gains the AI nothing. More details at: http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0020/18371/2010-1.pdf
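Concretely (a minimal sketch, not the paper's formalism): the idea is to add a compensating constant to the utility of one branch of the event, so that the agent's conditional expected utility is identical on both sides. The branch names and numbers below are purely illustrative.

```python
# Toy sketch of utility indifference over a single binary event
# (e.g. "shutdown button pressed" vs. "not pressed"). All values
# here are made up for illustration.

def expected_utility(lottery):
    """Expected utility of a list of (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

# Conditional lotteries for each branch of the event.
pressed     = [(0.5, 0.0), (0.5, 2.0)]    # EU = 1.0
not_pressed = [(0.5, 9.0), (0.5, 11.0)]   # EU = 10.0

# The compensating constant: the gap between the two branches'
# conditional expected utilities.
bonus = expected_utility(not_pressed) - expected_utility(pressed)

# Add the bonus to every outcome in the 'pressed' branch.
compensated = [(p, u + bonus) for p, u in pressed]

# Both branches now have equal conditional expected utility, so the
# agent gains nothing by shifting probability mass between them:
# it has no incentive to cause or prevent the button press.
assert expected_utility(compensated) == expected_utility(not_pressed)
```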