Extra clarification: in this example, I’m assuming that we don’t observe the AI, and that we are very unlikely to detect the paperclip. How to get useful work out of the AI is the next challenge, if this model holds up.
I’m pretty sure this model is inherently a dead end for any useful applications. Even without gratuitous antimatter, a sufficiently smart AI trying to minimize its future impact will put its affairs in order and then self-destruct in some low-collateral-damage way that prevents anything interesting from being learned by analysis of the remains.
That’s a plus, not a minus.
We can also use utility indifference (or something analogous) to get some useful info out.
It’s a minus if you’re trying to convince someone more results-oriented to keep giving you R&D funding. Imagine the budget meeting:
“The EPA is breathing down our necks about venting a billion dollars’ worth of antimatter, you’ve learned literally nothing, and you consider that a good outcome?”
If the AI is indifferent to future outcomes, what stops it from manipulating those outcomes in whatever way is convenient for its other goals?
Indifference means that it cannot value any change to that particular outcome. More details at: http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0020/18371/2010-1.pdf
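Concretely, here’s a toy sketch of how that indifference can be arranged (my own illustrative outcome names, probabilities, and utilities, not the formalism of the linked paper): add a compensating constant to the agent’s utility on the branch where the event occurs, sized so that expected utility is identical whether or not the event happens. The agent then has nothing to gain by steering that outcome either way.

```python
# Toy model of utility indifference. The outcomes, probabilities, and
# utilities below are invented for illustration; the linked paper gives
# the general construction.

# An event E (say, a shutdown switch firing) partitions the agent's
# futures. The agent has a base utility over outcomes and beliefs about
# which outcomes follow E vs. not-E.
p_given_event    = {"shutdown_clean": 0.9, "shutdown_messy": 0.1}
p_given_no_event = {"paperclips": 0.5, "idle": 0.5}
utility = {"shutdown_clean": 0.0, "shutdown_messy": -5.0,
           "paperclips": 10.0, "idle": 1.0}

def expected_utility(dist, u):
    return sum(p * u[o] for o, p in dist.items())

# Compensating constant: added to every outcome in which E occurs,
# chosen so expected utility is equal on both branches.
C = (expected_utility(p_given_no_event, utility)
     - expected_utility(p_given_event, utility))
adjusted = {o: u + (C if o in p_given_event else 0.0)
            for o, u in utility.items()}

# With the adjustment the agent is indifferent to whether E occurs,
# so it cannot value any change to that particular outcome.
assert abs(expected_utility(p_given_event, adjusted)
           - expected_utility(p_given_no_event, adjusted)) < 1e-9
```

One caveat worth flagging: in the real construction the compensation has to track the agent’s conditional expectations as they shift with its actions, not a single constant computed once; the fixed C here is a simplification.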