This raises the general issue of how to distinguish an agent that wants X and fails to get it from one that wants to avoid X.
An agent’s purpose is, in principle, quite easy to detect. That is, there are no issues of philosophy, only of practicality. Or to put that another way, it is no longer philosophy, but science, which is what philosophy that works is called.
Here is a program that can read your mind and tell you your purpose!
FWIW, I tried the program. So far it's batting 0/3.
I think it's not very well tuned. I've seen another version of the demo that was very quick to spot which perception the user was controlling. One reason this version may be slower is that it tries to make it difficult for a human onlooker to see at once which of the cartoon heads you're controlling, by keeping the general variability of the motion of each one the same. It may take 10 or 20 seconds for Mr. Burns to show up. And of course, you have to play your part in the demo as well as you can; the point of it is what happens when you do.
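For anyone curious about the mechanics: the post doesn't spell out how the program detects your purpose, but a minimal sketch is below, assuming the usual PCT-style setup in which each head's on-screen motion is the sum of your mouse movements and that head's own random disturbance. If you're holding one head still, your mouse movements must mirror the negative of that head's disturbance, so the correlation between mouse and disturbance approaches -1 for the controlled head and stays near zero for the others. All names here (`detect_controlled`, `WINDOW`, etc.) are hypothetical, not taken from the actual demo.

```python
# Hypothetical sketch of a PCT "mind reading" detector, NOT the demo's
# actual code. Assumption: each head's position = mouse + its own
# random disturbance, and the user tries to hold one head still.
import random

N_HEADS = 3
WINDOW = 600  # samples to correlate over; roughly the "10 or 20 seconds"


def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0


def detect_controlled(mouse_history, disturbance_histories):
    """Return the index of the head the user is most likely controlling.

    If the user is stabilizing head i, the mouse cancels d_i, so
    corr(mouse, d_i) is strongly negative; for uncontrolled heads the
    disturbances are independent of the mouse and correlate near zero.
    """
    scores = [correlation(mouse_history, d) for d in disturbance_histories]
    return min(range(len(scores)), key=lambda i: scores[i])


if __name__ == "__main__":
    # Simulated run: the "user" is a proportional controller trying to
    # keep head 1 at position 0 against its drifting disturbance.
    controlled = 1
    mouse, mouse_hist = 0.0, []
    dists = [random.gauss(0, 1) for _ in range(N_HEADS)]
    dist_hists = [[] for _ in range(N_HEADS)]
    for _ in range(WINDOW):
        # Disturbances drift as smoothed random walks.
        dists = [0.99 * d + random.gauss(0, 0.1) for d in dists]
        # Controlled head's position is mouse + its disturbance; the
        # user nudges the mouse to push that position back toward 0.
        error = mouse + dists[controlled]
        mouse -= 0.5 * error
        mouse_hist.append(mouse)
        for i in range(N_HEADS):
            dist_hists[i].append(dists[i])
    print("detected head:", detect_controlled(mouse_hist, dist_hists))
```

On this account, the detector needs the window of history to fill up before the correlations separate cleanly, which would explain why Mr. Burns takes a while to show up, and why it only works if you actually play your part.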
Nice demonstration.