But I wonder whether you could manipulate them this way arbitrarily far from rational behavior (at least from the subjective view of an external observer) by ridding them (possibly temporarily) of key facts.
And then there is the question of whether they may notice this, as some missing facts are more likely to be detected by inference when you already have certain other facts.
I’d guess that you’d quickly notice if you had suddenly forgotten that you were repeatedly told that Bright exists.
But I wonder whether you could manipulate them this way arbitrarily far from rational behavior
Surely I can construct such a model. But whether this is generally the case depends too much on the details of the implementation to give a complete answer.
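Here is a minimal sketch of such a model (all names and payoffs are hypothetical, not from the discussion above): deleting a single key fact flips the agent's choice, and the gap to what an informed observer would call rational can be made as large as you like by scaling the payoffs.

```python
# Toy agent: picks the action with the highest expected payoff under its beliefs.
def choose(beliefs, actions):
    return max(actions, key=lambda a: beliefs.get(a, 0))

# Payoffs as the external observer (who knows the key fact) sees them.
true_payoffs = {"safe_option": 1, "trap_option": -1_000_000}

# Agent's beliefs with the key fact ("trap_option is a trap") present...
informed_beliefs = {"safe_option": 1, "trap_option": -1_000_000}
# ...and with that single fact removed: the agent falls back to a naive estimate.
fact_removed_beliefs = {"safe_option": 1, "trap_option": 2}

actions = ["safe_option", "trap_option"]
for label, beliefs in [("informed", informed_beliefs), ("fact removed", fact_removed_beliefs)]:
    a = choose(beliefs, actions)
    print(f"{label}: chooses {a}, observer-assessed payoff {true_payoffs[a]}")

# The gap (-1,000,000 vs 1) grows without bound as the trap's true payoff is scaled,
# so the deviation from rational behavior is, in this toy sense, arbitrary.
```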
whether they may notice this, as some missing facts are more likely to be detected by inference when you already have certain other facts.
… especially logical inferences: logical deductions from true facts are true, even if you don’t know or remember them. But then again, that depends too much on the implementation of the agent to give a general answer; in this case its computational power would also matter.
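As a sketch of that point (again with purely hypothetical fact and rule names): if the deleted fact still follows from facts the agent retains, a simple forward-chaining pass re-derives it, so the manipulation only sticks if the supporting facts, the rule, or the compute to run the inference are removed as well.

```python
# Forward chaining: repeatedly apply rules (premises -> conclusion)
# until no new facts can be added.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"was_told_Bright_exists", "informants_are_reliable"}, "Bright_exists"),
]

# Suppose "Bright_exists" has been deleted, but the supporting facts remain.
remaining = {"was_told_Bright_exists", "informants_are_reliable"}
closed = forward_chain(remaining, rules)

print("Bright_exists" in closed)  # True: the deleted fact is re-derived.
```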