I suspect that where you wrote “a different branch of which it would use in each iteration of the conversation,” you meant “a randomly selected branch of which.” Though actually I’d expect it to pick the same branch each time, since the reasons for picking that branch would basically be the same.
I didn’t mean that, but I would be interested in hearing what generated that response. I disown my previous conversation-tree model; it’s unnecessarily complex, and imagining the responses as a set is more general. I had been considering possible objections to what I said, including the objection that such a set of responses might not exist. More generally than either of my previous models, it seems to me that there is no reason, in principle, that a sufficiently intelligent uFAI could not simply solve FAI, simulate an FAI in its own situation, and do what that FAI would do. If that strategy fails to fool the test, then even a genuine FAI would fail a test of sufficient duration.
I agree that it’s possible that humans could be used as unwitting storage media. It seems to me that this could be prevented by using a new human in each iteration. I spoke of an individual human, but it seems to me that my models could apply to situations with multiple interrogators.