It seems to me that what matters is not only Quirrell’s motivations, but also how much smarter he is.
Right, I’m assuming insanely much.
Certainly that question has an answer, and the answer isn’t complete certainty. That said… I’m not sure allowing any sort of reasoning with uncertainty over future modifications helps anything here. In fact, I think it makes the problem harder.
I agree completely. The real problem we have to solve is much harder than the toy scenario in this post; the point of the toy scenario was to help focus on one particular aspect of the problem.