Very cool, will take a look. This basically solves question 1. It seems the original Solomonoff work isn’t published anywhere. By the way, the author, William H. Press, is a real polymath! I am curious whether there is any extension of this work to agents with finite memory. As an example: the same situation where you’re screening a large number of people, but now you have a memory that can store N results of prior screenings for reference. I’m going to look into it.
Seems like a memory version would be identical, just with a smaller n after subtracting the individuals you’ve already cleared. Once you fill up your memory with cleared individuals, why would you ever want to ‘forget’ them? By stipulation, you learn nothing about other individuals or the population, only about the ones you look at. If you forget a cleared individual to make room for a new result, that effectively makes n bigger: you’ve flushed back into the pool the only individuals you knew for sure you never want to sample again (because they are clear), so you may now waste a sample re-testing them while gaining nothing. And once you remove the cleared individuals from the population via your memory, you’re back to the solved memoryless problem and have to square-root it.
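The forgetting penalty is easy to see in a quick Monte Carlo sketch. This is a hypothetical setup of my own (uniform prior, one guilty individual, uniform sampling among non-remembered people), not something from the paper, but it shows why discarding cleared results can only hurt:

```python
import random

random.seed(1)

def screenings_to_find(n, memory_size, trials=20000):
    # One guilty individual hidden uniformly among n people.
    # Each screening picks uniformly among individuals NOT in memory;
    # cleared individuals are remembered until memory fills, after
    # which new results are discarded. memory_size = 0 is the
    # fully forgetful (with-replacement) regime.
    total = 0
    for _ in range(trials):
        guilty = random.randrange(n)
        memory = set()
        steps = 0
        while True:
            candidates = [i for i in range(n) if i not in memory]
            pick = random.choice(candidates)
            steps += 1
            if pick == guilty:
                break
            if len(memory) < memory_size:
                memory.add(pick)
        total += steps
    return total / trials

n = 20
no_memory = screenings_to_find(n, 0)        # resample freely; mean ~ n
full_memory = screenings_to_find(n, n - 1)  # never retest cleared people; mean ~ (n+1)/2
print(f"no memory:   {no_memory:.2f} expected screenings")
print(f"full memory: {full_memory:.2f} expected screenings")
```

With no memory the search is geometric with mean n; with full memory it is sampling without replacement with mean (n+1)/2, so memory roughly halves the cost and any forgetting interpolates toward the worse end.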
For your second question, this paper describes the square root law, though in a somewhat different setting: Press, “Strong profiling is not mathematically optimal for discovering rare malfeasors” (PNAS). (Incidentally, a friend of mine once used this in an argument against stop-and-frisk.)
It doesn’t give a complete proof, though it describes it as a “straightforward minimization with a Lagrange multiplier”.
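For what it’s worth, that minimization is easy to check numerically. Assuming the setup as I understand it from the paper (sampling with replacement, prior probability p_i that individual i is the malfeasor, sampling probabilities q_i, expected number of screenings \(\sum_i p_i/q_i\); the variable names below are mine), the Lagrange condition gives \(q_i \propto \sqrt{p_i}\), with minimum cost \((\sum_i \sqrt{p_i})^2\):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical skewed prior over n individuals, for illustration.
n = 5
p = rng.dirichlet(np.ones(n) * 0.5)

def expected_screenings(q, p):
    # With replacement, individual i is drawn with prob q_i, so the
    # malfeasor (individual i with prob p_i) is found after a geometric
    # number of draws with mean 1/q_i; total expectation is sum p_i/q_i.
    return np.sum(p / q)

# Square-root law: q_i proportional to sqrt(p_i).
q_sqrt = np.sqrt(p) / np.sum(np.sqrt(p))

# Compare against strong profiling (q = p) and uniform sampling.
for name, q in [("sqrt(p)", q_sqrt),
                ("p (strong profiling)", p),
                ("uniform", np.ones(n) / n)]:
    print(f"{name:22s} E[screenings] = {expected_screenings(q, p):.3f}")

# Random allocations never beat the square-root one.
best = expected_screenings(q_sqrt, p)
for _ in range(1000):
    q = rng.dirichlet(np.ones(n))
    assert expected_screenings(q, p) >= best - 1e-9
```

The closed-form minimum \((\sum_i \sqrt{p_i})^2\) is what the perturbation loop is checking against; strong profiling (q = p) costs exactly n expected screenings regardless of the prior, which is the paper’s point.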