Very cool, will take a look. This basically solves question 1. It seems the original Solomonoff work isn’t published anywhere. By the way, the author, William H. Press, is a real polymath! I am curious whether there is any extension of this work to agents with finite memory. As an example: the same situation where you’re screening a large number of people, but now you have a memory where you can store the results of N prior screenings for reference. I’m going to look into it.
Seems like a memory version would be identical, just with a smaller n after subtracting the individuals you’ve screened. Once you fill up your memory with cleared individuals, why would you ever want to ‘forget’ them? By stipulation, you learn nothing about other individuals or the population, only about the ones you look at. Forgetting them to make room for a new result de facto makes n bigger again and worsens your odds: you’ve flushed back into the pool the only individuals you knew for sure you never want to sample again (because they are clear), so now you may waste a sample re-testing them while gaining nothing. And once you remove the remembered individuals from the population, you’re back to the solved memoryless problem and have to square-root it.
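Here’s a quick Monte Carlo sketch of that reduction (my own toy setup, not from Press’s paper): one malfeasor drawn from a skewed prior, square-root-biased sampling as in the memoryless rule, and a memory that simply holds up to N cleared individuals and excludes them from the pool. The population size, memory sizes, prior, and trial count are all arbitrary choices for illustration.

```python
"""Toy simulation: memoryless square-root-biased screening vs. the same
screening with a memory of N cleared individuals who are never re-sampled."""
import random

def screens_until_caught(n, priors, memory_size, rng):
    """Screen one individual per step until the single malfeasor is found.

    Sampling is square-root biased in the prior. Up to `memory_size` cleared
    individuals are remembered and excluded from the pool; once memory is
    full, further clearances are simply not stored (no forgetting).
    Returns the number of screenings used.
    """
    malfeasor = rng.choices(range(n), weights=priors, k=1)[0]
    weights = [p ** 0.5 for p in priors]      # square-root biasing
    cleared = []                              # remembered cleared individuals
    screens = 0
    while True:
        remembered = set(cleared)
        pool = [i for i in range(n) if i not in remembered]
        pick = rng.choices(pool, weights=[weights[i] for i in pool], k=1)[0]
        screens += 1
        if pick == malfeasor:
            return screens
        if len(cleared) < memory_size:        # memory not full: remember them
            cleared.append(pick)

def mean_screens(n, priors, memory_size, trials=1000, seed=1):
    rng = random.Random(seed)
    return sum(screens_until_caught(n, priors, memory_size, rng)
               for _ in range(trials)) / trials

if __name__ == "__main__":
    n = 100
    raw = [1.0 / (i + 1) for i in range(n)]   # an arbitrary skewed prior
    priors = [x / sum(raw) for x in raw]
    for N in (0, 25, 75):                     # N = 0 is the memoryless case
        print(f"memory N={N:3d}: mean screenings ≈ {mean_screens(n, priors, N):.1f}")
```

Increasing N should only lower the mean number of screenings, which is consistent with the point above: the best use of memory is just to shrink the pool by the cleared individuals and square-root-sample the remainder.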