My strategy: I build a machine learning program that takes in half the available data about Omega (in particular, how well it predicts people who are likely to attempt complex strategies) and data-mines it. If the program achieves high accuracy on the held-out test set, and shows a significant chance that Omega will predict me to one-box, then I two-box.
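A minimal toy sketch of that held-out test. Everything here is an assumption for illustration: the synthetic data, the one-feature "complex strategy" encoding, the majority-label classifier, and the 95% accuracy threshold are mine, not anything specified in the comment.

```python
import random
from collections import Counter

random.seed(0)

# Synthetic records of Omega's past predictions: (used_complex_strategy, omega_predicted_one_box).
# Assumed pattern: Omega predicts one-boxing only for people without complex strategies.
data = [(f, 1 if f == 0 else 0) for f in (random.randint(0, 1) for _ in range(1000))]

# Half the data for training, half held out as the test set.
train, test = data[:500], data[500:]

# "Train": learn the majority label for each feature value.
model = {}
for feat in (0, 1):
    labels = [y for x, y in train if x == feat]
    model[feat] = Counter(labels).most_common(1)[0][0] if labels else 0

# Evaluate on the held-out test set.
accuracy = sum(model[x] == y for x, y in test) / len(test)

my_features = 1  # I am, by construction, running a complex strategy
predicted_to_one_box = model[my_features] == 1

# Decision rule from the comment: two-box only if the model is highly
# accurate AND predicts that Omega will predict me to one-box.
decision = "two-box" if accuracy > 0.95 and predicted_to_one_box else "one-box"
print(accuracy, decision)
```

On this synthetic data the pattern is perfectly learnable, so the model is accurate, but it also predicts that a complex-strategy person like me will be predicted to two-box, so the rule falls back to one-boxing.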
…
The goal is to force Omega into predicting that I will one-box, while out-predicting Omega myself.
Dunno, you’d have to pay me a lot more than $1000 to go to all that trouble. Doesn’t seem rational to do all that work just to get an extra $1000 and a temporary feeling of superiority.
I dunno. I think I could pretty quickly make a ‘machine learning program’ that scores perfectly on a test set where every one of Omega’s 1,000,000 guesses was right: just always predict ‘correct’.