My strategy: I build a machine learning program that takes in half of the available data about Omega, specifically how well it predicts people who are likely to use complex strategies, and data-mines it. If the program achieves high accuracy predicting the held-out test set, and shows a significant chance that Omega will predict me to one-box, then I two-box.
Otherwise, I one-box.
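The decision rule above could be sketched roughly as follows. This is a minimal illustration, not a real model of Omega: the data format, the trivial majority-vote "model", and the 0.9 accuracy threshold are all made-up stand-ins.

```python
import random

def train_majority(train):
    """Trivial stand-in for 'data mining': predict the majority label
    (1 = 'Omega predicts this person will one-box') seen in training."""
    ones = sum(label for _, label in train)
    majority = 1 if ones * 2 >= len(train) else 0
    return lambda features: majority

def decide(records, my_features, train_model=train_majority, acc_threshold=0.9):
    """records: list of (features, label) pairs describing Omega's past calls.
    Split the data in half, fit a model on one half, and score it on the other."""
    random.shuffle(records)
    half = len(records) // 2
    train, test = records[:half], records[half:]
    model = train_model(train)
    # How well do we predict Omega's calls on held-out data?
    accuracy = sum(model(f) == label for f, label in test) / len(test)
    # Two-box only if we out-predict Omega AND it will call us a one-boxer.
    if accuracy >= acc_threshold and model(my_features) == 1:
        return "two-box"
    return "one-box"
```

If the model is inaccurate, or it predicts Omega will call us a two-boxer, the rule falls back to one-boxing.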
Reasoning: it should be fairly obvious from this strategy that I am likely to one-box, since predicting Omega is hard. So only if I can tell that Omega is likely to predict this, and I can predict Omega accurately, do I two-box.
The goal is to force Omega into predicting that I will one-box, while having greater predictive power than Omega.
I'm not sure this will work; I'd like to do the math at some point.
Dunno, you’d have to pay me a lot more than $1000 to go to all that trouble. Doesn’t seem rational to do all that work just to get an extra $1000 and a temporary feeling of superiority.
I dunno. I think I could pretty quickly make a 'machine learning program' that predicts a test set of 'every guess out of 1,000,000 was right.'
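Indeed, if the available data show Omega going 1,000,000 for 1,000,000, the trivial constant predictor already aces any held-out split. A throwaway sketch with made-up data:

```python
# Hypothetical track record: Omega right on every one of 1,000,000 past games.
records = [("person_%d" % i, True) for i in range(1_000_000)]

# Hold out half as a "test set", as in the strategy above.
test = records[len(records) // 2:]

# The whole "machine learning program": always predict that Omega is right.
model = lambda features: True

accuracy = sum(model(f) == label for f, label in test) / len(test)
print(accuracy)  # 1.0
```

High test accuracy here tells you nothing about out-predicting Omega; it only reflects how lopsided the record is.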