A Nonexistent Free Lunch
1. More Wrong
On an individual PredictIt market, you can sometimes find a set of “No” contracts whose combined price (one share of each) adds up to less than the guaranteed gross take.
Toy example:
Will A get elected? No = $0.30
Will B get elected? No = $0.70
Will C get elected? No = $0.90
Minimum guaranteed pre-fee winnings = $2.00
Total price of 1 share of each No contract = $1.90
Minimum guaranteed pre-fee profits = $0.10
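In code, the arbitrage check amounts to something like this sketch (assuming a linked market in which exactly one contract resolves “Yes”, so every “No” share but one pays out $1):

```python
# Toy example: in a linked market where exactly one contract resolves "Yes",
# every "No" share except one pays out $1.
no_prices = {"A": 0.30, "B": 0.70, "C": 0.90}

cost = sum(no_prices.values())            # $1.90 for one "No" share of each contract
guaranteed_payout = len(no_prices) - 1.0  # $2.00: two of the three "No" shares pay $1
guaranteed_profit = guaranteed_payout - cost

print(f"cost={cost:.2f}, payout={guaranteed_payout:.2f}, pre-fee profit={guaranteed_profit:.2f}")
# cost=1.90, payout=2.00, pre-fee profit=0.10
```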
There’s always a risk of black swans. PredictIt could get hacked. You might execute the trade improperly. Unexpected personal expenses might force you to sell your shares and exit the market prematurely.
But excluding black swans, I thought that as long as three conditions held, you could make free money on markets like these. The three conditions were:
You take PredictIt’s profit fee (10%) into account
You can find enough such “free money” opportunities that your profits compensate for PredictIt’s withdrawal fee (5% of the total withdrawal)
You take into account the opportunity cost of investing in the stock market (average of 10% per year)
In the toy example above, I calculated that you’d lose $0.10 x 10% = $0.01 to PredictIt’s profit fee if you bought one share of each “No” contract, leaving a guaranteed profit of $0.09 on a $1.90 stake. Of course, if you withdrew your $1.99 immediately, the 5% withdrawal fee would leave you with only about $1.89, slightly less than you put in.
So the withdrawal fee could turn a thin gain into a loss, and in other markets, with more contracts and tighter prices, the margins might be thinner still. The strategy would only pay off if you were able to rack up many such profitable trades before withdrawing your money from the PredictIt platform.
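Spelled out, the accounting I had in mind looked something like this (a sketch of the naive model; as the next section explains, the fee assumption is the wrong one):

```python
# The naive accounting: assume the 10% fee applies to the *net* profit
# across the whole basket (section 2 explains why this is wrong).
no_prices = [0.30, 0.70, 0.90]
stake = sum(no_prices)                   # $1.90
payout = len(no_prices) - 1.0            # $2.00 once the market resolves

profit_fee = 0.10 * (payout - stake)     # $0.01 under the net-profit assumption
balance = payout - profit_fee            # $1.99 left in the account
after_withdrawal = 0.95 * balance        # ~$1.89 if you cash out immediately

print(f"balance={balance:.2f}, after withdrawal={after_withdrawal:.2f}")
# balance=1.99, after withdrawal=1.89
```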
I built some software to grab prices off PredictIt and crunch the numbers, and lo and behold, I found an opportunity that seemed to offer a 13% return on this strategy, beating the stock market. Out of all the markets on PredictIt, only this one offered a non-negative gain, which I took as evidence that my model was accurate. I wouldn’t expect to find many such opportunities, after all.
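The core of that software amounted to something like this sketch. It assumes PredictIt’s public market-data endpoint and its JSON field names (both may have changed since I used them), and it treats every market as if exactly one contract will resolve “Yes”:

```python
# A sketch of the price-grabbing step, using PredictIt's public market-data
# endpoint. Field names are as I remember them and may have changed.
import requests

API_URL = "https://www.predictit.org/api/marketdata/all/"

def find_candidates():
    markets = requests.get(API_URL, timeout=30).json()["markets"]
    for market in markets:
        # Cheapest available "No" share in each contract; the API reports
        # a null price when there are no offers, so skip those markets.
        no_prices = [c["bestBuyNoCost"] for c in market["contracts"]]
        if len(no_prices) < 2 or any(p is None for p in no_prices):
            continue
        cost = sum(no_prices)
        payout = len(no_prices) - 1   # only valid if exactly one contract resolves "Yes"
        edge = payout - cost          # pre-fee profit per basket of "No" shares
        if edge > 0:
            yield market["name"], edge

for name, edge in find_candidates():
    print(f"{name}: pre-fee edge of ${edge:.2f}")
```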
2. Less Wrong
Fortunately, I didn’t act on it. I rewrote the price-calculating software several times, slept on it, and worked on it some more the next day.
Then it clicked.
PredictIt wasn’t going to offset the profits from the successful “No” contracts with my loss on the failed one (the contract that resolved to “Yes”).
In the toy model above, I calculated that PredictIt would take a 10% cut of the $0.10 net profit across all my “No” contracts.
In reality, PredictIt would take a 10% cut of the profit on each successful trade. In the worst-case scenario, “Will C get elected?” would be the contract that resolved to “Yes,” meaning I would earn $0.70 from the “A” contract and $0.30 from the “B” contract, for a total “profit” of $1.00.
PredictIt would take a 10% cut of that, or $0.10, rather than the $0.01 I’d originally calculated. The $2.00 from the two successful contracts, minus $0.10 in fees, would leave me with $1.90: zero net profit or loss. No matter how many times I repeated this bet, I would be left with the same amount I put in, and when I withdrew it, I would take a 5% loss.
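Redoing the toy-example arithmetic with the fee applied per winning contract (again, just a sketch) makes the difference plain:

```python
# The corrected accounting: a 10% fee on the profit of each winning contract,
# with no offset for the contract that loses. Worst case for this basket:
# the most expensive "No" (the $0.90 one) is the contract that fails.
no_prices = sorted([0.30, 0.70, 0.90])
stake = sum(no_prices)                            # $1.90
winners = no_prices[:-1]                          # the $0.30 and $0.70 "No" shares

winning_profit = sum(1.00 - p for p in winners)   # $0.70 + $0.30 = $1.00
fee = 0.10 * winning_profit                       # $0.10, not the $0.01 I expected
balance = len(winners) * 1.00 - fee               # $1.90 back from a $1.90 stake

print(f"stake={stake:.2f}, balance={balance:.2f}")
# stake=1.90, balance=1.90
```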
When I ran my new model with real numbers from PredictIt, I discovered that every single market would leave me not with zero profit, but with about a 1-2% loss, even before the 5% withdrawal fee was taken into account.
If there is a free lunch, this isn’t it. Fortunately, I only wasted a few hours and no money.
There genuinely were moments along the way when I considered plunking down several thousand dollars to test this strategy out. If I hadn’t realized the truth, would I have gotten a wake-up call when the first round of contracts settled and I came out $50 in the red rather than $50 in the black? Would I then have exited PredictIt feeling embarrassed, having lost perhaps $300 after withdrawal fees?
3. Meta Insights
What provoked me to almost make a mistake?
I started looking into it in the first place because I’d heard that PredictIt’s fee structure and other limitations created inefficiencies, and that you could sometimes find arbitrage opportunities on it. So there was an “alarm bell” for an easy reward. Maybe knowing about PredictIt and being able to program well enough to scan for these opportunities would be the advantage that let me harvest that reward.
There were two questions at hand for this strategy. One was “how, exactly, does this strategy work given PredictIt’s fee structure?”
The other was “are there actually enough markets on PredictIt to make this strategy profitable in the long run?”
The first question seemed simpler (I assumed I already understood the fee structure well enough), so I focused on the second question first. Plus, I like to code, and it’s fun to see the numbers crank out of the machine.
But those questions were lumped together into a vague sort of “is this a good idea?”-type question. It took all the work of analysis to help me distinguish them.
How did I catch my error?
Lots of “how am I screwing this up?” checks. I wrote and rewrote the software, refactoring code, improving variable names, and so on. I did calculations by hand. I wrote things out in essay format. Once I had my first, wrong, model in place, I walked through a trade by hand using it, which is what showed me how it would fail. I decided to sleep on it, and had intended to actually spend several weeks or months investigating PredictIt to try and understand how often these “opportunities” arose before pulling the trigger on the strategy.
Does this error align with other similar errors I’ve made in the past?
It most reminds me of how I went about choosing graduate schools. I sank many, many hours into creating an enormous spreadsheet with tuition and cost-of-living expenses, US News and World Report rankings, and a linear regression graph. I constantly updated and tweaked it until I felt very confident that it was self-consistent.
When I actually contacted the professors whose labs I was interested in working in, in the midst of filling out grad school applications, they told me to reconsider the type of program I was applying to (to switch from bioinformatics to bio- or mechanical engineering). So all that modeling and research lost much of its value.
The general issue here is that an intuitive notion, a vision, has to be decomposed into specific data, model, and problem. There’s something satisfying about building a model, dumping data into it, and watching it crank out a result.
The model assumes authority prematurely. It’s easy to conflate the fit between the model’s design and execution with the fit between the model and the problem. And this arises because to understand the model, you have to design it, and to design it, you have to execute it.
I’ve seen others make similar errors. I saw a talk by a scientist who’d spent 15 years inventing a “new type of computer” that produced these sorts of cloud-like rippling images. He didn’t have a model of how those images would translate into any sort of useful calculation. He asked the audience if they had any ideas. That’s… the wrong way to build a computer.
AI safety research seems like an attempt to deal with exactly this problem. What we want is a model (AI) that fits the problem at hand (making the world a better place according to a well-specified set of human values, whatever that means). Right now, we’re dumping a lot of effort into execution, without having a great sense for whether or not the AI model is going to fit the values problem.
How can you avoid similar problems?
One way to test whether your model fits the problem is to execute the model, make a prediction for the results you’ll get, and see if it works. In my case, this would have looked like building the first, wrong version of the model, calculating the money I expected to make, seeing that I made less, and then re-investigating the model. The problem is that this is costly, and sometimes you only get one shot.
Another way is to simulate both the model and the problem, which is what saved me in this case. By making up a toy example that I could compute by hand, I was able to spot my error.
A third way is to talk things through with experienced experts who have an incentive to help you succeed. For my grad school applications, that meant talking things out with scientists doing the kind of research I’m interested in. In the case of the scientist with the nonfunctional experimental computer, perhaps he could have saved 15 years of pointless effort by talking his ideas over with Steven Hsu and engaging more with the quantum computing literature.
A fourth way is to develop heuristics for promising and unpromising problems to work on. Beating the market is unpromising. Building a career in synthetic biology is promising. The issue here is that such heuristics are themselves models. How do you know that they’re a good fit to the problem at hand?
In the end, you are to some extent forced into open-ended experimentation. Hopefully, you at least learn something from the failures, and enjoy the process.
The best thing to do is make experimentation fast, cheap, and easy. Do it well in advance, and build up some cash, so that you have the slack for it. Focusing on a narrow range of problems and tools means you can afford to define each of them better, so that it’ll be easier to test each tool you’ve mastered against each new problem, and each new tool against your well-understood problems.
The most important takeaway, then, is to pick tools and problems that you’ll be happy to fail at many times. Each failure is an investment in better understanding the problem or tool. Make sure not to have so few tools/problems that you have time on your hands, or so many that you can’t master them.
You can then imagine taking an inventory of your tools and problems. Any time you feel inspired to take a whack at learning a new skill or tackling a new problem, you can ask whether it’s an optimal addition to your repertoire. And you can perhaps (I’m really not sure about this) ask if there are new tool-problem pairs you haven’t tried, just through oversight.