Human players can retry when they attempt to make an illegal move on lichess, and can click on a piece to see which moves are legal. I wonder how much ChatGPT's Elo improves if you allow it to retry moves until it makes a legal one.
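For concreteness, here's a minimal sketch of the retry loop I have in mind, using the python-chess library. `ask_model_for_move` is a hypothetical stand-in for however you'd query ChatGPT for a move; it isn't a real API.

```python
# Minimal sketch of "retry until legal", using the python-chess library.
# ask_model_for_move is a hypothetical placeholder for the call that returns
# ChatGPT's proposed move as a SAN string; it is not a real API.

import chess

def get_legal_move(board: chess.Board, ask_model_for_move, max_retries: int = 10):
    """Keep asking the model for a move until it proposes a legal one."""
    for _ in range(max_retries):
        candidate = ask_model_for_move(board.fen())
        try:
            return board.parse_san(candidate)  # raises a ValueError subclass if invalid/illegal
        except ValueError:
            continue                           # illegal or unparseable: ask again
    return None                                # give up after max_retries attempts
```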
How much danger should we accept from prediction markets?
Financial tools can be used for both good and evil. For example, a short-seller might conduct some tests and find out a company has been selling arsenic-tainted breakfast cereal. They could short the company and then write a report revealing this information. They make money, and at the same time they save consumers (and they help traders accurately price the stock).
But short-sellers can also use this financial tool for evil. They could short the cereal company and then just start bombing factories. Or, more commonly, they could release fake information on Twitter.
This would be less common if traders didn't have easy access to the necessary financial tools. That gives some credence to the people who say we should ban short-selling. But if you generally believe in market-based solutions, a ban probably seems like an ignorant, efficiency-destroying regulation (not to mention difficult or impossible to enforce). Still, just because short-selling can be modeled in a way that shows it improves welfare doesn't mean it's net positive in practice. You have to actually investigate whether short-sellers create more utils than they destroy. I would guess short-selling is so important for conveying accurate information that it lands easily in the green, but it's possible I'm mistaken.
Prediction markets also seem obviously net positive. They are an amazing tool for getting insider information to the outside and making that information easily legible. But this power means traders can also use them more effectively for evil. The short-seller doesn't know whether a fake news campaign will be washed out by other market movements, but with a prediction market a manipulator can bet directly on an event and then cause that event. Imagine you woke up tomorrow and saw your name on the front page of a prediction market: “Will [your name] die before 2024?” Because people have easy access to this new financial tool, they might kill you. (Anyone with an interest in true crime knows this problem already exists with life insurance, hence the regulations restricting whose lives you can insure.)
For this reason, centralized prediction markets are safer than completely decentralized ones. A centralized market is more easily aligned with human values, so that traders are incentivized to improve overall welfare rather than to hurt people.
So why then does Kalshi have an event contract for the number of measles cases in the United States? Kalshi offers only a carefully curated set of event contracts, so I would expect them to think carefully before offering a contract on a new event. But this contract seems to obviously incentivize traders to spread measles. From Kalshi's rules, it seems the maximum position for the measles market is $25,000 per member. Assuming members don't circumvent the maximum, this low limit mitigates the problem somewhat; it probably wouldn't be profitable to transport infected people into the United States. But as it stands, anyone infected with measles could make about $15,000 by spreading the disease to just 200 people. Presumably Kalshi considered this possibility but thought the risk was small enough that offering the contract was still worthwhile. I wonder how they reached this conclusion.
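To make the incentive concrete, here's a back-of-the-envelope sketch. Everything in it is an assumption for illustration: a single binary contract that pays $1 per YES share, made-up prices, and the $25,000 limit read as a cap on dollars spent. This is not Kalshi's actual contract structure.

```python
# Back-of-the-envelope sketch of the manipulation incentive. All numbers and the
# contract structure are assumptions for illustration: one binary contract that
# pays $1 per YES share, made-up YES prices, and the $25,000 position limit
# interpreted as a cap on dollars spent. Not Kalshi's actual market design.

def manipulation_profit(yes_price: float, position_limit: float = 25_000.0) -> float:
    """Profit if you buy YES up to the limit and then make the event happen."""
    shares = position_limit / yes_price  # shares you can afford at the limit
    payout = shares * 1.00               # each share pays $1 if YES resolves
    return payout - position_limit       # net of what you paid

for price in (0.40, 0.625, 0.80):
    print(f"YES at ${price:.3f}: profit from forcing the outcome ≈ ${manipulation_profit(price):,.0f}")
```

Under these made-up numbers, a YES price around $0.62 happens to yield roughly the $15,000 figure above; a cheaper contract makes the incentive even larger.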
If the problem exists on Kalshi, I imagine it’s worse on other, less restrictive markets (I’ve focused on Kalshi because I’m not very familiar with other prediction markets). Eventually there probably will be a case of someone manipulating an outcome in an extremely harmful way. This might hurt the future of prediction markets, and potentially cost lives. I feel prediction markets should try very hard to avoid this, especially as they are still relatively novel and need to avoid being squashed by regulators.
On the other hand, if decentralized prediction markets are inevitable, perhaps we need to quickly learn to live with the consequences. At least there will be lots of other benefits alongside the harm.
Common misconception about income tax discouraging work
I once heard a political commentator say something like this: “we tax cigarettes to reduce smoking, and then we tax success and we’re surprised this reduces success.”
This at least roughly corresponds to a misconception I learned about from the economist Gregory Clark. He said that if you ask economics undergraduates (a group with an unusual affinity for economics!) what effect a higher tax rate will have on hours worked, they almost always make the same mistake: they reflexively assume the tax hike leads people to work fewer hours.
They presumably make the too-hasty assumption that less income per hour automatically means less incentive to work. But the actual effect depends on the individual's preferences, specifically how they value income (or goods) versus leisure. They might increase their effort to offset the loss of income, or they may find that the new, lower wage isn't enough to justify working a single hour.
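As a toy illustration (my own, not Clark's; the wage, tax rates, and both preference specifications are made-up assumptions), here are two stylized workers for whom the same tax hike pushes hours worked in opposite directions:

```python
# Toy sketch (illustrative only) showing that a tax hike can push hours worked
# in either direction depending on preferences. The wage, tax rates, and both
# preference specifications below are made-up numbers.

def hours_target_income(tax_rate: float, wage: float = 30.0, target: float = 240.0) -> float:
    """Worker who works however long it takes to hit a fixed after-tax income
    target: a higher tax means MORE hours (income effect dominates)."""
    return target / ((1 - tax_rate) * wage)

def hours_quasilinear(tax_rate: float, wage: float = 30.0, disutility: float = 3.0) -> float:
    """Worker with quasi-linear utility u = c - (disutility/2) * h**2, where
    c = (1 - tax_rate) * wage * h. The optimum is h = net wage / disutility,
    so a higher tax means FEWER hours (pure substitution effect)."""
    return (1 - tax_rate) * wage / disutility

for t in (0.20, 0.40):
    print(f"tax {t:.0%}: target-income worker {hours_target_income(t):.1f}h/day, "
          f"quasi-linear worker {hours_quasilinear(t):.1f}h/day")
```

Raising the tax from 20% to 40% moves the target-income worker from 10 to about 13.3 hours, and the quasi-linear worker from 8 down to 6 hours.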
I find it interesting that Clark reported this as an extremely common mistake, even though economics students would be well versed in the model used to correct the error. Perhaps their intuitions take over when they don’t have time to draw budget constraints and indifference curves, or perhaps they have a belief about most people’s preferences, and that belief supports their answer.