Quant, systems thinker, anarchist.
I write at https://entropicthoughts.com
My inbox is lw[at]xkqr.org
I just wouldn’t use the word “Kelly”, I’d talk about “maximizing expected log money”.
Ah, sure. Dear child has many names. Another common name for it is “the E log X strategy”, but that tends not to be as recognisable to people.
you say “this is how to mathematically determine if you should buy insurance”.
Ah, I see your point. That is true. I’d argue this isolated E log X approach is still better than vibes, but I’ll think about ways to rephrase to not make such a strong claim.
what do you mean when you say this is what Kelly instructs?
Kelly allocations only require taking actions that maximise the expectation of the joint distribution of log-wealth. It doesn’t matter how many bets are used to construct that joint distribution, nor when during the period they were entered.
If you don’t know at the start of the period which bets you will enter during the period, you have to make a forecast, as with anything unknown about the future. But this is not a problem within the Kelly optimisation, which assumes the joint distribution of outcomes already exists.
This is also how correlated risk is worked into a Kelly-based decision.
Simultaneous (correlated or independent) bets are only a problem in so far as we fail to construct a joint distribution of outcomes for those simultaneous bets. Which, yeah, sure, dimensionality makes itself known, but there’s no fundamental problem there that isn’t solved the same way as in the unidimensional case.
Edit: In more layman-friendly terms, Kelly requires that, for each potential combination of simultaneous bets you are going to enter during the period, you estimate the probability distribution of wealth outcomes after the period has passed (and this probability distribution should account for any correlations). Given that, Kelly tells you to choose the set of bets (and sizes in each) that maximises the expected log of wealth outcomes.
Kelly is a function of actions and their associated probability distributions of outcomes. The actions can be complex compound actions such as entering simultaneous bets—Kelly does not care, as long as it gets its outcome probability distribution for each action.
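To make that concrete, here is a minimal sketch – not any particular tool, and every number in it is invented – of two simultaneous, positively correlated, even-money bets. The correlation lives entirely in the joint scenario probabilities, and Kelly never sees “two bets”: it sees one compound action per pair of fractions, with one outcome distribution each.

```python
import itertools
import math

# Invented example: two simultaneous even-money bets that tend to win
# together. Each scenario is (probability, multiplier on stake A,
# multiplier on stake B); the correlation is in the probabilities.
scenarios = [
    (0.40, 2.0, 2.0),  # both bets pay off
    (0.10, 2.0, 0.0),  # only A pays off
    (0.10, 0.0, 2.0),  # only B pays off
    (0.40, 0.0, 0.0),  # both lose
]

def expected_log_wealth(frac_a, frac_b):
    """E[log wealth] after staking the given fractions on A and B."""
    return sum(
        p * math.log(1 - frac_a - frac_b + frac_a * ma + frac_b * mb)
        for p, ma, mb in scenarios
    )

# Crude grid search over allocations; a real tool would optimise properly.
grid = [i / 100 for i in range(50)]
best = max(
    ((a, b) for a, b in itertools.product(grid, grid) if a + b < 1),
    key=lambda ab: expected_log_wealth(*ab),
)
print(best, expected_log_wealth(*best))
```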
I’m confused by the calculator.
The probability should be given as 0.03 – that might reduce your confusion!
Kelly is derived under a framework that assumes bets are offered one at a time.
If I understand your point correctly, I disagree. Kelly instructs us to choose the course of action that maximises log-wealth in period t+1 assuming a particular joint distribution of outcomes. This course of action can by all means be a complicated portfolio of simultaneous bets.
Of course, the insurance calculator does not offer you the interface to enter a periodful of simultaneous bets! That takes a dedicated tool. The calculator can only tell you the ROI of insurance; it does not compare this ROI to alternative, more complex portfolios, which may well outperform the insurance alone.
If you get caught in a flood your whole neighborhood probably does too
This is where reinsurance and other non-traditional instruments of risk trading enter the picture. Your insurance company can offer flood insurance because they insure their portfolio with reinsurers, or hedge with catastrophe bonds, etc.
The net effect of the current practices of the industry is that fire insurance becomes slightly more expensive to pay for flood insurance.
I have a hobby horse that I think people misunderstand the justifications for Kelly, and my sense is that you do too
I don’t think I disagree strongly with much of what you say in that article, although I admit I haven’t read it that thoroughly. It seems like you’re making three points:
1. Kelly is not dependent on log utility – we agree.
2. Simultaneous, independent bets lower the risk, and applying the Kelly criterion properly to that situation results in greater allocations than the common, naive application – we agree.
3. If one donates one’s winnings, then one’s bets no longer compound and the expected profit is a better guide than expected log wealth – we agree.
In A World of Chance, Brenner, Brenner, and Brown look at this same question from a historic perspective, and (IIRC) conclude that gambling is about as damaging as alcohol, both for individuals and society. In other words, it should be legal (it gives the majority a relatively safe good time) but somewhat controlled (some cannot handle it and then it is very bad).
Do these more recent numbers corroborate that comparison to alcohol?
Oh, these are good objections. Thanks!
I’m inclined to 180 on the original statements there and instead argue that predictive modelling works because, as Pearl says, “no correlation without causation”. Then an important step when basing decisions on predictive modelling is verifying that the intervention has not cut off the causal path we depended on for decision-making.
Do you think that would be closer to the truth?
The Demon King donned a mortal guise, bought shares in “The Demon King will attack the Frozen Fortress”, and then attacked the Frozen Fortress.
I’m curious: didn’t the market work exactly as intended here? I mean, it helped them anticipate the Demon King’s next moves – it’s not the market’s fault that they couldn’t convert foresight into operational superiority.
The King effectively sold good information on his battle plans; he voluntarily leaked military secrets for pay. The Citadel does not have to employ a spy network, because the King spies for them. This should be kind of a good deal, right?
However I also do frequently spend more time on close decisions. I think this can be good praxis. It is wasteful in the moment, but going into detail on close decisions is a great way to learn how to make better decisions. So in any decision where it would be great to improve your algorithm, if it is very close, you might want to overthink things for that reason.
In my experience, the more effective way to learn from close decisions is to just pick one alternative and then study the outcome and overthink the choice, rather than deliberate harder before choosing. This is related to what Cedric Chin describes in Action Produces Information: by going faster through close decisions, we both have more information about the consequences revealed to us, and we can run more experiments in parallel.
That said, I am very hardcore about coinflipping even not-so-close decisions, and made a tool for it.
Thanks for taking the time to dive into this. I’ve spent the past few evenings iterating on a forecasting bot while doing embarrassingly little research myself[1], and it seems like I have stumbled into the same approach as Five Thirty Nine, and my bot has the exact same sort of problems. I’ll write more later about why I think some of those problems are not as big as they may seem.
But your article also gave me some ideas that might lead to improvements. Thanks!
[1]: In this case, I prioritise the two weeks in the lab over the hour in the library. I’m doing it not to make a good forecasting bot but to learn the APIs involved.
That is, confounding could go both ways here; the effect could be greater than it appears, rather than less.
Absolutely, but if we assume the null hypothesis until proven otherwise, we will prefer to think of confounding as creating an effect that is not there, rather than subduing an even stronger effect.
I’ll reanalyse that way and post results, if I remember.
Yes, please do! I suspect (60 % confident maybe?) the effect will still be at least a standard error, but it would be nice to know.
I made a script run in the background on my PC, something like…
Ah, bummer! I also have this problem solved for computer time, and I was hoping you had done something for smartphone carriage.
(Note, by the way, that a uniformly random delay is not as surprising as an exponentially distributed delay. Probably does not matter for your use case, and you might already know all of that...)
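For what it’s worth, here is the kind of thing I mean, as a sketch rather than anyone’s actual script. I’m assuming a Linux desktop where notify-send exists, and the mean delay is a made-up figure.

```python
import random
import subprocess
import time

MEAN_DELAY_S = 3600  # invented: on average one prompt per hour

while True:
    # Exponential delays are memoryless: however long you have already
    # waited, the expected wait to the next prompt stays the same, so
    # you can never feel it coming the way you can with a uniform delay.
    time.sleep(random.expovariate(1 / MEAN_DELAY_S))
    subprocess.run(["notify-send", "Survey time: how is your mood right now?"])
```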
Many of the existing answers seem to confuse model and reality.
In terms of practical prediction of reality, it would always be a mistake to emit a 0 or 1, because there’s always that one-in-a-billion chance that our information is wrong – however vivid it seems at the time. Even if you have secretly looked at the hidden coin and seen clearly that it landed on heads, 99.999 % is a more accurate forecast than 100 %. It could have landed on aardvarks and masqueraded as heads – however unlikely, that is a possibility. Or you confabulated the memory of seeing the coin from a different coin you saw a week ago – also not so likely, but it happens. Or you mistook tails for heads – presumably happens every now and then.
When it comes to models, though, probabilities of 0 and 1 show up all the time. Getting a 7 when tossing a d6 with the standard dice model simply does not happen, by construction. Adding two and three and getting five under regular field arithmetic happens every time. We can argue whether the language of probability is really the right tool for those types of questions, but taking a non-normative stance, it is reasonable for someone to ask those questions phrased in terms of probabilities, and then the answers would be 0 % and 100 % respectively.
These probabilities also show up in limits and arguments of general tendency. When a coin is tossed repeatedly, the probability of getting only tails forever is 0 %, as long as you keep tossing whenever you get tails. In a random walk, the probability of eventually crossing the origin is 100 %. When throwing a d6 for long enough, the mean value will end up within the range 3–4 with probability 100 %.
These latter two paragraphs describe things that apply only to our models, not to reality, but they can serve as a useful mental shortcut as long as one is careful about applying them blindly.
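To spell the coin case out as a limit (the d6 case is the strong law of large numbers):

$$P(\text{only tails in the first } n \text{ tosses}) = \left(\tfrac{1}{2}\right)^n \xrightarrow{\,n \to \infty\,} 0,$$

so the probability of never seeing heads is exactly 0 %, even though every finite all-tails prefix has positive probability.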
This analysis suffers from a fairly clear confounder: since you are conditioning on which days you actually listened to music, there might be a common antecedent that both improves your mood and causes you to listen to music. As a silly example, maybe you love shopping for jeans, and clothing stores tend to play music, so your mood will, on average, be better on the days you hear music for this reason alone.
An intention-to-treat approach where you make the random booleans the explanatory variable would be better, as in less biased and less subject to confounding. It would also give you less statistical power, but such is the cost of avoiding false conclusions. You may need to run the experiment for longer to counterbalance.
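As a sketch of what that would look like – the file name and column names here are invented, and I’m assuming one row per surprise query:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout: one row per surprise query, with the randomly
# assigned boolean (was music prescribed today?) and the mood response.
df = pd.read_csv("mood_log.csv")  # columns: assigned_music, mood

# Intention-to-treat: regress mood on the random assignment itself, not
# on whether music was actually heard. The assignment is random, so it
# cannot share a common cause with mood.
model = smf.ols("mood ~ assigned_music", data=df).fit()
print(model.summary())
```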
It appears that listening to music, in the short-term: [...] makes earworms play in my mind for slightly less of the time
Whenever I suffer from an earworm, my solution has for a long time been to just play and listen to that song once, sometimes twice. For some reason, this satisfies my brain and it drops it. Still counter-intuitive, but you might want to try it.
On a completely separate note:
Both response variables were queried by surprise, 0 to 23 times per day (median 6), constrained by convenience.
How was this accomplished, technically? I’ve long wanted to do similar things but never bothered to look up a good way of doing it.
If Q, then anything follows. (By the Principle of Explosion, a false statement implies anything.) For example, Q implies that I will win $1 billion.
I’m not sure even this is the case.
Maybe there’s a more sophisticated version of this argument, but at this level, we only know the implication Q => $1B is true, not that $1B is true. If Q is false, the implication being true says nothing about $1B.
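Spelled out as a truth table, with R standing for the $1B claim, the implication only rules out the single row where Q is true and R is false:

$$\begin{array}{cc|c} Q & R & Q \Rightarrow R \\ \hline T & T & T \\ T & F & F \\ F & T & T \\ F & F & T \end{array}$$

When Q is false we are in one of the bottom two rows, and R can go either way.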
But more generally, I agree there’s no meaningful difference. I’m in the de Finetti school of probability in that I think it only and always expresses our personal lack of knowledge of facts.
Thanks everyone. I had a great time!
The AI forecaster is able to consistently outperform the crowd forecast on a sufficiently large number of randomly selected questions on a high-quality forecasting platform
Seeing how the crowd forecast routinely performs at a superhuman level itself, isn’t it an unfairly high bar to clear? Not invalidating the rest of your arguments – the methodological problems you point out are really bad – but before asking the question about superhuman performance it makes a lot of sense to fully agree on what superhuman performance really is.
(I also note that a high-quality forecasting platform suffers from self-selection by unusually enthusiastic forecasters, raising the bar further. However, I don’t believe this to be an actual problem, because if someone claims “performance on par with humans” I would expect that to mean “enthusiastic humans”.)
Even so, at some level of wealth you’ll leave more behind by saving up the premium and having your children inherit the compound interest instead. That point is found through the Kelly criterion.
(The Kelly criterion does indeed correspond to a concave utility function, but the insurance company is so wealthy that individual life insurance payouts sit on a nearly linear stretch of its utility curve, whereas for most individuals they do not.)
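For illustration, here is the bare decision rule with invented numbers – not the calculator itself, just the E log X comparison it is built around:

```python
import math

# All numbers invented: current wealth, annual premium, probability of
# the insured event, and the size of the loss.
wealth = 1_000_000
premium = 1_200
p_loss = 0.03
loss = 250_000

# Buying insurance: pay the premium for sure, be made whole on a loss.
insured = math.log(wealth - premium)

# Self-insuring: eat the loss with probability p_loss.
uninsured = p_loss * math.log(wealth - loss) + (1 - p_loss) * math.log(wealth)

print("buy insurance" if insured > uninsured else "self-insure")
```

Crank wealth up enough relative to the loss and the inequality flips; that flip is the point at which saving the premium leaves more behind.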