A hypothetical question for investors
Let’s suppose you start with $1000 to invest, and the only thing you can invest it in is stock ABC. You are only permitted to occupy two states:
* All assets in cash
* All assets in stock ABC
You incur a $2 transaction fee every time you buy or sell.
Kind of annoying limitations to operate under. But you have a powerful advantage as well. You have a perfect crystal ball that each day gives you the [probability density function](http://en.wikipedia.org/wiki/Probability_density_function) of ABC’s closing price for the following day (but no further ahead in time).
What would be an optimal decision rule for when to buy and sell?
This problem is underspecified unless you tell us something about what the days will be like. Suppose the crystal ball tells you exactly what the day’s closing price will be (i.e., a probability density function concentrated on a single value).
In world A, you have $1k, and on 99% of days the stock will go up by $3; on 1% of days it will go down by $3. You should always stay in stock, even though on a down day you lose more than the transaction fee: selling costs $2 now, and since the next day will almost certainly be an up day (which today’s crystal ball can’t yet tell you), you’d pay a second $2 fee to buy back in, spending $4 in fees to dodge a $3 loss.
In world B, you have $1k, and on 99% of days the stock will go down by $3; on 1% of days it will go up by $3. You should always stay in cash, even though on an up day you pass up more than the transaction fee, for the mirror-image reason: the round trip would cost $4 in fees to capture a $3 gain.
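To spell out the fee arithmetic (a minimal sketch in Python; the $3 moves and the $2 fee come straight from the setup above):

```python
FEE = 2  # dollars per buy or sell

# World A: today is the rare $3 down day, but tomorrow is almost surely up.
stay_in_stock = -3               # eat today's $3 loss and stay positioned
dodge_the_dip = -FEE - FEE       # sell today, buy back tomorrow: $4 in fees
print(stay_in_stock, dodge_the_dip)  # -3 beats -4, so stay in stock

# World B is the mirror image: a $3 gain is not worth a $4 round trip.
```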
The problem is underspecified in a more fundamental way: It does not tell you what to optimise!
One needs to specify both the quantity being optimized (e.g., expected value) and the time horizon (e.g., after 1000 days).
To forestall what I expect to be a common line of advice, I’d like to point out that maximizing the expected return for the next day can lead to very poor results in the long (or even medium) term. Let me illustrate.
Your PDF for the stock price is the same every day and is pretty simple: it’s a fair coin toss. You have a 50% probability of earning a 101% return and a 50% probability of losing all your money (a −100% return). The expected return is positive (on $1000, it’s 0.5 × $1010 + 0.5 × (−$1000) = $5, less transaction fees), so maximizing next-day expected return tells you to invest in this stock every day.
The chance of being left penniless after merely 10 days is 1 − (1/2)^10, or about 99.9%.
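A minimal simulation (payoff numbers taken from the comment above) makes the gap between the mean outcome and the typical outcome vivid:

```python
import random

def simulate(days=10, trials=100_000):
    """Go all-in every day on a 50/50 shot at a 101% gain or total loss."""
    ruined = 0
    total = 0.0
    for _ in range(trials):
        wealth = 1000.0
        for _ in range(days):
            wealth -= 2  # buy fee (the sell fee is omitted for brevity)
            wealth *= 2.01 if random.random() < 0.5 else 0.0
            if wealth <= 0:
                ruined += 1
                break
        total += max(wealth, 0.0)
    print(f"ruin rate: {ruined / trials:.1%}, mean wealth: ${total / trials:,.0f}")

simulate()  # ruin rate ~99.9%, yet the mean stays around $1000
```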
P.S. A somewhat related issue is the so-called St. Petersburg paradox, which is about three centuries old :-)
Sure, but I think if you look into the crystal ball and see that XYZ has a 50% chance of UTTER RUIN and a 50% chance of business as usual you turn around and ask Omega to let you go back to the real world now.
That’s a 50% chance of going to zero and a 50% chance of doubling, not business as usual. I don’t see what’s so unusual about it. For example, if you buy an option (a financial instrument, a call or a put) and it expires out of the money, its value goes to zero. That happens all the time and no one calls it UTTER RUIN. Of course you may not want to invest your entire worth into one...
Or if you want a stock example, imagine a small biotech company with a single drug going through FDA trials. If the drug fails, the company is basically worthless, if it passes, the company is rich. That’s a double-or-nothing scenario and again, not particularly uncommon in real life.
By your hypothesis the stock has a 50% chance of utter ruin and a 50% chance of doubling every day. That is very unusual.
Well, the OP’s hypothetical is very unusual by the same criteria :-P
To be fair though, that’s only because of the strict limitations of the OP’s thought experiment. If you could bet, say, 1% of your cash on a stock with your specifications, then, assuming the transaction fees aren’t a problem, you should do so every day.
Well, of course the given setting limits the solutions. For example, if you can invest only a part of your wealth, the Kelly Rule comes into play.
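For reference, if partial investment is allowed, the Kelly fraction for the double-or-nothing bet discussed elsewhere in this thread is a one-liner (a sketch; the 51% win probability is just illustrative):

```python
def kelly_fraction(p_win: float, b: float = 1.0) -> float:
    """Kelly criterion for a bet paying b:1 on a win: f* = p - (1 - p) / b."""
    return p_win - (1.0 - p_win) / b

# Double-or-nothing (even payoff, b = 1) at a 51% win probability:
print(kelly_fraction(0.51))  # 0.02, i.e. bet 2% of the bankroll
```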
I agree: given the daily stock fluctuations, the only way to make money would be to buy and hold over a long period of time and hope the 52-day moving average is in your favor. I would keep it all in cash.
Why is this an interesting question? Is it a possibly cool puzzle? A metaphor for something else?
It’s kinda tempting to write here: “It is a metaphor for ” and just leave it like that. I wonder how many people would find an explanation for what that meant.
(Actually, it’s a metaphor for dating. Just joking. It’s obviously a metaphor for an Unfriendly Oracle AI playing Prisoner’s Dilemma against a Causal Decision Theory agent. Unless that also is somehow a metaphor for dating.)
For an interesting related problem, look up Kelly betting.
Also, more generally, the Wikipedia article on Intertemporal portfolio choice.
Who picks the day-to-day PDF: an adversary that knows your source code in advance, or an indifferent probability distribution (that you may or may not know)?
You need to define “optimal” in the “optimal decision rule”.
Generally speaking, it depends on your utility function (and/or your risk tolerance) and on your time horizon. The shape of the PDF is also pretty important.
Even after determining your wealth, your utility function has to take whether-you-are-currently-holding-stocks as an input, because it affects the probability that you incur a transaction cost in future time steps. I think this piece cannot be evaluated without supposing some pdf of future pdfs. I think this is why people are saying the problem is “underspecified”.
Even then, I think we can make some assumptions. If you have no prior about the pdf of pdfs, a reasonable best guess is the pdfs you have seen already.
I didn’t think utility mattered at the start, but the numbers get so large so fast that it probably should. So the first issue is identifying whether you are theoretically optimizing (max value in min time) or satisficing. Optimizing is simple: always buy when the expected value is positive and sell when it is negative. Even if you’re ruined 99% of the time, you’ll average the highest returns (see note (1) below on actual optimization).
The only reason this is complicated is the second issue: the risk of ruin. This issue is actually minor in a normal random-walk situation.
I’ve simulated a few rules and assumptions to validate the argument. Since this strategy leverages timing, even a negative average rate of return in the underlying yields a very high rate of return for the strategy. See http://imgur.com/a/ZrH6g for a quick snapshot. There are a few issues I didn’t bother to fix (namely with compound rules for holding onto stock), as the effect differs only in a minor way. For reference, the actual mean should be about 1.0002, but I’ve only shown 0.9, 1, and 1.1.
The issue becomes more complicated if you stop assuming a normal model. The risk of ruin increases dramatically for any probability of major loss. To mitigate this risk, simply adopt a trading rule that prevents the risk from materializing, i.e., don’t buy if the risk exists, and sell if it appears. There are lots of ways, both technical and not, to implement this rule. I ran it as a cap on the total negative impact (probability × loss) using two distinct normal distributions (emulating a binary market choice, each with its own distribution). It works, which means a proper technical approach would do even better.
In general, the rule is simple: buy when the expected return is positive, hold until the expected return turns negative, then sell. To protect against the risk of ruin, reject any substantial possibility of loss, e.g., do not trade when the PDF has a high standard deviation (or a lumpy negative tail). A sketch of this rule follows.
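Here is a minimal sketch of that rule (the discretized-PDF representation, the threshold names, and the 5% downside cap are my own assumptions, not the commenter’s):

```python
import numpy as np

def decide(prices: np.ndarray, probs: np.ndarray, last_close: float,
           holding: bool, wealth: float, fee: float = 2.0,
           max_downside: float = 0.05) -> str:
    """Buy/hold/sell on expected return, with a veto on ruin risk.

    prices, probs: a discretized PDF of tomorrow's closing price.
    max_downside: cap on expected loss mass (probability x loss fraction).
    """
    returns = prices / last_close - 1.0
    expected_return = float(np.dot(probs, returns))
    # Expected loss mass: probability x loss, summed over the negative tail.
    losses = returns < 0
    downside = float(np.dot(probs[losses], -returns[losses]))

    if not holding:
        # Buy only if the expected gain clears the fee and ruin risk is small.
        if expected_return * (wealth - fee) > fee and downside < max_downside:
            return "buy"
        return "stay in cash"
    if expected_return < 0 or downside >= max_downside:
        return "sell"
    return "hold"
```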
(1) Actual optimization is pretty complicated. All of the above assumes that you get the information and make the transaction immediately. However, true optimization would involve compensating for random walks away from the information you know to be true, on the assumption that the price will return to the mean. For instance, with knowledge of the actual range you could break the “buy and hold” strategy at certain thresholds, such as when the stock has random-walked much higher or lower than expected (e.g., you bought expecting an average of 1.05 and the price random-walks to 1.10; sell at 1.10, as it will probably return to 1.05 by the end of the day).
As others have already said, this is way underspecified. But I think the following is at least a fairly decent answer for most plausible ways of filling in the details:
Let’s suppose you are aiming for wealth in the longish term (clearly you aren’t much interested in the short term or else you would be spending some of this money) and let’s suppose your utility is proportional to log(wealth), which is (1) empirically at least semi-plausible, (2) quite nice mathematically, and (3) traditional (it goes all the way back to Bernoulli’s treatment of the St Petersburg paradox).
Then every day you get to choose to multiply your wealth by either about 1 (if you put your assets in cash) or some random factor with known pdf (if you put your assets in the stock). In other words, you get to add to log(wealth) either about 0 or some random addend with known pdf.
If those random things are reasonably well behaved, then with very high probability after a while your log(wealth) is approximately log(original wealth) + the sum of E(delta log wealth). Which suggests that, ignoring horizon effects when you know the game will be ending soon, you always want to choose the outcome that maximizes the expectation of delta log wealth.
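A quick simulation illustrates that concentration (the coin-flip multipliers are hypothetical, and fees are ignored):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily multipliers: 1.2 or 0.9 with equal probability.
days, trials = 1000, 10_000
mults = rng.choice([1.2, 0.9], size=(trials, days))
log_wealth = np.log(1000.0) + np.log(mults).sum(axis=1)

predicted = np.log(1000.0) + days * 0.5 * (np.log(1.2) + np.log(0.9))
print(np.median(log_wealth), predicted)  # the two agree closely
```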
(Of course you should include the effects of the transaction fee in this. Since we have neglected the impact of future transaction fees, it might be a good idea to compensate by adding a little extra “friction” and, say, pretending that the transaction fee is $3 instead of $2 when doing the calculation.)
Worked example #1: Consider Lumifer’s example where every day the stock either goes up to 2.01x its previous value or goes all the way down to zero. The expectation of delta log wealth, ignoring transaction costs, is 1/2(log 2.01 + log 0) = -infinity, so unless your current wealth is barely more than the transaction cost and you’re already invested in the stock, you want to be in cash. (So you will never invest in the stock, so you will never get into the crazy situations where the transaction cost might change your decision.)
Worked example #2: suppose on the first day, when you have $1000, you know that the stock will either go to 0.9 or 1.2 of its previous value, each with probability 1/2. And suppose what you currently have is cash. Then your options are to stay in cash, with E(delta log wealth) = 0 because this is a no-op, or to buy shares, with E(delta log wealth) = 1/2[log(1198/1000) + log(898/1000)] ~= 1/2(0.181 - 0.108), which is positive. So in this case you should get into the market.
Worked example #3: same as #2 but now you only have $32. So now if you buy you have $30 in stock and it will move to $27 or $36 with equal probability. So the expectation is 1/2[log(27/32)+log(36/32)] which you can easily check is negative; so in this case you sit on the cash and hope for a better PDF next time.
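A quick numerical check of examples #2 and #3, as a sketch of the E(delta log wealth) rule above:

```python
import math

def expected_delta_log_wealth(wealth, outcomes, fee=2.0):
    """E(delta log wealth) of buying in: pay the fee, then the stock moves.

    outcomes: (multiplier, probability) pairs for tomorrow's closing price.
    """
    invested = wealth - fee
    return sum(p * math.log(invested * m / wealth) for m, p in outcomes)

coin = [(1.2, 0.5), (0.9, 0.5)]
print(expected_delta_log_wealth(1000, coin))  # ~ +0.036 > 0: buy (example #2)
print(expected_delta_log_wealth(32, coin))    # ~ -0.026 < 0: stay in cash (example #3)
```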
Would this system ever invest in stock when the probability of losing all the money is non-zero?
Nope. And if what you’re after is the best long-run result and your utility is anything like logarithmic in wealth, this is exactly what you want.
(Although if Pr(lose everything) is small enough then the observation that you almost always get approximately the expectation in the long run is irrelevant unless the run is infeasibly long. So you might want to truncate your return distributions somehow, if you’re prepared to accept a tiny probability of ruin for doing better almost all the time.)
[EDITED to add a missing right-parenthesis.]
Notice that if you have a fixed time horizon the situation changes and you can optimize for how large a probability of ruin should you be prepared to ignore.
That’s why I said, in my original comment, “ignoring horizon effects when you know the game will be ending soon” :-).
This doesn’t seem right. Let’s assume that the stock gives double or nothing, with a 51% probability of doubling. The Kelly criterion suggests putting 2% of the total bankroll in stock. Yes, this neglects the rebalancing fee. Your argument seems to suggest that we should be all in cash. But the Kelly bet outperforms this.
I don’t understand: the situation here is one where your only option is to be all in cash or all in the stock. The Kelly criterion only makes sense when you can choose an arbitrary fraction to be in each.
(And the Kelly criterion amounts to maximizing E(delta log wealth), which is exactly what I’m proposing. If you have to wager your entire bankroll, any gamble with a nonzero chance of bankrupting you has E(delta log wealth) = -infinity and just sitting on your cash is better.)
Ah, I missed that part of the OP. So then I think your argument is correct.
Let’s say you want to maximise your expected utility. You know the probability density function of the closing price, p(x). Let’s also suppose you know your own utility function, U(y). Let’s say last night’s closing price was x’, and you currently hold Z in assets.
Then if yesterday you were all in cash, your expected utility is U(Z) if you stay in cash, and the integral from x = -infinity to x = infinity of U((Z − 2)(x/x’)) p(x) dx if you switch to stocks. If yesterday you were all in stocks, your expected utility is U(Z − 2) if you switch to cash, and the integral from x = -infinity to x = infinity of U(Z(x/x’)) p(x) dx if you stay in stocks.
So choose the larger utility.
Or, to make it much simpler, let’s say you’re trying to maximise your expected return the next day. If you’re in cash, and the integral of (Z − 2)(x/x’) p(x) dx > Z, switch to shares; otherwise stay in cash. If you’re in shares, and the integral of Z(x/x’) p(x) dx < Z − 2, switch to cash; otherwise stay in stocks.
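A numerical sketch of this one-day expected-return rule, with a discretized PDF (the lognormal density and every parameter value here are illustrative assumptions):

```python
import numpy as np

def next_day_decision(x_grid, pdf, last_close, wealth, holding, fee=2.0):
    """One-day expected-return rule from the comment above.

    x_grid, pdf: a grid of possible closing prices and the crystal ball's
    density evaluated on that grid.
    """
    dx = x_grid[1] - x_grid[0]
    # E[price multiplier] = integral of (x / x') p(x) dx, as a Riemann sum.
    e_mult = float(np.sum((x_grid / last_close) * pdf) * dx)
    if not holding:
        return "buy" if (wealth - fee) * e_mult > wealth else "stay in cash"
    return "sell" if wealth * e_mult < wealth - fee else "hold"

# Hypothetical example: last close 100; tomorrow roughly lognormal around 101.
x = np.linspace(50.0, 200.0, 2001)
sigma = 0.02
p = np.exp(-0.5 * (np.log(x / 101.0) / sigma) ** 2) / (x * sigma * np.sqrt(2 * np.pi))
print(next_day_decision(x, p, last_close=100.0, wealth=1000.0, holding=False))  # buy
```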
Reasonable, but still missing a piece. As noted above, even after determining your wealth, your utility function has to take whether-you-are-currently-holding-stocks as an input, because it affects the probability that you incur a transaction cost in future time steps, and that cannot be evaluated without supposing some pdf of future pdfs. I think this is why people are saying the problem is “underspecified”.
Undetermined unless you specify my utility function. Would I, for example, prefer having $1000 with certainty, or a 50% chance of $900 and a 50% chance of $1200? Also, how much do I care about the future? And even though my crystal ball works only for the following day, I must still have some beliefs about what will happen after that.
Mockingjay up in here: with this crystal ball, the odds will be ever in our favor.
Buy on any day when you’ve got good odds (more likely to make money than to lose it) of a gain tomorrow. Sell on any day you start out owning stock.
Every 2 days you lose $4 in transaction fees. You also gain or lose money equal to the difference between your initial purchase price and your sale price. Since you know the PDF for the outcome, and only play if the odds are in your favor, you’ll probably make a killing.
1) You neglect to account for the transaction fees—following your model would tell you to spend $4 on fees to capture a $1 gain.
2) Why on earth would you auto-sell the stock without even checking the crystal ball?
Yeah, you are right. I commented off the cuff.