Self review: I really like this post. Combined with the previous one (from 2022), it feels to me like “lots of people are confused about Kelly betting and linear/log utility of money, and this deconfuses the issue using arguments I hadn’t seen before (and still haven’t seen elsewhere)”. It feels like small-but-real intellectual progress. It still feels right to me, and I still point people at this when I want to explain how I think about Kelly.
That’s my inside view. I don’t know how to square that with the relative lack of attention the post got, and it feels weird to be writing it given that fact, but oh well. There are various stories I could tell: maybe people were less confused than I thought; maybe my explanation is unclear; maybe I’m still wrong on the object level; maybe people just don’t care very much; maybe it just happened not to get seen.
If I were writing this today, my guess is:
It’s worth combining the two posts into one.
The rank optimization stuff is fine to cut, given that I tentatively propose it in one post and then in the next say “probably not very useful”. Maybe have a separate post for exploring it. No need to go into depth on “extending Kelly outside its original domain”.
The charity stuff might also be fine to cut. At any rate it’s not a focus.
Someone sent me an example function satisfying the “I’m pretty sure yes” criteria, so that can be included.
Not sure if this belongs in the same place, but I’d still like to explore further the question “what if your utility function is such that maximizing expected utility at time t1 doesn’t maximize expected utility at time t2?” (I thought I wrote this in the post somewhere, but can’t see it: the way I’d explore this is from the perspective of “a utility function is isomorphic to a description of betting preferences that satisfy certain constraints, so when we talk about a utility function like that, what betting preferences are we talking about?” Feels like the kind of thing someone’s likely already explored, but I haven’t seen it if so.)