Bayesian traction
A few years ago I worked on a startup called Premium Poker Tools as a solo founder. It is a web app where you can run simulations about poker stuff. Poker players use it to study.
It wouldn’t have impressed any investors. Especially early on. Early on I was offering it for free and I only had a handful of users. And it wasn’t even growing quickly. All of this is the opposite of what investors want to see. They want users. Growth. Revenue.
Why? Because those things are signs. Indicators. Signal. Traction. They point towards an app being a big hit at some point down the road. But they aren’t the only indicators. They’re just the ones that are easily quantifiable.
What about the fact that I had random people emailing me, thanking me for building it, telling me that it is way better than the other apps and that I should be charging for it? What about the fact that someone messaged me asking how they can donate? What about the fact that Daniel Negreanu—perhaps the biggest household name in poker—was using it in one of his YouTube videos?
Those are indicators as well. We can talk about how strong they are. Maybe they’re not as strong as the traditional metrics. Then again, maybe they’re stronger. Especially something like Negreanu. That’s not what I want to talk about here though. Here I just want to make the point that they count. You’d be justified in using them to update your beliefs.
Still, even if they do count, it may be simpler to ignore them. They might be weak enough, at least on average, such that the effort to incorporate them into your beliefs isn’t worth the expected gain.
This reminds me of the situation with science. Science says that if a study doesn’t get that magical p < 0.05, we throw it in the trash. Why do we do this? Why don’t we just update our beliefs a small amount off of p = 0.40, a moderate amount off of p = 0.15, and a large amount off of p = 0.01? Well, I don’t actually know the answer to that, but I assume that as a social institution, it’s just easier to draw a hard line about what counts and what doesn’t.

Maybe that’s why things work the way they do in startups. Sure, in theory the random emails I got should count as Bayesian evidence and update my beliefs about how much traction I have, but in practice that stuff is usually pretty weak evidence and isn’t worth focusing on.
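Just to make the graded-updating idea concrete, here’s a rough sketch in code. The likelihood ratios are numbers I made up for illustration, not anything I actually computed; the point is just that each piece of evidence multiplies your odds by some factor, so weak evidence nudges you a little and strong evidence nudges you a lot, with no hard cutoff anywhere.

```python
# Rough illustration of graded Bayesian updating (made-up numbers).
# Each piece of evidence gets a likelihood ratio:
# P(evidence | app becomes a big hit) / P(evidence | app flops).
# Instead of a hard "counts / doesn't count" line, every piece shifts the odds a bit.

def update_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

evidence = {
    "thank-you email": 1.3,                # weak signal, small nudge
    "offer to donate": 1.5,
    "Negreanu uses it on YouTube": 4.0,    # strong signal, big nudge
}

odds = 0.05  # prior odds that the app becomes a big hit (roughly 1 in 21)
for name, likelihood_ratio in evidence.items():
    odds = update_odds(odds, likelihood_ratio)
    prob = odds / (1 + odds)  # convert odds back to a probability
    print(f"after {name}: P(big hit) = {prob:.2%}")
```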
In fact, it’s possible that the expected value of incorporating it is negative. That you’d expect it to do you more harm than good. To update your beliefs in the wrong direction, on average. How would that be possible? Bias. Maybe founders are hopelessly biased towards interpreting everything through rose-colored glasses and will inevitably arrive at the conclusion that they’re headed to the moon if they are allowed to interpret data like that.
That doesn’t feel right to me though. We shouldn’t just throw our hands in the air and give up. We should acknowledge the bias and update our beliefs accordingly. For example, you may intuitively feel like that positive feedback you got via email this month is a 4/10 in terms of how strong a signal it is, but you also recognize that you’re biased towards thinking it is a strong signal, and so you adjust your belief down from a 4/10 to a 1.5/10. That seems like the proper way to go about it.