Learning-Intentions vs Doing-Intentions
Epistemic Status: In truth, only a slight repackaging of familiar ideas with a new handle I’ve found myself wanting. See The Lean Startup and Riskiest Assumption Testing for other resources.
Suppose you are Bob Steele, structural engineer extraordinaire, and you’ve recently completed your doctoral thesis in advanced bridge aerodynamics. You see how a new generation of bridge technology could significantly improve human welfare. Bridges are not as direct as bed nets or cash transfers, but improved transport infrastructure in developing regions boosts economic productivity, which flows through to healthcare, education, and other life-improving services. There’s no time to waste. You found Bridgr.io, put the hard in hardware startup, and get to work bringing your revolutionary technologies to the world.
Common advice is that startups should have a few core metrics which capture their goals, help them track their progress, and ensure they stay focused. For Bridgr.io, those might reasonably be revenue, clients, and number of bridges built. There is a danger in this, however.
Although Bridgr.io’s ultimate goal is to have built bridges in the right places, the most pressing tasks are not construction tasks. They’re research tasks: refining the designs and the construction process. Until Bridgr.io hits on a design which works and can be scaled, there is no point sourcing steel and construction workers for a thousand bridges. The first step should be building a sufficient number of test and prototype bridges (or simulations), not with the goal that these bridges will transport anyone, just with the goal of learning.
Phase 1: Figure out what to do and how to do it.
Phase 2: Do it.
It’s true that if Bridgr.io tries to build as many bridges as possible as quickly as possible, they will learn along the way what works and what doesn’t; R&D will happen automatically. But I claim that the kind of learning that happens as a byproduct of trying to do the thing (prematurely) is often inefficient, ineffective, and possibly lethal to your venture.
Superficially, building bridges to have bridges and building bridges to figure out which bridges to build both involve building bridges. Yet in the details they diverge. If you’re trying to do the thing, you often spend your time mobilizing enough resources for the all-out effort. You throw everything you’ve got at it, because that’s what it would take to build a thousand bridges all over the globe. Acting to learn is different. Rather than scale, it’s about taking carefully selected, targeted actions to reduce uncertainty. You don’t seek a contract for fifty bridges; instead you focus on building three crazy different designs to help you test your assumptions.
Among other things, the value of information can decline rapidly with scale. If you can build five bridges, then as far as the fundamentals go, you can build fifty. And scaling your current process doesn’t necessarily test the uncertainty that matters: perhaps building fifty bridges in the United States doesn’t test the viability of building them in Central Africa. If you were building to learn, you’d build a couple here and a couple there.
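As a minimal sketch of the diminishing-returns point, with made-up numbers: suppose each test bridge is an independent trial of whether a design holds up at all, with a Beta(1, 1) prior on the design’s success rate (the model and figures are mine, purely for illustration). Each additional successful test then shrinks the remaining uncertainty by less than the one before it.

```python
# Toy model (made-up numbers): how much does each additional test bridge
# reduce our uncertainty about whether a design works at all?
# Assumes a Beta(1, 1) prior on the design's success rate and that every
# test so far has succeeded.
import math

def posterior_std(successes: int, failures: int) -> float:
    """Standard deviation of a Beta(1 + successes, 1 + failures) posterior."""
    a, b = 1 + successes, 1 + failures
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

for n in [0, 1, 2, 5, 10, 50]:
    print(f"{n:>2} successful tests -> remaining uncertainty (std) {posterior_std(n, 0):.3f}")

# Uncertainty falls steeply over the first handful of tests and only creeps
# down between test 10 and test 50: most of the information comes from the
# early, small-scale builds.
```

The numbers are arbitrary, but the shape isn’t: once the fundamentals are settled, bridge number fifty teaches you almost nothing that bridge number ten didn’t.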
The mistake I see, and the motivation for this post, is many people skipping over the learning phase, or trying to smush it into the actual doing. They seek to maximize their metrics now rather than first investing in figuring out what it is they really should be doing, and what will work at all. The mistake is always operating with a doing-intention when really a learning-intention is needed first.
Doing-Intention
You’re building a bridge because you want a bridge. You want a physical outcome in the world. You’re doing the actual thing.
Learning-Intention
You’re building a bridge because you’re trying to understand bridges better. It’s true that ultimately you want an actual physical bridge, but this bridge isn’t for that. This bridge is just about gaining information about what doesn’t fall down.
In the context of Effective Altruism
I have some concern that this error is common among those doing directly altruistic work. If, like Bob Steele, you believe that your intervention could be helping people right now, then it’s tempting to want to ramp up production and just do the good thing. Every delay might result in the loss of lives. When the work is very real, it’s hard to step back and treat it like an abstract information problem. (Possibly the pressures are no weaker in the startup world, but that realm might benefit from stronger cultural wisdom exhorting people not to scale before they have “product-market fit.”)
Possible causes of this error-mode
Why do people make this class of mistake? A few guesses:
The pressure to present results now. Donors, funders, and employees especially want to see something for the time and money invested.
The dislike of uncertainty. It’s more comfortable to decide to fully run with a plausibly good Plan A, whose likelihood of success you can trump up, than to stay in limbo while you test Plans A, B, and C.
The underestimation of how much uncertainty remains even after early evidence suggests a plan or direction might be a good idea. As an example, a company I once worked for spent over a year pursuing a misguided strategy because, using it, they had landed one large deal with what turned out to be an atypical client.
Although people have the notion of an experimental mindset and the value of information, they fail to adopt an experimental/research mindset once certainty rises above a certain level. People think of conducting experiments when they don’t know whether something will work at all, but not when the overall picture looks promising and what remains is implementation details. For instance, if I have a program to distribute bed nets, I might have 75% credence that it will do a lot of good, even if I’m uncertain about just how much good, what my opportunity costs are, and the true best way to implement it. At the point of 75% confidence (or much less), I might stop thinking of my program as experimental and fall into a maximizing, doing-intention. Show everyone them big results. (A toy calculation below illustrates how much value this can leave on the table.)
This is lethal if your goals are extremely long-term with minimal feedback, e.g. for long-termist effective altruists. There will be many plausibly good things to do, but if you scale up prematurely by turning your experiments into all-out interventions, then you might either miss far greater opportunities or fail to implement your intervention in a way that works at all on the long-term scale.
Community feedback can also push in the wrong direction. People looking in from outside an EA project will approve of efforts to do good backed by a decent plausibility story for effectiveness. After that, scale and certainty are probably perceived as more impressive than an array of small-scale experiments and a list of uncertainties.
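To put rough numbers on the 75%-credence example above (these figures are made up purely for illustration): even when Plan A looks likely to work, committing to it now can forgo most of the available value compared with cheaply testing a few plans and then scaling the best one.

```python
# Toy calculation (made-up numbers): committing now to a plan that is
# "75% likely to work" vs. running cheap tests of several plans and then
# scaling whichever turns out best.
import random

random.seed(0)

PLANS = {
    # plan: (probability it works at all, impact if it works, in arbitrary units)
    "A": (0.75, 100),
    "B": (0.40, 400),
    "C": (0.30, 600),
}

# Option 1: commit all resources to Plan A now.
ev_commit_a = PLANS["A"][0] * PLANS["A"][1]

# Option 2: run cheap tests of all three plans first, then scale the best
# one that works. Assume (generously) the tests fully reveal which plans
# work, at a total cost of 10 units.
def ev_test_then_scale(trials: int = 100_000, test_cost: float = 10.0) -> float:
    total = 0.0
    for _ in range(trials):
        working = [impact for p, impact in PLANS.values() if random.random() < p]
        total += (max(working) if working else 0.0) - test_cost
    return total / trials

print(f"Commit to Plan A now:      EV ~ {ev_commit_a:.0f}")
print(f"Test A, B, C, then scale:  EV ~ {ev_test_then_scale():.0f}")
# With these made-up numbers, testing first comes out several times better
# in expectation, even though Plan A alone already looked "75% likely to
# do a lot of good".
```

The exact numbers don’t matter; the point is that 75% confidence in one plan tells you little about how it compares to the alternatives you never tested.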
Final caveat: the perils of Learning-Intention
As much as I’m advocating for them here, there are of course a great many perils associated with learning-intentions too. Learning can easily become divorced from real-world goals, and picking the right actions to learn the information you actually need is no small challenge. Faced with a choice between a degenerate doing-intention and a degenerate learning-intention, I think I would pick the former, given that it is more likely to have empiricism on its side.
I’ve been in several startups that died this way. They were hardware startups; there was a big jump between research tools (relatively cheap, flexible, low throughput, expert-labor-intensive) and limited production tools (10X more expensive, much less flexible, 100X higher throughput, capital intensive). If you buy production tools before your process really works, you lose flexibility to further develop your process. The loss of flexibility is often fatal.
Excellent post!
Here’s some devil’s-advocacy that comes to mind. You say that learning can easily become divorced from real-world goals, and that picking the right actions to learn the information you actually need is no small challenge.
Suppose that you adopt the “learning mindset”, and undertake some learning-focused actions. As you say, there’s a danger of “lost purposes”—but this can actually manifest in multiple, importantly different, ways!
One version of that failure mode is simply continuing to learn, indefinitely, without ever doing anything. (This, arguably, is much of modern academia.)
Suppose you avoid that failure mode, and, having learned something, you declare a victory. Fine; but how do you know that what you’ve learned is of any use? How do you know it’s not just nonsense? (This, arguably, is also much of modern academia.)
The solution seems obvious: if you think you’ve learned something, switch to “doing mindset”, and do the thing, applying what you’ve learned. If your learning was worth anything, then your doing will bear that out. Right?
Well, that may be true if what you’ve learned was about how to do the thing. But what if the key questions, and the ones which you were (or should have been!) most interested in, were not how to do the thing, but which thing(s) to do, and how to evaluate what you’ve done, and other, trickier, less practical (but more globally impactful) questions?
Then you may think you’ve learned something useful, and do things on that basis, but actually what you’ve learned is either wrong, or, more insidiously, not enough (cf. “a little knowledge is a dangerous thing”). If you’d’ve kept learning, you’d’ve discovered that; but you were in a hurry to do…
Thus it seems to me that “learning mindset” must perpetually thread this needle—between “how do I know I’ve learned anything”, and “how do I know I’ve learned enough”. And it is difficult to say whether “doing mindset” will suffice to keep you on that straight and narrow path…
In the lean startup paradigm, the idea is that you have assumptions about your bridge project before you start it.
After you build the bridge, you find out through empirical feedback which assumptions turned out to be right and which were wrong.
Once you have empirical evidence that the assumptions are correct, you go out and fundraise, arguing that you now have evidence that your assumptions for Bridgr.io hold.
If investors are convinced that you have learned that the company’s assumptions are sound, they will consider it a good investment and pour money in to allow you to scale up.
Thanks for the link! Sorry to change from the term “mindset” to “intention” on you.
Though, like you said, I’ve heard of these ideas in startup land before, I found your post particularly lucid. Last spring when I tried a TAP a week, I had the learning-intention and also had a hard time articulating that.
I notice there’s also an uncomfortable sort of suffering I experience when I approach a task/project/goal that is fundamentally a learning/explore objective, but I think of it as a doing/exploit one. It feels like I get hyper-focused on the outcome/production, and if I don’t get the one I want, I dismiss thoughts of “Well, you learned something along the way!” as grasping at straws/justification.
“learned something along the way” is the wrong level. Specify what you learned and make a conscious evaluation of whether that knowledge has value in future production. Search/exploit is fractal and recursive: you’re searching for search strategies while executing such strategies to search for production knowledge. Turtles all the way down.
Agreed. When I used to think of “learning something along the way”, it was a very passive sort of framing. I wasn’t able to think of search/exploit as a very active, “fractal and recursive” activity.
I like the identification of the different things you get from a bridge (learning and transport), but I believe success for a startup (or for any endeavor) comes from _NOT_ separating the goals. You must find a way to excel at both goals, and at the goals you haven’t stated, like showing potential partners and customers that you’re worth investing in (a goal that probably overrides all others during some phases of your life/project).
Building each bridge must be done in a way that lets you learn details along the way and modify your plan accordingly, and then apply that learning to the next bridge, when you can make bigger changes. Outrageous ideas (high-risk, high-reward) can be simulated or tried in small/unimportant ways, generally funded by the success of prior projects. Which means you have to have successful projects before you can take risks.
This recursive strategy (do some small/safe good, use the rewards to fund bigger/harder/riskier good, use those rewards for still bigger things, etc.) applies at almost every level, from individual to company to industry to civilization.
Your approach is probably appropriate for a software startup, but it’s horrible for a bridge. If your first bridge collapses and kills a bunch of people, your company won’t survive to build bridge 2. Some industries tolerate failure better than others, and a company needs to set its risk tolerances accordingly.
Wait. I’m advocating _NOT_ experimenting very much with your first bridge—build a fairly standard one first. You will learn a lot in doing so, and end up with a bridge that works. My point is that you have to both produce and learn at the same time, not do one then the other.
The intended meaning of the post is that there can be “producing in order to produce” and “producing in order to learn”. Producing to learn might involve very real producing, but the underlying goal is different. You might be trying to get real investment from real investors, but the goal could be (a) receiving the money or (b) testing your assumptions about whether you can raise successfully.
In practice, I think you’re right that sometimes (or often) both intentions are necessary. You need to get users both to learn and to survive. Still, the two intentions trade off against each other and it’s possible to forget about one or the other. My primary recommendation is to be aware and deliberate about your intentions so that you have the right ones at the right time in the right amount.
This seems a lot like “shut-up and multiply” at the meta-level.
Also borrowing from start-up culture, there is a concept closely related to what you describe, called de-risking. Importantly, that is framed in terms of financial risk rather than utility risk, but if we are talking about hardware the two should track pretty closely; I would be very surprised if you found a reliable and scalable bridge design which somehow did not improve the returns on investment. The biggest difference I see between them is that utility-risk space is not under the same time pressures as finance-risk space.
I liked the distinction you make. Restating: very similar activities will often have different targets, and so we need to view the same actions from very different viewpoints. That said, I don’t think it’s a binary, but more a continuum. For a successful, ongoing, and growing enterprise, more than just understanding the distinction is needed.
I think that was clearly implied, but I didn’t really see it developed in the discussion, and I think it should be. The organization will not just be in a “learning” phase of existence or a “doing” phase, but both throughout its life. There needs to be a third leg that mediates between the two philosophically different viewpoints to create the real synergies between them that a successful enterprise will need.