Your Prioritization is Underspecified
If you were really convinced that the next task on your to-do list was the very best thing to advance your goals, you'd feel a lot more interested in it. Some part of you believes it is; other parts very obviously don't. Thus, uncertainty about how to prioritize. My impression is that various systems for prioritization get much of their power from addressing some core ambiguity, and that people self-sort into those systems based on whether that ambiguity was a key bottleneck for them. This post isn't about outlining another set of antidotes; it merely maps out some (not necessarily all) of the kinds of ambiguity involved.
Short-term goals have nice legible feedback loops with yummy rewards. Long-term goals face an uphill battle, but potentially offer much better reward-to-effort ratios if that battle is successfully fought. Presumably, if our throughput were high enough and our incoming task rate low enough, we wouldn't need to rearrange our queue at all and a simple FIFO scheme would do. In practice there are always lots of opportunities rolling in, and the pileup only gets worse the better you get at prioritizing, since people put more opportunities on your plate as you get more done. So we're stuck with prioritizing. Let's sketch some key fronts.
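To put a standard queueing-theory gloss on that intuition (my addition, not part of the original post): with tasks arriving at rate λ and being completed at rate μ, FIFO only stays viable while utilization is below one.

```latex
% Utilization of a single-server queue (standard result, illustrative here):
\rho = \frac{\lambda}{\mu}
% If \rho < 1, the backlog stays bounded and FIFO ordering is mostly harmless.
% If \rho \ge 1, the backlog grows without bound, and choosing *which* task
% to serve next becomes the whole game.
```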
Two types of ambiguity: risk and uncertainty
When I use the term ambiguity in this post, I'll be referring to both risk and uncertainty as potential roadblocks. Risk is within-model ambiguity. Uncertainty is outside-of-model ambiguity. If I ask you to bet on a coin flip, you'll model the odds as 50:50, and your downside risk is that 50 percent chance of loss. That model doesn't include things like 'the person I'm betting with pulls out a gun and shoots me while the coin is in the air.' The broader context within which the risk model is situated deals with uncertainty, including uncertainty over whether your model is correct at all (weighted coins). Most of the other categories below could be further broken down along the risk/uncertainty dimension, but that is left as run-time optimization in the interests of brevity.
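A toy numeric illustration of the split (my sketch; every probability here is made up): risk is the spread inside a fixed model, while one tractable slice of uncertainty can be represented as doubt over which model applies.

```python
# Toy sketch of risk vs. uncertainty (illustrative numbers, not from the post).

# Risk: ambiguity *within* the model. Fixed model: fair coin, bet pays +1 / -1.
p_heads_fair = 0.5
ev_fair = p_heads_fair * 1 + (1 - p_heads_fair) * -1   # 0.0

# Uncertainty: ambiguity *about* the model. Maybe the coin is weighted.
p_model_is_fair = 0.9      # assumed credence that the 50:50 model is right
p_heads_weighted = 0.2     # assumed heads probability if the coin is weighted
ev_weighted = p_heads_weighted * 1 + (1 - p_heads_weighted) * -1  # -0.6

ev_overall = p_model_is_fair * ev_fair + (1 - p_model_is_fair) * ev_weighted
print(ev_overall)  # -0.06: doubt about the model shifts the bet's value
```

The 'pulls out a gun' scenario is the part no mixture over coin models prices in; that residue is the genuinely out-of-model uncertainty.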
Between-task ambiguity
There are a few ways we are already prioritizing, and confusion about which one would be best in a given situation can itself serve as a roadblock. (A toy sketch expressing several of these as sort keys follows the list.)
In First-Due prioritization we simply do whatever has the nearest deadline.
In Longest-Chain prioritization we prioritize whichever task will take the longest time or has the largest number of subtasks to get done.
In Shortest-Chain prioritization we want to shrink the total list as much as possible, so we knock out all the shortest tasks first.
In Most-Salient prioritization we allow the vividness and emotional immediacy of tasks to serve as the goad.
In Most-Likely-Failure prioritization we look for tasks with a step we are highly uncertain about and see if we can test that step first, because if it fails we may be able to throw out the whole task and thus increase total throughput.
In Most-Reusable prioritization we focus on those tasks whose partial or complete solutions will be most useful in completing multiple other tasks. This might also be thought of as a subtype of Highest-Information-Gain.
In Expected-Value prioritization we focus on those tasks with potentially the biggest payoffs, presumably creating resources for engaging with other tasks. This might sound like the best option until we realize we've only pushed the problem one level up the stack: there are different sorts of value payoffs, and our marginal utility for a given resource, and our ability to convert between sorts of value, may themselves be changing over time.
Due to the well-known effects of loss aversion, it's also worth naming a commonly encountered subtype, Expected-Loss prioritization, with catastrophization as a further subtype focusing on the chance of being wiped out (often overprioritized because of the Most-Salient consideration).
Many people default to a strategy of Delay, and it is worth pointing out that conceptualizing this as mere character failure prevents us from identifying the benefit the strategy provides: it converts complex prioritization problems into simpler ones. Analysis of dependencies and choice of heuristics collapses to 'Who will be angry with me soonest if I don't do X?', a sort of mix of First-Due and Most-Salient. Many of the problems people bring up in discussions of akrasia involve situations in which these strategies caused obvious harms that a different prioritization heuristic could have alleviated.
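Here is the promised sketch: a few of the above heuristics expressed as sort keys over a hypothetical task record. All the field names (deadline, subtasks, salience, expected_value) are my illustrative assumptions, not anything from the post.

```python
# Minimal sketch: prioritization heuristics as sort keys (illustrative only).
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Task:
    name: str
    deadline: datetime
    subtasks: list = field(default_factory=list)
    salience: float = 0.0        # vividness / emotional immediacy, 0..1
    expected_value: float = 0.0  # rough payoff estimate

def first_due(tasks):            # nearest deadline first
    return sorted(tasks, key=lambda t: t.deadline)

def shortest_chain(tasks):       # fewest subtasks first, to shrink the list
    return sorted(tasks, key=lambda t: len(t.subtasks))

def longest_chain(tasks):        # most subtasks first
    return sorted(tasks, key=lambda t: len(t.subtasks), reverse=True)

def most_salient(tasks):         # loudest task first
    return sorted(tasks, key=lambda t: t.salience, reverse=True)

def expected_value_order(tasks): # biggest estimated payoff first
    return sorted(tasks, key=lambda t: t.expected_value, reverse=True)
```

The point of writing them this way is that each heuristic is just a different total order imposed on the same list; between-task ambiguity is precisely not knowing which ordering to impose.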
Within-task ambiguity
Ambiguity about an individual task serves as additional activation energy needed to engage with it. One easy way of surveying this ambiguity is to ask the journalist questions: who, what, where, when, why, how. To these we might add a couple of less well-known ones that press for additional kinds of specificity:
‘Which’ as a drill-down step when the answers to any of our other questions are too general to be of use. ‘Who does this affect?’ ‘College students.’ ‘Which ones?’ This points to us usually having some tacit-intuitive sense of the appropriate scope of a given task or subtask, a sense that may or may not be well calibrated.
‘Whence’ (from what place?) as a sort of backwards-facing ‘why’, accounting for ambiguity about where a task came from and whether we made our job harder when we stripped the task of that context in recording it.
See also the Specificity Sequence.
Goal-relevance ambiguity
Techniques like goal factoring are intended to collapse some of the complexity of prioritization by encouraging an investigation of how sub-tasks contribute to high level values and goals. I see three pieces here.
Task-Outcome ambiguity involves our lack of knowledge about what the real effects of completing a task will be.
Instrumental-Goal ambiguity deals with our lack of knowledge about how well our choice of proxy measures, including goals, will connect to our actual future preferences. An example of a dive into a slice of this region is the Goodhart Taxonomy.
Part-Whole Relation ambiguity deals with our lack of knowledge of the necessity/sufficiency conditions along the chain from individual actions to longer-term preference satisfaction.
Meta: Ambiguity about how to deal with ambiguity
A few different things here.
What are we even doing when we engage with ambiguity in prioritization? One possible answer is that we are continually turning a partially ordered set of tasks into a more totally ordered set, up to the limit of how much order our ‘good enough’ heuristics need in order to avoid catastrophic losses. (A toy sketch of this framing follows this list.) There are probably other answers that illuminate different aspects of the problem.
Ambiguity about the correct level of abstraction to explore/exploit on. When trying to do our taxes, instead of getting anything done we might write a post about the structure of prioritization. :[
Risk aversion as different from uncertainty aversion. Feels like there’s potentially a lot to unpack there.
Motivational systems, whether rational, emotional, psychological, ethical, etc. as artificial constraints that make the size of the search space tractable.
Attacking ambiguity aversion directly as an emotional intervention. What are we afraid of when we avoid ambiguity, and what is the positive thing that part is trying to get for us? There is likely much more here than just ‘cognition is expensive’, and this post itself could be seen as generating the space to forgive oneself for having failed in this way, since the problem is much more complex than we might have given it credit for.
Ambiguity as a liquid that backs up into whatever system we install to manage it. Sure, you could deploy technique X that you learned in order to prioritize better (GTD! KonMari! Eisenhower Matrices!), but that would be favoring the tasks you deploy it on over other tasks, and there's ambiguity about whether that's a good idea. Related to the ambiguity about the correct level to explore/exploit on, as well as to aether variables, bikeshedding, and wastebasket taxa, i.e. moving uncertainty around to hide it from ourselves when we don't know how to deal with it.
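Here is the sketch promised above: one way to cash out the partial-order-to-total-order framing. The dependency graph is a made-up example, and Python's standard-library graphlib does the ordering.

```python
# Toy sketch (my framing): prioritization as refining a partial order of
# task dependencies into one of its many compatible total orders.
from graphlib import TopologicalSorter

# Map each task to the tasks it depends on (illustrative example).
dependencies = {
    "file taxes": {"gather receipts", "find W-2"},
    "gather receipts": set(),
    "find W-2": set(),
    "write post about prioritization": set(),
}

# static_order() yields *one* valid total order. Everything the dependency
# structure leaves unconstrained (e.g. where the blog post goes) is exactly
# the slack that the heuristics above have to fill in.
print(list(TopologicalSorter(dependencies).static_order()))
```

The ‘good enough’ point from the list is visible here: we rarely need one canonical order, just enough ordering that nothing with a hard dependency or deadline gets trampled.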
Concrete takeaways
I said this was more of a mapping exercise, but if people were to take only a couple of things away from this post, I think I'd want them to be:
1. Ask ‘Which’ and ‘Whence’ more often as a trailhead for all sorts of disambiguation.
2. Don't write tasks vertically down the page; write them across the page so there's room underneath to unroll the details of each task. You may object that this only gives you room for three or four tasks. This is a feature, not a bug.
and finally,
This article is a stub, you can improve it by adding additional examples
Related and strongly recommended: Research as a Stochastic Decision Process
^ This helped me make a big spreadsheet of personal project tasks… only to get bogged down in probability estimates and tasks-conditional-on-other-tasks and goals-that-I-didn’t-want-to-change-despite-evidence-that-they-would-be-surprisingly-hard-to-do.
Sooooo I ended up just using my spreadsheet as “to-do list, plus deadlines, plus probability guesses”. The final sorting was… basically just by deadline. And it didn’t really work for personal projects, so I just used it for schoolwork anyway.
(I was recommended the article later, and I do still think it's useful, just not needle-movingly useful for my issues. And I read it before this post, so.)
Yeah, nice highlight on how reuse of partial results is probably a big chunk of how we do sample-complexity reduction (and/or partition search spaces) via the granularity of our tacit reference-class forecasting.
In particular, prioritization involves negotiation between self-parts with different beliefs/desires, which is a tricky kind of cognition. A suboptimal outcome of negotiation might look like the Delay strategy.
Ah, good point. A part that doesn't like the action another part wants to take might accept delay as a second-best acquiescence if it can't get its way.
I think there is a critical piece that could be added about satisficing over strategies. 'Satisficing' here would mean giving a bit of thought to each priority structure and to the tasks, finding the thing that is highest value based on that first pass, and just doing that without spending lots of time on meta-level concerns.
The above post also ignores heuristic approaches to VoI (Value of Information). I wrote about that here, but the short answer is: spend five seconds thinking about whether there is information that would change your decision; if there is and it's very easy to get, get it; if it's hard to get but very valuable, then read the post.
I agree that meta-level concerns shouldn't come in for the vast majority of decisions. I think I would claim, though, that not spending some concentrated time on meta concerns (e.g. learning KonMari) leaves one in ongoing confusion about akrasia.
Agreed.
Related: Expected-Information / Value-of-Information / Most-Ambiguous prioritization.
I'd like to see how different prioritization systems cater to different needs. I imagine systems like GTD or FVP are best for people with many small tasks, but that's about as far as my guess goes. Anybody with a good overview of systems want to offer a guess?
I pattern-matched many of your between-task ambiguities to the different types of scheduling algorithms that can occur in operating systems.
Yes, the issue with schedulers is that they are optimized for splitting time between tasks that have chunking built in, and that kind of context switching carries big overhead for humans.