My first draft of this article included a formal definition of willpower, which I cut out because it didn’t really fit.
Since willpower is a purely mental construct, we have to define it relative to some psychological model. Unfortunately, we have many imprecise and conflicting models of how minds are structured, and they’re probably all at least somewhat wrong. Rather than define willpower in terms of a model that will probably be disproven, we should define it in a way that can be applied to any psychological model, even models of animals, aliens, or AIs. We also need to be careful to separate the definition from empirical observations, such as the observations that willpower tends to act like a fungible resource and tends to fail all at once rather than gradually, because these observations might not always hold.
For the sleep-willpower example, our model needs at least two parts: one which provides defaults, and one which overrides them. We’ll call the defaults system 1, and we’ll call the override system 2. Now consider a complete model of minds, with many parts that have various degrees of influence over each other. Use some centrality measure to designate one of these parts as the center. Most reasonable measures should put the center somewhere inside what we call “higher reasoning”, since that’s the one part of the human mind that can most directly influence every other part. Now take any pair of elements in the model that act in opposition to each other. Whichever is farther from the center is system 1; whichever is closer to the center is system 2; and whatever influence system 2 has over system 1, we call willpower. In general, we use “system 1” to mean “things that are far from the center” and “system 2” to mean “things that are close to the center”. Because different models may place things differently, whenever we use the terms system 1 and system 2, we assert that all reasonable models should agree on which is which.
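To make this construction concrete, here is a minimal sketch in Python. The toy influence graph, the part names, and the crude reach-based centrality measure are all illustrative assumptions rather than claims about real minds; any reasonable centrality measure, and any model’s own list of parts, could be dropped in instead.

```python
from collections import deque

# A hypothetical toy "mind" as a directed influence graph:
# an edge u -> v means part u can directly influence part v.
# The parts and edges here are illustrative assumptions only.
influence = {
    "higher_reasoning": ["planning", "impulse_override", "attention"],
    "planning": ["impulse_override"],
    "impulse_override": ["sleep_drive"],
    "attention": ["sleep_drive"],
    "sleep_drive": ["attention"],
}

def distances_from(graph, source):
    """BFS distances along influence edges, starting at source."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def center(graph):
    """One crude centrality measure: the part that can influence the
    most other parts, breaking ties by shorter total distance."""
    def score(node):
        d = distances_from(graph, node)
        return (len(d), -sum(d.values()))
    return max(graph, key=score)

def classify_pair(graph, a, b):
    """Of two opposed parts, the one closer to the center is system 2
    and the farther one is system 1. Returns (system_1, system_2)."""
    d = distances_from(graph, center(graph))
    far = float("inf")  # unreachable parts count as maximally far
    return (a, b) if d.get(a, far) >= d.get(b, far) else (b, a)

sys1, sys2 = classify_pair(influence, "sleep_drive", "impulse_override")
print(f"system 1: {sys1}, system 2: {sys2}")
# Willpower, on this definition, is whatever influence sys2 exerts on sys1.
```

On this toy graph the reach-based measure picks higher_reasoning as the center, and the sleep drive lands farther from it than the override machinery does, so the classification matches the sleep-willpower example above.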