Gradations of moral weight
Summary
Vagueness and gradations complicate the assignment of moral weights to potential moral patients and their interests.
I motivate and outline a general model of moral weights across possible moral patients to account for vagueness and gradations in the defining standards for being a moral patient or realizing a given kind of welfare, like realizing certain functions, say self-awareness, to specific degrees of sophistication, together with a precise method for calculating welfare (more).
I refine the model assuming maximizing expected choiceworthiness with intertheoretic comparisons of value and illustrate how subjective and poorly constrained expected moral weights can be on this model (more).
I argue both for and against the possibility that there are morally important standards humans don’t meet; neither possibility seems extremely unlikely (more).
Acknowledgements
Thanks to Brian Tomasik and Bob Fischer for feedback. All errors are my own.
Gradations of consciousness
The Windows Task Manager may be a model of a computer’s own “attention”, and so an attention schema, and so be conscious to some degree, meeting a requirement of Attention Schema Theory (Tomasik, 2014-2017),[1] but you need to squint, and it seems at best only sort of true. Nonhuman animals fall somewhere between the Windows Task Manager and humans, having far more sophisticated versions of the capacities of the Windows Task Manager, as well as capacities it lacks entirely, but still, for most species, apparently lacking higher-order thoughts and self-narratives, which are typical of humans. We may also have different standards for what counts as a belief for the purpose of belief-like preferences, or assign some special significance to the individual deciding how they judge their own life for global preferences (Plant, 2020).
There may be more than two degrees to which it can be the case that an animal or other system is conscious, has a (certain type of) welfare, or is otherwise a moral patient at all. There are multiple potentially defining features or functions — like higher-order thoughts, self-awareness of various kinds,[2] top-down attention control, beliefs, or effects on any of these — that may be independently present or absent or themselves realized to more than two degrees of sophistication or in different numbers. This would be like multiple dimensions of consciousness or moral patienthood and multiple gradations within each: multiple functions or capacities and multiple degrees of sophistication for each. For more references on and illustrations of gradualist accounts, see a previous piece and the following footnote.[3]
A graded model of welfare
For a given individual or system, it could therefore seem inappropriate to assign only a single number for any given type of welfare, whether felt desires, choice-based preferences, belief-like preferences or hedonic welfare. We could instead represent an individual’s realized welfare (or welfare range) not by a single number, but by a vector of them,
(x1,x2,…,xn),
one entry for each possible precise standard of consciousness or standard for that kind of welfare and its calculation.[4]
Each standard would be a combination of features and functions specified to a given degree, precisely enough (e.g. precisifications of vague or imprecise terms) and a precise method for assigning values to realized welfare, precise enough that we can assign one number unambiguously to the realized welfare if we knew all the facts about the system.[5] For example:
x3 could be the welfare value under a global preference view for a given individual at a given point in time (or over their life) according to a standard requiring they be able to decide how to judge their own life (Plant, 2020), under specific minimum requirements for what it means to decide and what counts as a global preference, and using the (individual-relative) standard gamble method with a given default option to calculate x3.
x4 could be the same, but under slightly higher minimum requirements for what it means to decide.
x5 could be the same, but using a 1-10 life satisfaction scale instead of the standard gamble.
x152 could be their felt desire welfare value according to a standard requiring an attention schema and felt desires, each to specified requirements and degrees of sophistication or complexity, and using some quantitative measure of motivational salience to calculate x152.
x2731 could be their hedonic welfare value according to a standard requiring a global workspace and hedonic welfare, each to specified degrees of sophistication or complexity, e.g. the number, complexity and sophistication of the processes the global states are broadcasted to, along with some method for calculating x2731.
And so on.
Then, we could compare the values entry-wise and say things like “X’s realized welfare is greater than Y’s realized welfare on some standard S,” and write xS>yS. And we could substitute the welfare range or capacity for welfare for realized welfare here.
When a given standard is otherwise not met for an individual or system, the corresponding welfare entry would have value 0 (or be empty). Humans plausibly realize more relevant features or functions and otherwise meet more standards than other animals, like more types and more sophisticated versions of self-awareness, so other animals would get more 0s in their vectors than humans at any moment. The number of 0s would also, of course, vary across nonhuman animal species. And other animals could in principle meet some standards that humans don’t, so humans could have 0s in entries for which those animals have nonzero values.
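As a rough sketch of this representation, here is a minimal Python illustration; the standard names and welfare values are hypothetical placeholders, not claims about any actual individuals:

```python
# A minimal sketch: one welfare entry per precise standard, with 0.0 for standards
# the individual doesn't meet. Standard names and values are hypothetical.

STANDARDS = [
    "global_preference_decide_v1",   # an x3-style standard
    "felt_desire_attention_schema",  # an x152-style standard
    "hedonic_global_workspace",      # an x2731-style standard
]

human_welfare = {
    "global_preference_decide_v1": 0.9,
    "felt_desire_attention_schema": 0.7,
    "hedonic_global_workspace": 0.8,
}
chicken_welfare = {
    "global_preference_decide_v1": 0.0,  # standard not met
    "felt_desire_attention_schema": 0.5,
    "hedonic_global_workspace": 0.6,
}

def greater_on(standard: str, x: dict, y: dict) -> bool:
    """Entry-wise comparison: is x's welfare greater than y's on this standard (xS > yS)?"""
    return x.get(standard, 0.0) > y.get(standard, 0.0)

for s in STANDARDS:
    print(s, greater_on(s, human_welfare, chicken_welfare))
```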
Note that a standard could be met multiple times in a brain, and vary in number across brains, like if there are conscious subsystems (Fischer, Shriver & St. Jules, 2023), and the standard itself could include the method for individuating and counting them (some ideas here). The standards are (parts of) normative views, to be combined with others, including moral views, to form a more complete normative stance.
Weighing the standards
Moral (or generally normative) uncertainty about which standard to apply can be captured as uncertainty about which entry of the vector to use. However, it need not be the case that they’re mutually exclusive when they do apply, so we could value multiple theories of welfare simultaneously on one view, or value, say, hedonic welfare across multiple hedonic welfare standards simultaneously on another view.
We could apply some real-valued function f to the vectors, like
f(x1,x2,…,xn),
to obtain a moral value to aggregate and compare. A weighted sum with positive weights, like
a1∗x1+a2∗x2+⋯+an∗xn,
could follow from standard arguments or representation theorems for expected utility theory, resulting in maximizing expected choiceworthiness (MacAskill et al., 2020), or from an application of Harsanyi’s utilitarian theorem (like Beckstead & Thomas, 2023, section 6), or from arguments about the separability of value across standards (e.g. substituting standards for individuals in Theorem 3 of Blackorby et al., 2002 and section 5 of Thomas, 2022). Each coefficient could be the product of the probability with which we apply the standard (which can sum past 100%) and its weight conditional on applying.
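As a minimal sketch of this aggregation in Python, with made-up standard names, probabilities and conditional weights purely for illustration:

```python
# Sketch of f as a weighted sum over standards. The probabilities and conditional
# weights below are made-up placeholders, not proposed values.

def coefficient(prob_apply: float, weight_if_applied: float) -> float:
    """a_i = probability of applying standard i times its weight conditional on applying."""
    return prob_apply * weight_if_applied

def moral_value(welfare: dict, coefficients: dict) -> float:
    """f(x_1, ..., x_n) = a_1*x_1 + ... + a_n*x_n, treating missing entries as 0."""
    return sum(a * welfare.get(standard, 0.0) for standard, a in coefficients.items())

coefficients = {
    "global_preference_decide_v1": coefficient(0.5, 2.0),
    "felt_desire_attention_schema": coefficient(0.8, 1.0),
    "hedonic_global_workspace": coefficient(0.9, 1.5),
}
welfare = {
    "felt_desire_attention_schema": 0.5,
    "hedonic_global_workspace": 0.6,
}  # no entry for the first standard: it isn't met, so it contributes 0

print(moral_value(welfare, coefficients))  # 0.8*0.5 + 1.35*0.6 = 1.21
```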
However, any specific function or set of coefficients would (to me) require justification, and it’s unclear that there can be any good justification. This is effectively the problem of intertheoretic (reason) comparisons. There are probably no or very limited facts of the matter about how to weigh standards against one another. We could find some standards extremely improbable, but still assign them basically any relative moral weight compared to relatively probable ones anyway. For example, if we pick a common standard to compare the others to, it seems arbitrary which one we pick, and that choice can have enormous influence under uncertainty about the relative moral weight of the standards (Tomasik, 2013-2018, Karnofsky, 2018). We can’t avoid the problem by just being uncertain about which common standard to use.
I argued in a previous piece for weighing relative to the still vague and uncertain human-relative standard, which we could model as a random variable over the standards. However, there are actually also multiple distinct such vague and uncertain human-relative standards, not just one, so we’d still have the problem of weighing across them.
In that case, to weigh across groups of standards with no common scale, another approach to moral uncertainty that doesn’t depend on such arbitrary but important choices may be preferred, like Open Philanthropy’s worldview diversification approach (Karnofsky, 2018), variance voting (MacAskill et al., 2020), moral parliaments (Newberry & Ord, 2021), a bargain-theoretic approach (Greaves & Cotton-Barratt, 2019), or the Property Rights Approach (Lloyd, 2022). In my view, all of these alternatives are superior because they don’t depend on such apparently arbitrary weights[6] and are less fanatical, despite their own problems and despite arguments for weighing across standards based on norms of instrumental rationality. For more discussion of these issues and of moral uncertainty generally, see MacAskill et al., 2020.
Credences across standards may barely constrain relative moral weights
Suppose chickens and humans respectively met 80% and 100% of the standards by credence, with all uncertainty coming from which standard to apply, not uncertainty about whether they met any particular standard. Suppose further that for each standard chickens met, they had the same welfare range as humans. It wouldn’t follow that their expected moral weights are 80% as large as humans’.[7] In fact, it could be arbitrarily close to 0, or arbitrarily close to 100% of humans’.
We could give chickens tiny expected moral weights relative to humans, in case humans get almost all of ours from the other 20% of standards chickens don’t meet. For example, consider two standards, labeled 1 and 2, and we only apply standard 2 20% of the time, and we apply standard 1 the other 80% of the time.[8] Suppose chickens don’t meet standard 2, but otherwise have the same welfare ranges as humans. Using c’s for chickens and h’s for humans to define their welfare ranges, c1=h1=1, but c2=0 and h2=1. Then, it could be the case that a1, which reflects the 80% and the moral weighting factor for standard 1, b1, is much smaller than a2, which reflects the 20% and the weight factor for standard 2, b2. Say a1=0.8∗b1=1 and a2=0.2∗b2=1,000,000. Then, the expected moral weight of the average chicken would be
a1∗c1+a2∗c2=1,
while the expected moral weight of the average human would be
a1∗h1+a2∗h2=1,000,001,
far greater than the chicken’s.
Note, however, that we’ve assumed that chickens meet standard 2 with probability exactly 0. With a probability p > 1⁄1,000,000 of meeting it, the expected moral weight of the average chicken would be about p times that of the human, so, for example, even a 1% probability would bring the chicken to about 1% of the human’s. Given the other assumptions, unless we are extremely confident that the average chicken doesn’t meet a given standard, we can bound their expected moral weight from below by a non-negligible fraction of the average human’s.
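A short script reproducing this arithmetic, including the sensitivity to the probability p that chickens meet standard 2 (the coefficients are the illustrative ones above, not estimates):

```python
# Reproduces the two-standard example: a1 = 0.8*b1 = 1 and a2 = 0.2*b2 = 1,000,000,
# with welfare range 1 on every standard met.

a1, a2 = 1.0, 1_000_000.0

def expected_weight(p_meets_std1: float, p_meets_std2: float) -> float:
    """Expected moral weight given the probabilities of meeting each standard."""
    return a1 * p_meets_std1 + a2 * p_meets_std2

human = expected_weight(1.0, 1.0)           # 1,000,001
chicken_p0 = expected_weight(1.0, 0.0)      # 1, if the probability for standard 2 is exactly 0
chicken_p1pct = expected_weight(1.0, 0.01)  # p = 1% for standard 2

print(chicken_p0 / human)     # ~1e-6: negligible relative to the human
print(chicken_p1pct / human)  # ~0.01: about 1% of the human's expected moral weight
```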
Alternatively, assuming again that chickens meet 80% of the standards by credence, and we have no uncertainty about which, we could give the average chicken >99.99% of the expected moral weight of the average human, in case the other 20% of standards have very little weight, e.g. with a1=0.8∗b1=1,000,000 and a2=0.2∗b2=1.
And if we took b1=b2=b>0, then the average chicken would have 80% of the expected moral weight of the average human:
0.8∗b∗c1+0.2∗b∗c2=0.8∗b
vs
0.8∗b∗h1+0.2∗b∗h2=0.8∗b+0.2∗b=b
Or, if we thought that, for each standard, chickens were 80% likely to meet it,[9] and that, conditional on meeting it, they had the same welfare ranges as humans, then the average chicken’s expected moral weight would again be 80% of the average human’s:
E[C]=a1∗E[C1]+a2∗E[C2]+…+an∗E[Cn]
=a1∗0.8∗E[H1]+a2∗0.8∗E[H2]+⋯+an∗0.8∗E[Hn]
=0.8∗(a1∗E[H1]+a2∗E[H2]+⋯+an∗E[Hn])
=0.8∗E[H]
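The following sketch reproduces both of these cases; in the second, the 0.8 ratio holds whatever the (here randomly generated, arbitrary) coefficients:

```python
import random

# Case 1: equal per-standard weights (b1 = b2 = b > 0); the chicken meets standard 1
# only, the human meets both, so the chicken gets 0.8 of the human's expected weight.
b = 3.0  # any positive value; the ratio doesn't depend on it
chicken = 0.8 * b * 1 + 0.2 * b * 0
human = 0.8 * b * 1 + 0.2 * b * 1
print(chicken / human)  # 0.8

# Case 2: for each standard, the chicken meets it with probability 0.8 and, conditional
# on meeting it, has the same expected welfare range as the human. Then E[C] = 0.8*E[H]
# regardless of the coefficients a_i.
a = [random.uniform(0.1, 10.0) for _ in range(5)]  # arbitrary positive coefficients
h = [random.uniform(0.5, 2.0) for _ in range(5)]   # humans' expected welfare ranges E[H_i]
expected_human = sum(ai * hi for ai, hi in zip(a, h))
expected_chicken = sum(ai * 0.8 * hi for ai, hi in zip(a, h))
print(expected_chicken / expected_human)  # 0.8, up to floating point error
```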
In the above examples, we can roughly think of a2 as the additional weight above and beyond that already granted by a1. Other discussions sometimes weight by some increasing function of degree of sophistication or complexity, sometimes an increasing function of the number of neurons (Tomasik, 2013-2018, Shulman, 2015 and Tomasik, 2016-2017). To capture this as a special case in my general weighted sum model, the coefficients I use would be the derivatives (or finite differences) of such a function.[10] For example, both humans and chickens meet the standard of suffering with at least 100 million neurons, while humans meet the standard of suffering with at least 80 billion neurons, but chickens don’t (List of animals by number of neurons—Wikipedia). Humans would get extra weight for meeting the standard for having at least 80 billion neurons. And we can imagine other possible moral patients with even more neurons meeting higher standards that humans don’t.
Are there morally important standards humans don’t meet?
(This section builds on and is inspired by this comment by David Mathers.)
However, are there really any morally relevant standards that humans don’t meet and that are worth granting much or any weight?
The main reason I separate all of the standards is that we haven’t settled on what exactly welfare, consciousness or moral patienthood mean, or what they mean seems vague. For many animals or other systems, it will be unclear if they’re moral patients or the degree to which they are moral patients in part because of this uncertainty or vagueness about the standards themselves, not just about what the animals are actually doing. On the other hand, it doesn’t seem uncertain or vague whether or not you — the reader — can suffer, because we’re defining what it means to suffer in reference to humans. Adding more neurons or functions or complexity to your brain wouldn’t make it more true that you can suffer, or experience any kind of welfare already accessible to you. This would be like asking whether something could be more truly water than (liquid) H2O is.[11] We’ve settled on defining water as (liquid) H2O. And we’ve settled on defining suffering in reference to humans, but we’re still working out the details. Our revisions will not exclude humans.
And when I consider chickens, fishes, insects, C. elegans or the Windows Task Manager, my reservations in attributing suffering to them are not because they just need more neurons doing the same things they’re already doing to make it more true that they can suffer. It’s because, for some functions that turn out to be necessary for suffering, they either don’t realize them or you’d need to squint to say that they do. It could be only sort of true that they suffer. Neuron counts may serve as an indirect proxy for degrees to which the relevant functions are realized, but I’m skeptical that we should give much weight to standards directly specifying neuron counts, although I’m not sure enough to assign only negligible credence and weight to this possibility.[12]
And if we’re weighing across standards (which we may choose not to do), a weighted sum of functions mapping systems to their welfare ranges will increase with neuron counts if any of them are more likely to apply with more neurons and none are less likely to apply with more neurons (or those that are less likely to apply are outweighed by those that are more likely to apply with more neurons). This is true no matter how little weight we give to such standards, as long as it’s nonzero, although their influence will of course decrease with decreasing weight. Even if we aren’t taking a weighted sum, we’d have more reason to prioritize those with more neurons, all else equal, on most plausible approaches to normative uncertainty.
It could also be the case that there are different kinds of welfare inaccessible to humans that could be accessible to other animals, aliens or artificial systems, in case they realize some important functions that we don’t realize or that we only realize vaguely. We might imagine a type of welfare beyond those I’ve previously outlined, other than hedonic states, felt desires, belief-like preferences and choice-based preferences. Or, we might imagine aliens or AI more sophisticated than us with more sophisticated versions of them, like their own concept of a-suffering, to which they assign moral value. They could assess the degree to which humans can a-suffer at all and find it vague. I suspect this shouldn’t look like them just having more neurons doing the same things our brains already do, or even realizing the same functions more often,[13] but the arguments don’t seem strong enough to outright dismiss the possibility.
So, it seems we should grant the possibility that their a-suffering is more important than our suffering. On the other hand, perhaps they only find it vague whether we a-suffer or that there are worthy standards we don’t meet because they don’t have access to our mental states or other valuable mental states that fall short of their concept of a-suffering. If they did, they would directly compare and trade off their a-suffering and human-like suffering intrapersonally without having to weigh them according to different standards. Similarly, if we had access to their a-suffering, we would do the same. We would all then realize there are no important standards humans don’t meet, and we should dismiss such standards now. This seems reasonably likely, but I’m not overwhelmingly confident in it.
The right response here may be to assign at least a non-negligible probability — e.g. at least 1% — to the possibility that there are important standards other beings can meet but humans don’t. And 1% could be too small.
Again, none of this is to say that other beings can’t have greater moral weight even if the only valuable standards are standards humans meet. Some beings may have greater welfare ranges or meet the same standards more times than us, whether at a time or over time — through longer lives or meeting the standards more often.
To better weigh standards humans typically meet but other animals don’t, we can investigate humans with the relevant functions disrupted, whether permanently by brain lesions (brain damage) or temporarily with drugs or transcranial magnetic stimulation (Bolognini & Ro, 2010), although there may be unavoidable confounding with capacities for report between humans and nonlinguistic animals. We could imagine aliens or AI doing the same for standards they meet but we don’t, and without confounding with capacities for report.
It’s not clear whether the proponents of Attention Schema Theory (AST) would accept the Windows Task Manager as conscious to any degree, because it’s not the fact of just having an attention schema itself that makes a system conscious under AST. Graziano (2020) writes:
AST does not posit that having an attention schema makes one conscious. Instead, first, having an automatic self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. Second, the reason why such a self-model evolved in the brains of complex animals, is that it serves the useful role of modeling attention.
However, about an experiment with a very simple artificial agent with an attention model trained with reinforcement learning, Wilterson and Graziano (2021) write:
Given all of that discussion, does the agent have consciousness or not? Yes and no. It has a simple version of some of the information that, in humans, may lead to our belief that we have subjective consciousness. In that sense, one could say the agent has a simple form of consciousness, without cognitive complexity or the ability to verbally report. Our hope here is not to make an inflated claim that our artificial agent is conscious, but to deflate the mystique of consciousness to the status of information and cognitive operations.
So, we could substitute this agent instead for the illustration, but the Windows Task Manager is probably more familiar to the average reader.
DeGrazia (2019, 2012) distinguishes bodily agential self-awareness, introspective self-awareness, narrative self-awareness and social self-awareness.
For more on gradualist accounts, in particular on illusionism or antirealism about consciousness or authors endorsing such positions, see discussion by Tomasik (2014-2017, various other writings here), Muehlhauser, 2017 (sections 2.3.2 and 6.7), Frankish (2023, 51:00-1:02:25), Dennett (2018, p.168-169, 2019, 2021, 1:16:30-1:18:00, Rothman, 2017), Dung (2022) and Wilterson & Graziano, 2021. Godfrey-Smith (2020, 2023) also supports gradualism, and Birch (2020, pp.33-34) proposes the possibility of graded versions of Global Workspace Theory.
I find Dennett particularly illustrative. About Dennett, Rothman (2017) writes:
He regards the zombie problem as a typically philosophical waste of time. The problem presupposes that consciousness is like a light switch: either an animal has a self or it doesn’t. But Dennett thinks these things are like evolution, essentially gradualist, without hard borders. The obvious answer to the question of whether animals have selves is that they sort of have them. He loves the phrase “sort of.” Picture the brain, he often says, as a collection of subsystems that “sort of” know, think, decide, and feel. These layers build up, incrementally, to the real thing. Animals have fewer mental layers than people—in particular, they lack language, which Dennett believes endows human mental life with its complexity and texture—but this doesn’t make them zombies. It just means that they “sort of” have consciousness, as measured by human standards.
In this panel discussion, Dennett seemed confident that chickens and octopuses are conscious, directly answering that they are without reservation, and “yes” to bees being conscious after hesitation. For bees, he said “consciousness isn’t an all-or-nothing thing”, “it’s not the light is on or the light is off,” and “It’s not a question of my confidence, it’s a question of how minimal the consciousness is, and whether it deserves to be called consciousness at all.”
And again, Dennett (2019):
To appreciate what I see to be Chalmers’ second contribution, we first need to distinguish two different illusions: the malignant theorists’ illusion and the benign user illusion. Chalmers almost does that. He asserts: ‘To generate the hard problem of consciousness, all we need is the basic fact that there is something it is like to be us’ (2018, p. 49). No, all we need is the fact that we think there is something it is like to be us. Dogs presumably do not think there is something it is like to be them, even if there is. It is not that a dog thinks there isn’t anything it is like to be a dog; the dog is not a theorist at all, and hence does not suffer from the theorists’ illusion. The hard problem and meta-problem are only problems for us humans, and mainly just for those of us humans who are particularly reflective. In other words, dogs aren’t bothered or botherable by problem intuitions. Dogs — and, for that matter, clams and ticks and bacteria — do enjoy (or at any rate do not suffer from) a sort of user illusion: they are equipped to discriminate and track only some of the properties in their environment.
On the other hand, Dennett (2018, p.168-169) also wrote:
I have long stressed the fact that human consciousness is vastly different from the consciousness of any other species, such as apes, dolphins, and dogs, and this “human exceptionalism” has been met with little favor by my fellow consciousness theorists.
and
“Thoughts are expressible in speech,” he writes (p. 155), but what about the higher-order thoughts of conscious animals? Are they? They are not expressed in speech, and I submit that it is a kind of wishful thinking to fill the minds of our dogs with thoughts of that sophistication. So I express my gratitude to Rosenthal for his clarifying account by paying him back with a challenge: how would he establish that non-speaking animals have higher-order thoughts worthy of the name? Or does he agree with me that the anchoring concept of consciousness, human consciousness, is hugely richer than animal consciousness on just this dimension?
And again in 2020, p.4:
I have claimed that the differences between human consciousness and the consciousness of any other species are so great that they hardly deserve to be called variations of a single phenomenon. My notorious proposal has been that human consciousness is to animal consciousness roughly as human language is to birdsong; birdsong is communicative, but calling it a language is seriously misleading.
There could be infinitely many of them, too, and possibly even a continuum of them, say denoting degrees of sophistication with real numbers or vectors of them.
Or a distribution or expected value, accounting for our quantified uncertainty about the system itself.
However, there may still be arbitrariness involved, e.g. in assigning credences across views, and that may be inevitable. Still, all else equal, it seems better to have better-justified views and less arbitrariness.
Also, it could turn out that some weights can be justified, if only between subsets of standards even if not all together. However, I’m skeptical.
For a more formal and abstract statement: Even if xS=yS in welfare ranges for the standard S and xT=yT for the standard T, xS and xT could get vastly different moral weights. Similarly, yS and yT could get vastly different moral weights, and so could S and T generally. Specifically, in the weighted sum, aT could be far larger than aS.
Or more, if not mutually exclusive.
To be clear, I don’t find this plausible. There will be some that chickens will almost definitely meet, some that chickens will almost definitely not meet, and many about which we should be more uncertain either way.
Suppose the moral weight function is f, a function of the number of neurons, N. Then, assuming f(0)=0,
f(N) = ∑_{n≥1} [N≥n]∗(f(n)−f(n−1)) = ∑_{n≥1} [the system meets standard Sn]∗an,
where [P]=1 if P is true and 0 otherwise, as in Iverson bracket notation. The sum telescopes: the terms with n>N are 0, and the rest cancel to leave f(N)−f(0)=f(N). The standard Sn is met if and only if N≥n, and an=f(n)−f(n−1) is the weight given to that standard.
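A small sketch of this decomposition, with an arbitrary illustrative choice of f (the square root of the neuron count, not an endorsed weighting), checking that summing the per-standard weights an over the standards met recovers f(N):

```python
import math

# Standards S_n = "has at least n neurons", with weights a_n = f(n) - f(n-1).
# Summing a_n over the standards met telescopes back to f(N).

def f(n: int) -> float:
    return math.sqrt(n)  # any increasing function with f(0) = 0 would do

def weight_from_standards(N: int) -> float:
    # sum over n >= 1 of [N >= n] * (f(n) - f(n-1)); only n <= N contribute
    return sum(f(n) - f(n - 1) for n in range(1, N + 1))

N = 1_000
print(f(N), weight_from_standards(N))  # equal (up to floating point), by telescoping
```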
Quantities of water can differ, but that’s more like the welfare range or number of moral patients according to given standards. We might also say some water is more valuable than other water due to its form, e.g. ice vs liquid water.
It could also be the case that welfare ranges themselves or the number of moral patients on a given standard do scale with the number of neurons, e.g. bigger brains might actually realize greater intensities of welfare (against this, see Shriver, 2022 and Mathers, 2022), have more subsystems that can realize welfare simultaneously that should be counted and added (Fischer, Shriver & St. Jules, 2023), or otherwise realize morally important functions more often than smaller brains. This is a separate issue from which standards are met at all.
But those could matter for welfare ranges or the number of times the standards are met, as above.