“Arbitrary”
Followup to: Inseparably Right; or, Joy in the Merely Good, Sorting Pebbles Into Correct Heaps
One of the experiences of following the Way is that, from time to time, you notice a new word that you have been using without really understanding. And you say: “What does this word, ‘X’, really mean?”
Perhaps ‘X’ is ‘error’, for example. And those who have not yet realized the importance of this aspect of the Way, may reply: “Huh? What do you mean? Everyone knows what an ‘error’ is; it’s when you get something wrong, when you make a mistake.” And you reply, “But those are only synonyms; what can the term ‘error’ mean in a universe where particles only ever do what they do?”
It’s not meant to be a rhetorical question; you’re meant to go out and answer it. One of the primary tools for doing so is Rationalist’s Taboo, when you try to speak without using the word or its synonyms—to replace the symbol with the substance.
So I ask you therefore, what is this word “arbitrary”? Is a rock arbitrary? A leaf? A human?
How about sorting pebbles into prime-numbered heaps? How about maximizing inclusive genetic fitness? How about dragging a child off the train tracks?
How can I tell exactly which things are arbitrary, and which not, in this universe where particles only ever do what they do? Can you tell me exactly what property is being discriminated, without using the word “arbitrary” or any direct synonyms? Can you open up the box of “arbitrary”, this label that your mind assigns to some things and not others, and tell me what kind of algorithm is at work here?
Having pondered this issue myself, I offer to you the following proposal:
A piece of cognitive content feels “arbitrary” if it is the kind of cognitive content that we expect to come with attached justifications, and those justifications are not present in our mind.
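This proposal can be rendered as a toy sketch (my own illustration, not anything from the post; the predicate and the justification pool are made-up stand-ins for whatever the brain actually does):

```python
# Toy sketch of the proposal: content feels "arbitrary" when it is the
# kind of content we expect to come with justifications, and no
# justification is present in the pool.

def feels_arbitrary(content, expects_justification, pool):
    """Return True iff `content` is expected to carry a justification
    and `pool` (mapping content -> list of justifications) has none."""
    return expects_justification(content) and not pool.get(content)

# Hypothetical example: a very precise probability estimate demands
# justification; this pool has none attached to it.
expects = lambda c: "6.203e-23" in c
pool = {"P(life on Mars) = 6.203e-23": []}
print(feels_arbitrary("P(life on Mars) = 6.203e-23", expects, pool))  # True
```

Attach a justification to the same content and the label disappears, which is the whole point: the sensation tracks the absence of an expected something, not a property of the content in isolation.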
You’ll note that I’ve performed the standard operation for guaranteeing that a potentially confusing question has a real answer: I substituted the question, “How does my brain label things ‘arbitrary’?” for “What is this mysterious property of arbitrariness?” This is not necessarily a sleight-of-hand, since to explain something is not the same as explaining it away.
In this case, for nearly all everyday purposes, I would make free to proceed from “arbitrary” to arbitrary. If someone says to me, “I believe that the probability of finding life on Mars is 6.203 × 10⁻²³ to four significant digits,” I would make free to respond, “That sounds like a rather arbitrary number,” not “My brain has attached the subjective arbitrariness-label to its representation of the number in your belief.”
So, as it turns out, having answered the question “What is ‘arbitrary’?” does not affect the way I use the word ‘arbitrary’; I am just more aware of what the arbitrariness-sensation indicates. I am aware that when I say, “6.203 × 10⁻²³ sounds like an arbitrary number”, I am indicating that I would expect some justification for assigning that particular number, and I haven’t heard it. This also explains why the precision is important: I would question that particular number, but not someone saying “Less than 1%”. In the latter case, I have some idea of what might justify such a statement; but giving a very precise figure implies that you have some kind of information I don’t know about; either that, or you’re being silly.
“Ah,” you say, “but what do you mean by ‘justification’? Haven’t you failed to make any progress, and just passed the recursive buck to another black box?”
Actually, no; I told you that “arbitrariness” was a sensation produced by the absence of an expected X. Even if I don’t tell you anything more about that X, you’ve learned something about the cognitive algorithm—opened up the original black box, and taken out two gears and a smaller black box.
But yes, it makes sense to continue onward to discuss this mysterious notion of “justification”.
Suppose I told you that “justification” is what tells you whether a belief is reasonable. Would this tell you anything? No, because there are no extra gears that have been factored out, just a direct invocation of “reasonable”-ness.
Okay, then suppose instead I tell you, “Your mind labels X as a justification for Y, whenever adding ‘X’ to the pool of cognitive content would result in ‘Y’ being added to the pool, or increasing the intensity associated with ‘Y’.” How about that?
“Enough of this buck-passing tomfoolery!” you may be tempted to cry. But wait; this really does factor out another couple of gears. We have the idea that different propositions, to the extent they are held, can create each other in the mind, or increase the felt level of intensity—credence for beliefs, desire for acts or goals. You may have already known this, more or less, but stating it aloud is still progress.
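The gear just factored out can also be sketched concretely (again my own toy model, with a made-up inference rule; no claim that real cognition looks like this): X justifies Y whenever adding X to the pool makes Y appear, or raises Y’s intensity.

```python
# Toy model: a "pool" maps cognitive content to intensity
# (credence for beliefs, desire for acts or goals).

def is_justification(x, y, update):
    """X justifies Y iff adding X to an empty pool and running one
    round of inference raises Y's intensity relative to not adding X.
    `update(pool) -> pool` is the inference step."""
    before = update({})
    after = update({x: 1.0})
    return after.get(y, 0.0) > before.get(y, 0.0)

# Hypothetical inference rule: "wet streets" raises credence in "rain".
def update(pool):
    pool = dict(pool)
    if pool.get("wet streets", 0.0) > 0.5:
        pool["rain"] = pool.get("rain", 0.0) + 0.7
    return pool

print(is_justification("wet streets", "rain", update))  # True
print(is_justification("wet streets", "snow", update))  # False
```

Note that “justification” here is relative to the particular update rule: a differently structured mind, running a different `update`, would label different things as justifications, which is exactly the point made at the end of the post.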
This may not provide much satisfaction to someone inquiring into morals. But then someone inquiring into morals may well do better to just think moral thoughts, rather than thinking about metaethics or reductionism.
On the other hand, if you were building a Friendly AI, and trying to explain to that FAI what a human being means by the term “justification”, then the statement I just issued might help the FAI narrow it down. With some additional guidance, the FAI might be able to figure out where to look, in an empirical model of a human, for representations of the sort of specific moral content that a human inquirer-into-morals would be interested in—what specifically counts or doesn’t count as a justification, in the eyes of that human. And this being the case, you might not have to explain the specifics exactly correctly at system boot time; the FAI knows how to find out the rest on its own. My inquiries into metaethics are not directed toward the same purposes as those of standard philosophy.
Now of course you may reply, “Then the FAI finds out what the human thinks is a ‘justification’. But is that formulation of ‘justification’, really justified?” But by this time, I hope, you can predict my answer to that sort of question, whether or not you agree. I answer that we have just witnessed a strange loop through the meta-level, in which you use justification-as-justification to evaluate the quoted form of justification-as-cognitive-algorithm, which algorithm may, perhaps, happen to be your own, &c. And that the feeling of “justification” cannot be coherently detached from the specific algorithm we use to decide justification in particular cases; that there is no pure empty essence of justification that will persuade any optimization process regardless of its algorithm, &c.
And the upshot is that differently structured minds may well label different propositions with their analogues of the internal label “arbitrary”—though only one of these labels is what you mean when you say “arbitrary”, so you and these other agents do not really have a disagreement.
Part of The Metaethics Sequence
Next post: “Is Fairness Arbitrary?”
Previous post: “Abstracted Idealized Dynamics”
A related sense of “arbitrary”, which is common in math and CS, is “could be anything, and will probably be chosen specifically to annoy you”.
wikipedia on nets:
What I came up with: a decision or belief is arbitrary if it is not caused by the factors that would be expected to cause that sort of decision or belief. This reduction has the nice quality that it also explains arbitrary variable choices in mathematics—for example, if you are trying to show that your compression algorithm gets good results on arbitrary data (heh), then it is data that was not, as might otherwise be expected, chosen to play well with your compression algorithm.
“Friendly AI”? What’s one of those then? Friendly—to whom?
Or, in other words, an arbitrary statement is one you won’t accept as (an influence on) your own belief, one for which you can’t trace the causal history back to its referent, given what you currently know. If you see a documentary about anthropomorphic aliens on TV, it is a fact about the documentary-making process, not about aliens; the message of this documentary can’t be dereferenced.
It would really rock if you could show the context in which someone used the word “arbitrary” but in a way that just passed the recursive buck.
Here’s where I would use it:
[After I ask someone a series of questions about whether certain actions would be immoral]
Me: Now you’re just being arbitrary!
Eliezer Yudkowsky: Taboo “arbitrary”!
Me: Okay, he’s deciding what’s immoral based on whim.
Eliezer Yudkowsky: Taboo “whim”!
Me: Okay, his procedures for deciding what’s immoral can’t be articulated in finitely many words to a stranger such that he, and the stranger using his articulated morality, yield the same answers to all morality questions.
I’ll send my salary requirements if you want. ;-)
Like Larry, I’m more used to hearing the word mean something like “It could be otherwise without making a difference to the point I’m trying to get across”.
I stopped to answer your definitional questions while reading. I defined “arbitrary” as “some variable in a system of justifications that could be anything and be equally justified regardless of what it is,” and “justification” as follows: for actions, the belief that the justified action will directly or indirectly further the cause of the utility function in terms of which it is defined, and will do so more effectively than any other action; for beliefs, the belief that the justified belief reflects the territory in the most accurate way possible. (I hope I’m not passing the buck here.)
I’d say that’s close, but too specific. A better definition would be “there are no rational reasons to select this position rather than any of the alternatives” or “there are no rational restrictions on the possibility space from which the selection is chosen randomly”.
An arbitrary decision is one not founded in objective, shared principles and rules of logic that permit derivations to be made from them.
How about ‘content that we expect to come with attached justifications, and those justifications are not present in the mind of the person putting forth the content.’
“Let’s treat my car as a point mass of seven grams.”
You may suspect that 7 is an arbitrary number having nothing to do with my car. If I said 683 kg then you might think it is near the actual mass of my car. In neither case do you have justification. If I tell you I don’t have a car, then you KNOW it’s arbitrary, because it’s a number I made up. Your label of arbitrariness depends on where you think I got the number.
When you look at the large car, the number is plausible, so you need less evidence (minimal justification) to believe (or at least fail to challenge) the number. With the small number, extraordinary evidence would be needed (quite a lot of justification), so the number appears more arbitrary.
As for “morals”, I like Theodore Sturgeon’s definition from “More Than Human”:
Morals are society’s rules for individual survival. Ethics are the individual’s rules for society’s survival.
Then there is Heinlein’s definition of “love”:
Love is when the happiness of another is essential for your own.
http://en.wikipedia.org/wiki/List_of_cognitive_biases
Thought someone might be interested.
(I’m catching up, so that’s why this is posted so far after the original.)
When I attempted this exercise I tried to think of how I use the word “arbitrary” and came up with a definition along the lines of “Something is arbitrary if its choice from a set makes no difference to the veracity of a particular statement”, i.e. arbitrary is a 2-part function, taking as input a choice and a statement, since without a statement to evaluate against calling something arbitrary to me just looks like membership.
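That two-place notion can be sketched in code (my own toy rendering, not the commenter’s; the example predicates and choice sets are invented): a choice from a set is arbitrary relative to a statement when swapping it for any alternative leaves the statement’s truth value unchanged.

```python
# Toy rendering of "arbitrary" as a 2-part function: it takes a
# statement to evaluate and the set of choices, and reports whether
# the choice makes any difference to the statement's truth value.

def arbitrary_choice(statement, choices):
    """True iff the statement's truth value is identical for every
    element of `choices`, i.e. the choice makes no difference."""
    return len({bool(statement(c)) for c in choices}) <= 1

# "Let n be an arbitrary even number" in a claim about evenness:
print(arbitrary_choice(lambda n: n % 2 == 0, [2, 4, 6]))  # True
# But the same choice is not arbitrary for this statement:
print(arbitrary_choice(lambda n: n > 3, [2, 4, 6]))       # False
```

Without the statement argument, as the comment says, calling a choice “arbitrary” collapses into mere set membership.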
But then I read on and realized that I was being too narrow in what I considered to be arbitrary. Perhaps from too much mathematical training, I didn’t even think of the common use as described above. This is a subtle kind of error to watch out for: taking a technical term that happens to have the same spelling and pronunciation as a non-technical term, and applying the definition of the technical term back to the non-technical one. The effect is either that you confuse other people, because you use a technical term that looks like a non-technical one, or that you confuse yourself by misunderstanding what people mean when they use the term in a non-technical sense. This sort of thing becomes a bigger problem, I reckon, as you become more and more specialized in a field with lots of technical language.
I notice that notions like “arbitrary” and “justified” tend to pay close attention to context, and that it would be easy to confuse the contexts or get indignant when someone uses the idea of justification in a larger or different context than the one that you thought was the one at issue. I can think a Green Babyeater has a less arbitrary and more justified position than a Blue Babyeater in Babyeater politics and not think my use of arbitrariness and justification to analyze that situation is weird. Of course, if you thought we were talking about Green and Blue humans, this could lead to trouble. Thus...
This is true, of course, but humans seem to automatically be able to re-scale the concept of ‘arbitrary’ and ‘justified’ across larger and smaller contexts, and contexts where ‘morality’ or ‘truth’ aren’t even relevant. So long as there is some kind of pattern. (0000000000000000000001000000… is pretty arbitrary, no? Maybe the ‘1’ was an unjustified addition, or a moral error, or something...)
I should note that “arbitrary” might be somewhat reduced by trying to extract it not by thinking about the causal chain that led to a belief or value, but about what that causal chain implies about logical facts about the universe (‘pattern attractors’, like ferns growing fractally), how those logical facts would constrain counterfactual or non-local causal chains, and how that relates to the notion of an abstract idealized dynamic in contrast to our plain old causal history. Timeless validity instead of or at least in addition to causal validity.