Sorry, I am developing my ideas in the process of the discussion and I probably have amended and changed my position several times thanks to the debate with the LW community. The biggest problem is that I haven’t defined a clear set of vocabulary (because I haven’t had a clear position yet), so there is a lot of ambiguity and misunderstanding which is solely my fault.
Here is a short summary of my current positions. They may not result in a coherent system. I’m working on that.
1. Value system / morality is science
Imagine an occult Pythagorean who believes that only mathematical objects exist. So he/she wouldn’t understand the meaning of electrons and gravitational forces, because they cannot be fully expressed in mathematics. He/she would understand Coulomb’s law and Newton’s law of gravitation, but a physicist needs more than these mathematical equations to understand physics.
That is the difference between physicists and chemists on one side and mathematicians and string theorists (I have not the slightest idea about string theory, so regard this part as my modest attempt at humour) on the other side.
Analogously, you need to understand the value system in order to understand, and possibly predict, the actions of value agents (humans, animals, maybe AIs). Maybe the value system can be mathematised, maybe not.
But it would be a scientific explanation. I agree with you.
2. Something matters to me
We all have values. You asked whether the understanding of the value system has any external consequences, or whether the benefit is purely a state of mind. I wonder why it matters to you to know the difference.
You may answer that thinking of these problems makes you biologically fitter and if you don’t ask these questions, your kind will die out and those questions won’t be asked.
But when you asked the question, you did not consider your biological fitness. And if you did consider your biological fitness, then why does biological fitness matter to you? There is at least one thing that matters to you (assuming you are not a p-zombie), so at least the desire “something matters to me” is real, as real as your knowledge of the world.
Assuming you are not a psychopath, your only desire is not your own survival; being empathetic, you also desire the well-being of your fellow animals, humans and sentient beings. And you know that your fellow human beings are empathetic (or act as if they were empathetic) as well. Ergo you can establish an intersubjective consensus and some common ground on what the good is.
3. Epistemology
Mental phenomena are of different qualities than natural phenomena. A desire is more than neuronal processes. You may read all the books on neurobiology, but you may learn more about desires by reading a single book by Nabokov. (You may think that you don’t care; then please go back to point 2.) From here, continue with the text diagram.
1.
Imagine an occult Pythagorean who believes that only mathematical objects exist. So he/she wouldn’t understand the meaning of electrons and gravitational forces because they cannot be fully expressed in mathematics.
Are we assuming this hypothetical occult Pythagorean is aware of historically post-Pythagorean pure-mathematical concepts like Conway’s Game of Life?
It seems to me that electrons and gravitational forces can be fully expressed in mathematics. Since the equations describing their behavior are known, can’t the Pythagorean simply consider a mathematical object whose behavior is defined by those formulas?
I suspect that we have different definitions of “understand”, and that that is at the core of the debate. To me, understanding something is the same as being able to predict its behavior; you seem to have something additional in mind—some sort of qualia, perhaps—but I’m not sure what it is.
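To make the “mathematical object whose behavior is defined by those formulas” reading concrete, here is a minimal Python sketch (my own illustration; the constants are standard physical values, the names are mine). It treats the “electron” as nothing but a bundle of parameters fed into Coulomb’s law and Newton’s law of gravitation:

```python
# A minimal sketch of the "electron as a mathematical object" reading:
# the "object" is just a set of parameters plugged into Coulomb's law
# and Newton's law of gravitation. The point is the form of the equations,
# not the numerical precision.

K_E = 8.9875517923e9   # Coulomb constant, N*m^2/C^2
G   = 6.67430e-11      # gravitational constant, N*m^2/kg^2

def coulomb_force(q1, q2, r):
    """|F| = k_e * |q1*q2| / r^2"""
    return K_E * abs(q1 * q2) / r**2

def newton_gravity(m1, m2, r):
    """|F| = G * m1*m2 / r^2"""
    return G * m1 * m2 / r**2

# "Electron" as a bare parameter bundle (charge in C, mass in kg).
electron = {"charge": -1.602176634e-19, "mass": 9.1093837e-31}

r = 1e-10  # one angstrom, in metres
print(coulomb_force(electron["charge"], electron["charge"], r))  # ~2.3e-8 N
print(newton_gravity(electron["mass"], electron["mass"], r))     # ~5.5e-51 N
```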
But it would be a scientific explanation. I agree with you.
I’m sorry, I’ve lost track of context on this one, and I can’t figure it out even on rereading. What would be a scientific explanation, of what, under what circumstances?
2.
It may interest you to know that I consider p-zombies as capable of having things that matter to them. This suggests a mismatch of definition, probably of “matter”.
You asked whether the understanding of the value system has any external consequences, or whether the benefit is purely a state of mind. I wonder why it matters to you to know the difference.
I asked in order to determine whether, when you discussed whether a mind has such an understanding, you were making a distinction that mattered in reality, or whether you were just talking about qualia. I don’t believe in qualia, you see.
you can establish an intersubjective consensus and some common ground on what the good is.
This is, I think, the strongest and most interesting point you’ve put forward. It should be possible to establish a science of preference that can predict how people will choose on moral questions; once the models are sufficiently accurate, humanity should be able to formalize our definition of “good”. (Assuming, of course, that an intersubjective consensus exists. It could be that we just disagree about a bunch of stuff.)
3.
Mental phenomena are of different qualities than natural phenomena. A desire is more than neuronal processes.
No they’re not.
You may read all the books on neurobiology, but you may learn more about desires by reading a single book by Nabokov.
Agreed.
I think there’s an important difference between sensory perception and moral perception that you’re glossing over: sensory perception is perceiving something out there that exists to be perceived, while moral perception reports only on itself. If sensory perception is a window, moral perception is a painting.
+---------+    senses     +-----------------------+
| reality | ------------> | perception of reality |
+---------+               +-----------------------+

+-----------+      moral       +------------------------+
| evolution |    intuition     |                        |
|  of the   | ---------------> | perception of morality |
|   brain   |                  |                        |
+-----------+                  +------------------------+
1a.
An electron is not a mathematical object. If it were, then we wouldn’t need chemists and physicists, but only mathematicians. A mathematical object does not have any behaviour, just as a word in a language does not have any behaviour.
Mathematics and logic are tautological systems with defined symbols and operations. We use mathematics to describe the physical world, just as we use language to describe the moral world (value system), e.g. in behavioural biology and psychology.
Would you agree that the value system is as absolute as the physical world if we can mathematise our normative theories as much as we have mathematised our physical theories?
1b.
An electron is not a mathematical object. Let’s say, equation (1) describes the behaviour of an electron according to our current knowledge. Then you might say, the electron is a “mathematical object” contained in (1).
But what if an equation (2) is found that better describes an electron’s behaviour? What happens to the “mathematical object”?
2.
I consider p-zombies as capable of having things that matter to them
What is your definition of “something matters”? As in “it matters to a stone up in the air to fall down to earth”? In that case, our definitions vary.
You seem to be a logical positivist, which is an incomplete world view. If your mind works the same way as mine does, then you should know that qualia exist. It is as if you walked up to a tree, said “No, there is no tree in front of me,” and then sidestepped it.
3.
I would in principle agree with your diagram of moral intuition. Let me present two models:
++++++++++++++++++++++++++++++
A.
reality—senses—perception of reality
morality—moral intuition—perception of morality
B.
evolution of the brain—senses—perception of reality
evolution of the brain—moral intuition—perception of morality
+++++++++++++++++++++++++++++
Why do you cherry-pick from the two categories? Is it because science is more mathematical, has a methodology that is more precise, and has a greater intersubjective consensus? Why does any of these make reality real and morality relative?
For more information on model 3B, look up “evolutionary epistemology”.
Application case
I want to apply my theory of absolute morality to the design of Friendly AI.
Unless we can mathematise our value system, how can we make an AI friendly? We know Asimov’s Laws of Robotics, but these laws are given in the imprecise formulation of natural language. What do “injury”, “human being”, “robot”, “harm”, “obey”, “protection” and other words mean? The outcome of such ambiguities is the defining plot element of The Metamorphosis of Prime Intellect and “I, Robot” (2004 film).
My solution:
Design AIs with empathy and access to our intersubjective consensus of morality. If our current normative theories aren’t completely wrong, then the absolute good does not require the annihilation of the human species.
You might say that having empathy does not automatically make an AI good, because it may have a wrong normative theory.
Therefore, make many AIs, let them evolve and battle it out. The good will win against evil, because being moral, that is, having better normative theories, increases biological fitness. So the more moral an AI is, the greater its chance to survive.
This is redundant, but the point is important and I don’t want it to be overlooked because it’s buried at the bottom of a long comment.
Therefore, make many AIs, let them evolve and battle it out. The good will win against evil, because being moral, that is, having better normative theories, increases biological fitness. So the more moral an AI is, the greater its chance to survive.
An electron is not a mathematical object. If it were, then we wouldn’t need chemists and physicists, but only mathematicians.
Chemists and physicists tell us which mathematical objects we’re made out of. They used to think it was integers, but it turns out it wasn’t.
A mathematical object does not have any behaviour, just as a word in a language does not have any behaviour.
It has mathematical behavior. Words are not required to be well-defined. What distinction are you trying to make here?
Would you agree that the value system is as absolute as the physical world if we can mathematise our normative theories as much as we have mathematised our physical theories?
No, what follows from the hypothetical is that it would be possible to hold meaningful discussions about our normative theories, rather than just saying words. A theory can be rigorously well-defined and also wrong.
An electron is not a mathematical object. Let’s say, equation (1) describes the behaviour of an electron according to our current knowledge. Then you might say, the electron is a “mathematical object” contained in (1).
The electron seems to be the mathematical object contained in (1). We will later discover that this is wrong.
But what if an equation (2) is found that better describes an electron’s behaviour? What happens to the “mathematical object”?
(1) still “exists” (to the extent that mathematical objects “exist” independently in the Platonic World of Forms, which they don’t, but it’s a fairly useful approximation most of the time), but (1) is less useful to us now, so we don’t spend as much time talking and thinking about it.
What is your definition of “something matters”? As in “it matters to a stone up in the air to fall down to earth”? In that case, our definitions vary.
It has a utility function; that is, it acts so as to optimize some variable. A rock isn’t a very clever faller; it doesn’t really optimize in any meaningful sense. For example, a rock won’t roll up a two-foot ridge in order to be able to fall two hundred feet down the cliff on the other side.
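A toy illustration of this reading of “something matters to X” (entirely my own example, not from the thread): an agent that merely falls downhill versus one that optimizes over a short horizon and will climb a two-unit ridge to reach a much larger drop.

```python
# Toy illustration (my own example): "something matters to X" read as
# "X acts so as to optimize a variable". A rock only falls locally;
# a crude optimizer will climb a small ridge to reach a bigger drop.

# Heights along a 1-D line: a small ridge at index 2, a big drop at index 4.
terrain = [10, 10, 12, 10, -190]

def greedy_faller(pos):
    """Rock: move only if the immediate next step is downhill."""
    while pos + 1 < len(terrain) and terrain[pos + 1] < terrain[pos]:
        pos += 1
    return pos

def lookahead_optimizer(pos, horizon=4):
    """Minimize final height over all plans of up to `horizon` forward steps."""
    best = pos
    for steps in range(1, horizon + 1):
        end = min(pos + steps, len(terrain) - 1)
        if terrain[end] < terrain[best]:
            best = end
    return best

print(terrain[greedy_faller(0)])        # 10   -> stuck in front of the ridge
print(terrain[lookahead_optimizer(0)])  # -190 -> climbed the ridge to fall further
```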
You seem to be a logical positivist, which is an incomplete world view.
Not quite, but a near miss. I’m a reductionist.
If your mind works the same way as mine does, then you should know that qualia exist. It is as if you walked up to a tree, said “No, there is no tree in front of me,” and then sidestepped it.
Sure, qualia exist, in the same way that a car or a computer program exists. Qualia just don’t have a separate, fundamental existence independent of the mere atomic mechanism of the neurons in the brain of the person experiencing the qualia.
After the components of a car have been assembled, you don’t need to perform a ritual blessing over it to infuse the mere mechanism of engine and drive shaft and so forth with the ineffable essence of car-ness. It’s already fully possessed of car-ness, simply by virtue of the physical mechanisms that make it up.
Likewise, I don’t need an additional element—spirit, soul, elan vital, ontologically intrinsic morality, whatever—in order to infuse my brain with qualia. It’s already fully possessed of qualia, simply by virtue of the physical mechanisms that make it up. If I survive long enough to get uploaded, I fully expect my uploaded copies to have their own qualia.
Why do you cherry-pick from the two categories?
...no.
Let me try again:
+---------+      +--------+       +-----------------------+
| reality | ---> | senses | ----> | perception of reality |
+---------+      +--------+       +-----------------------+

+-----------+      +------------------------+
|   moral   | ---> | perception of morality |
| intuition |      +------------------------+
+-----------+
Unless we can mathematise our value system, how can we make an AI friendly? We know Asimov’s Laws of Robotics, but these laws are given in the imprecise formulation of natural language. What do “injury”, “human being”, “robot”, “harm”, “obey”, “protection” and other words mean? The outcome of such ambiguities is the defining plot element of The Metamorphosis of Prime Intellect and “I, Robot” (2004 film).
My solution: Design AIs with empathy and access to our intersubjective consensus of morality. If our current normative theories aren’t completely wrong, then the absolute good does not require the annihilation of the human species.
These are good and important questions. The correct answer is almost certainly “we’d better mathematize our value system, and we’d better get it right.”
Have you read “Coherent Extrapolated Volition”?
Therefore, make many AIs, let them evolve and battle it out. The good will win against evil, because being moral, that is, having better normative theories, increases biological fitness. So the more moral an AI is, the greater its chance to survive.
No. If you do that, everyone will die. “Good at killing other AIs” does not even remotely imply “morally good according to human values”. Morality is harder than that.
What if the coherent extrapolated volition is the death of all people, that is, the end of all volitions?
I guess we should do that then? I strongly expect that it won’t turn out that that’s the right thing to do, though, and it’s not what I had in mind when I said you’d kill everyone. I meant that the AI will care about the wrong thing, ignore human morality completely, and eat the world (killing everyone as an incidental side effect) even though it’s wrong according to human morals.
When I use the word morality, I certainly don’t mean any rules of conduct.
What is your definition of human morality?
Often, when I stop to think about a decision, I find that my desire changes upon reflection. The latter desire generally seems more intellectually coherent(*), and across multiple instances, the initial desires on various occasions are generally more inconsistent with one another while the after-reflection desires are generally more consistent with one another. From this I infer the existence of a (possibly only vague, partially specified, or partially consistent) common cause to the various instances’ after-reflection desires. This common cause appears to roughly resemble a bundle of heuristics that collectively approximate some sort of optimization criteria. I call the bundle of heuristics my “moral intuition” and the criteria they approximate my “morality”.
I suspect that other humans’ minds are broadly similar to mine in this respect, and that their moral intuitions are broadly similar to mine. To the extent that they correlate, we might call the set of common trends “human morality” or “humaneness”.
(*) An example of intellectual coherence vs. incoherence: Right now, I’d like to go get some ice cream from the freezer. However, on reflection, I remember that there isn’t any ice cream in the freezer at the moment, so walking over to the freezer would not satisfy the impulse that motivated the action.
What about the Baby-Eaters and the Super Happy People in the story Three Worlds Collide? Do they have anything you would call “humaneness”?
No.
Edit: Well, sort of. Some of their values partially coincide with ours. But one of the major themes of the story is that we should expect aliens to have inhumane value systems.
Physical and mathematical objects
Chemists and physicists tell us which mathematical objects we’re made out of. They used to think it was integers, but it turns out it wasn’t.
If the physical world can be fully reduced to mathematics, we don’t need chemists and physicists to tell us which mathematical objects we’re made out of. A mathematician would know that, unless there is something about an electron that cannot be fully reduced to mathematics.
We use mathematics to describe physical objects, but physical objects are not mathematical objects. We use languages to describe physical objects, but physical objects are not words. Why are things mathematical and not linguistic? Is it because the mathematical description yields better predictions?
Theories and what theories describe
Would you agree that the value system is as absolute as the physical world if we can mathematise our normative theories as much as we have mathematised our physical theories?
No, what follows from the hypothetical is that it would be possible to hold meaningful discussions about our normative theories, rather than just saying words. A theory can be rigorously well-defined and also wrong.
I think you are missing the point. A physical theory can be wrong; that, I guess, does not shake your belief in an absolute reality. A normative theory, even mathematised, can also be wrong, but why should it shake my belief in an absolute morality?
Reductionism
Not quite, but a near miss. I’m a reductionist.
I am fine with that. As long as you believe in qualia as you believe in cars and trees, we have a base we can work from, without bothering about the fundamentals too much. I think reductionism is wrong, but that’s not the discussion here.
From a reductionist point of view, the absolute morality would be a part of the absolute reality, with the mere difference that values have different qualities (no spatial extension, for example) than cars and trees.
Two models
Let me try again:
| reality | ---> | senses | ----> | perception of reality |
| morality | --> | moral intuition | ---> | perception of morality |
or
| senses | ----> | perception of reality |
| moral intuition | ---> | perception of morality |
Again, why is one model better than the other one?
We will die anyway.
No. If you do that, everyone will die. “Good at killing other AIs” does not even remotely imply “morally good according to human values”. Morality is harder than that.
It is not necessarily the case that evolution gets us better physical theories or normative theories. I was simply optimistic. It is possible that people believing in a spaghetti monster kill all rational people, just as it is possible that an AI has a wrong normative theory and thus kills all human beings. Or the absolute morality demands our death. Or maybe the LHC will create a black hole that kills us within 24 hours. In all cases, bad luck. We will die anyway. In the long run, the chance of us irreversibly dying at some point in time is greater than that of us living forever.
Concerning Coherent Extrapolated Volition
I would probably have saved a lot of discussion, had I read the article first (and learned of the rationalist taboo). :)
I think what Eliezer calls “coherent extrapolated volition” is what I call “absolute morality”. The “ability to extrapolate volition” is what I call “empathy”. I don’t agree with his goal that the “initial dynamic should implement the coherent extrapolated volition of humankind”, though. First, what is the definition of humankind? This is a core problem for the Prime Intellect in The Metamorphosis of Prime Intellect.
I think the goal of the initial dynamic should be “to extrapolate the volition of all entities that have or can express volitions.”
If the physical world can be fully reduced to mathematics, we don’t need chemists and physicists to tell us which mathematical objects we’re made out of. A mathematician would know that, unless there is something about an electron that cannot be fully reduced to mathematics.
Mathematics is a broad field, with many specialties. A mathematician could only know which mathematical objects correspond to electrons if they studied that particular question. And our name for a mathematician who specializes in studying the question of which mathematical objects correspond to electrons is… Particle Physicist.
A physical theory can be wrong; that, I guess, does not shake your belief in an absolute reality. A normative theory, even mathematised, can also be wrong, but why should it shake my belief in an absolute morality?
It shouldn’t, because this is a straw man, not the argument that leads us to conclude that there isn’t a single absolute morality.
If you read a physics or chemistry textbook, you’ll find a lot of words and only a few equations, whereas a mathematics textbook has many more equations, and its words serve to explain the equations. The words in a physics book, by contrast, explain not only the equations but also the phenomena that the equations describe.
However, I haven’t fully thought about reductionism, so do you have any recommendations for what I should read?
My current two objections:
1. Computational
According to our current physical theories, it is impossible to predict the behaviour of any system larger than a dozen atoms; see Walter Kohn’s Nobel Lecture (http://nobelprize.org/nobel_prizes/chemistry/laureates/1998/kohn-lecture.pdf). The computer that you would need to model the quantum states of a brain would be bigger than the universe. We could eventually have a completely new theory, but that would be an optimistic hope.
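A back-of-the-envelope sketch of the computational point (my own numbers, assuming two-state particles and 16 bytes per complex amplitude; only the exponential growth matters):

```python
# Rough illustration of the computational objection: storing the full quantum
# state of N two-level systems takes 2**N complex amplitudes. The constant
# (16 bytes per amplitude) is an assumption; the exponential growth is the point.

BYTES_PER_AMPLITUDE = 16  # one complex double

def state_vector_bytes(n_particles):
    return (2 ** n_particles) * BYTES_PER_AMPLITUDE

for n in (10, 50, 100, 300):
    print(f"{n:4d} two-level particles -> {float(state_vector_bytes(n)):.3e} bytes")

# Already at 300 particles this is ~3e91 bytes, far more than the number of
# atoms in the observable universe (roughly 1e80).
```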
2. Ontological
Physical objects have other qualities than mathematical objects. And values have other qualities than physical objects. Further elaboration needed.
It shouldn’t, because this is a straw man, not the argument that leads us to conclude that there isn’t a single absolute morality.
It is not a straw man, because I am not attacking any position. I think I was misunderstood, as I said.
I usually try not to say this, but...
From the FAQ:
I strongly disagree with the Less Wrong consensus on an issue. Is it okay to write a top-level article about it?
Absolutely! Just make sure you know why it’s the consensus position, first. Before posting, read what has already been written on the subject to ensure that you are saying something new and not just retracing covered ground. If you aren’t sure why the consensus position is the consensus position, feel free to ask in an open thread. Being aware of what has been said about a subject in the past is especially important if you want to argue for the existence of God, claim a universally compelling morality, or suggest a really easy way to make friendly AI without going through all that complicated coherent extrapolated volition stuff. Before tackling the Less Wrong consensus on these issues you may want to first acquire an extraordinary familiarity with the sequences, the arguments against your position, and the Less Wrong norms on the issue.
(Emphasis mine.)
You need to go read the sequences, and come back with specific counterarguments to the specific reasoning presented therein on the topics that you’re discussing.
A mathematician would know that, unless there is something about an electron that cannot be fully reduced to mathematics.
Choice of axioms?
Again, why is one model better than the other one?
I can run controlled experiments to show that my perception of reality and your perception of reality have a common cause. I can close a box, and we will both report seeing it change state from open to closed. There is no such evidence of a common thing-that-morality-intuition-observes. If we imagine our minds as rooms, our reality-senses are windows overlooking a common garden; we can see each other’s windows, and confirm that we see the same trees and flowers. But our morality-senses need not be true windows; for all we know, they might as well be trompe l’oeil.
            /---> my senses   ---> my perception of reality   ---\
reality --->|                                                    |---> consensus
            \---> your senses ---> your perception of reality ---/

              /--- my morals   ---\
evolution --->|                   |---> weak consensus
              \--- your morals ---/
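A toy simulation of the contrast these two pictures draw (hypothetical numbers of my own): observers of one shared fact reach a strong consensus, unrelated internal signals agree only at chance, and signals sharing an evolutionary bias land in between, a “weak consensus”.

```python
# Toy version of the controlled-experiment argument, with made-up numbers.
import random
random.seed(0)

TRIALS = 10_000
NOISE = 0.05   # chance that an observer misreads the box
BIAS = 0.7     # chance that a moral intuition follows the shared evolutionary pull

def observe(true_state):
    # A sense: usually reports the external fact, occasionally errs.
    return true_state if random.random() > NOISE else not true_state

shared = independent = evolved_corr = 0
for _ in range(TRIALS):
    box_open = random.random() < 0.5                  # one shared external fact
    shared += observe(box_open) == observe(box_open)  # two windows on one garden

    independent += (random.random() < 0.5) == (random.random() < 0.5)  # unrelated signals

    pull = random.random() < 0.5                      # shared evolutionary history
    mine = pull if random.random() < BIAS else random.random() < 0.5
    yours = pull if random.random() < BIAS else random.random() < 0.5
    evolved_corr += mine == yours

print("same box observed:          ", shared / TRIALS)       # roughly 0.90
print("unrelated intuitions:       ", independent / TRIALS)  # roughly 0.50
print("shared-evolution intuitions:", evolved_corr / TRIALS) # roughly 0.75, a weak consensus
```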
I was simply optimistic.
Optimism and pessimism are incompatible with realism. If you’re not willing to believe that the universe works the way that it does in fact work, then you’re not qualified to work on potentially-world-destroying projects.
I think what Eliezer calls “coherent extrapolated volition” is what I call “absolute morality”.
And yet you seem to acknowledge that the output of the CEV function depends on whose volition it is asked to extrapolate. In what sense then is morality absolute, rather than relative to a certain kind of mind?
(Incidentally, if you’ve been reading claims from Clippy that humane and paperclip-maximizing moralities are essentially compatible, then you should realize that e may have ulterior motives and may be arguing disingenuously. Sorry, Clippy.)
Universal morality
You need to go read the sequences, and come back with specific counterarguments to the specific reasoning presented therein on the topics that you’re discussing.
I don’t think there is an easy way to make FAI.
Absolute morality is the coherent extrapolated volition of all entities with volition.
Morality is based on values. In a universe where there are only insentient stones, there is no morality, and even if there were one, it would be meaningless. Morality exists only where there are values (things that we either like or dislike), or “volition”.
Reality and Morality
So the reason why you think there is a reality is because there is a strong consensus and the reason why you think that there is no morality is because there is no strong consensus?
Optimism and pessimism are incompatible with realism. If you’re not willing to believe that the universe works the way that it does in fact work, then you’re not qualified to work on potentially-world-destroying projects.
I don’t see what optimism or pessimism has to do with willingness to believe in an absolute reality. I only know that my knowledge is restricted, and within the boundaries of my ignorance, I can hope for the better or believe in the worse. If I were omniscient, I would be neither optimistic nor pessimistic. We are optimistic because we are ignorant, not the other way around, at least in my case.
And yet you seem to acknowledge that the output of the CEV function depends on whose volition it is asked to extrapolate. In what sense then is morality absolute, rather than relative to a certain kind of mind?
To be absolute, it has to apply to all minds that have volition.
(Incidentally, if you’ve been reading claims from Clippy that humane and paperclip-maximizing moralities are essentially compatible, then you should realize that e may have ulterior motives and may be arguing disingenuously. Sorry, Clippy.)
That is why I evaluate arguments based on other things than someone’s ulterior motives.
Absolute morality is the coherent extrapolated volition of all entities with volition.
This sounds like a definition, so let’s gensym it and see if it still makes sense.
G695 is the coherent extrapolated volition of all entities with volition.
Why should I care about G695? In particular, why should I prefer it over G696, which is the CEV of all humans with volition alive in 2010, or over G697, which is the CEV of myself?
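For readers unfamiliar with the gensym move used here, a tiny sketch (my own illustration): replace the loaded term with a fresh opaque symbol, so that whatever force the claim retains must come from the definition rather than from the word’s connotations.

```python
# Tiny illustration of the "gensym" move: swap the loaded term for a fresh,
# meaningless symbol, so that any remaining force of a claim must come from
# the stated definition rather than from the word itself.

import itertools

_counter = itertools.count(695)

def gensym():
    return f"G{next(_counter)}"

definition = "absolute morality is the coherent extrapolated volition of all entities with volition"
claim = "you should care about absolute morality"

sym = gensym()  # "G695"
print(definition.replace("absolute morality", sym))
print(claim.replace("absolute morality", sym))
# The second line now has to be argued for; the term no longer argues for itself.
```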
So the reason why you think there is a reality is because there is a strong consensus and the reason why you think that there is no morality is because there is no strong consensus?
No, that’s my reason for breaking symmetry between them, for discarding the assumption that the explanation of the two phenomena should be essentially isomorphic. I then investigate the two unrelated phenomena individually and eventually come to the conclusion that there is one reality between all humans, but a separate morality for each human.
within the boundaries of my ignorance, I can hope for the better or believe in the worse.
There is a very great difference between hoping for the better and believing in the better. Nor are “better” or “worse” the only two options.
Suppose you’re getting into a car, and you’re wondering whether you will get into a crash. The optimistic view is that you will definitely not crash. The pessimistic view is that you will definitely crash. Neither of these is right.
To be absolute, it has to apply to all minds that have volition.
You’re constructing a universal CEV. It’s not an already-existing ontologically fundamental entity. It’s not a thing that actually exists.
That is why I evaluate arguments based on other things than someone’s ulterior motives.
Consciously, sure. I just wanted to warn you against the human credulity bias.
Why should I care about G695? In particular, why should I prefer it over G696, which is the CEV of all humans with volition alive in 2010, or over G697, which is the CEV of myself?
So your point is there is no point in caring for anything. Do you call yourself a nihilist?
I then investigate the two unrelated phenomena individually and eventually come to the conclusion that there is one reality between all humans, but a separate morality for each human.
Would you call yourself a naive realist? What about people on LSD, schizophrenics, and religious people who see their Almighty Lord Spaghetti Monster in what you would call clouds? You surely mean that there is one reality between all humans who are “sane”.
Suppose you’re getting into a car, and you’re wondering whether you will get into a crash. The optimistic view is that you will definitely not crash. The pessimistic view is that you will definitely crash. Neither of these is right.
I would say, the optimistic view is saying “There is probably/hopefully no crash”. But don’t let us fight over words.
You’re constructing a universal CEV. It’s not an already-existing ontologically fundamental entity. It’s not a thing that actually exists.
So your point is there is no point in caring for anything. Do you call yourself a nihilist?
No, I care about things. It’s just that I don’t think that G695 (assuming it’s defined—see below) would be particularly humane or good or desirable, any more than (say) Babyeater morality.
Would you call yourself a naive realist?
Certainly not—hence “eventually”. Science requires interpreting data.
Edit: oh, sorry, forgot to address your actual point.
At a certain point, the working model of reality begins to predict what the insane will claim to perceive and how those errors come about.
I would say, the optimistic view is saying “There is probably/hopefully no crash”. But don’t let us fight over words.
Very well. Let us assume that (warning: numbers just made up) one in every 100,000 car trips results in a crash. The G698 view says “The chances of a crash are low.” The G699 view says “The chances of a crash are high.” The G700 view says “The chances of a crash are 1/100,000.” I advocate the G700 view, and assert that believing G698 or G699 interferes with believing G700.
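A worked version of the made-up numbers above (the costs and benefits are equally made up): with an explicit probability you can actually compare options, which bare “low” or “high” labels do not let you do.

```python
# Worked version of the made-up numbers above: the crash rate is the 1/100,000
# from the comment; the costs and benefits are illustrative placeholders.

p_crash = 1 / 100_000        # per-trip crash probability
cost_of_crash = 1_000_000    # badness of a crash, in arbitrary units
value_of_trip = 20           # usefulness of the trip, in the same units

expected_loss = p_crash * cost_of_crash
print(expected_loss)                         # 10.0
print(value_of_trip > expected_loss)         # True: worth driving under these numbers
print(value_of_trip > 100 * expected_loss)   # False: not worth it if trips were 100x riskier
```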
Does the CEV of humankind exist?
I personally don’t think the extrapolated volition of humanity coheres, but I have the impression that others disagree with me.
I would be very surprised, however, if the extrapolated volition of all volitional entities cohered and the extrapolated volition of all volitional humans did not.
At a certain point, the working model of reality begins to predict what the insane will claim to perceive and how those errors come about.
What if you can’t predict?
I advocate the G700 view, and assert that believing G698 or G699 interferes with believing G700.
That is not how your brain works (a rough guess). Your brain thinks either G698 or G699 and then comes out with a decision about either driving or not. This heuristic process is called optimism or pessimism.
Since I’m Pavitra, it doesn’t really matter to me if G101 has a point; I care about it anyway.
So there is no normative rule that Pavitra (you) should care about G101. It just happens; it could also be different, and it does not matter. That is what I call (moral) nihilism.
Don’t you ever ask why you should care (about anything, including yourself caring about things)? (I am not suggesting you become suicidal, but on the other hand, there is no normative rule against it, so… hm… I still won’t.)
Their claims are basically noisy. If a large group of crazies started agreeing with each other, that might be worth looking into more carefully.
A large group of crazies agreeing: Ever heard of religion, homeopathy, TCM et cetera?
Not natively, no. That’s why it requires advocacy.
You care about things. I assume you care about your health. In that case, you don’t want to be in a crash. So you’ll evaluate whether you should get into a car. If you get into the car, you are an optimist; if not, you are a pessimist.
Again, why is it important to advocate anything? -- Because you care about it. -- So what?
So there is no normative rule that Pavitra (you) should care about G101. It just happens; it could also be different, and it does not matter. That is what I call (moral) nihilism.
Don’t you ever ask why you should care (about anything, including yourself caring about things)? (I am not suggesting you become suicidal, but on the other hand, there is no normative rule against it, so… hm… I still won’t.)
Again, it’s not that I don’t care about anything. I just happen to have a few core axioms, things that I care about for no reason. They don’t feel arbitrary to me—after all, I care about them a great deal! -- but I didn’t choose to care about them. I just do.
A large group of crazies agreeing: Ever heard of religion, homeopathy, TCM et cetera?
Sure, and those are the claims I take the time to evaluate and debunk.
If you get into the car, you are a G701, if not, you are a G702.
Please explain the relationship between G701-702 and G698-700.
Again, it’s not that I don’t care about anything. I just happen to have a few core axioms, things that I care about for no reason. They don’t feel arbitrary to me—after all, I care about them a great deal! -- but I didn’t choose to care about them. I just do.
And you believe that other minds have different core beliefs?
Sure, and those are the claims I take the time to evaluate and debunk.
I think we should close the discussion and take some time thinking.
Please explain the relationship between G701-702 and G698-700.
“Chance is low” or “chance is high” are not merely descriptive; they also contain values. Chance is low --> probably safe to drive; chance is high --> probably not, based on the more fundamental axiom that surviving is good. And “surviving is good” is not descriptive, it is normative, because good is a value. You can also say instead “you should survive”, which is a normative rule.
And you believe that other minds have different core beliefs?
“Belief” isn’t quite right; it’s not an anticipation of how the world will turn out, but a preference of how the world will turn out. But yes, I anticipate that other minds will have different core preferences.
I think we should close the discussion and take some time thinking.
And yet you seem to acknowledge that the output of the CEV function depends on whose volition it is asked to extrapolate. In what sense then is morality absolute, rather than relative to a certain kind of mind?
To be absolute, it has to apply to all minds that have volition.
“No Universally Compelling Arguments” contains a proof that for every possible morality, there is a mind with volition to which it does not apply. Therefore, there is no absolute morality.
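This is not the post’s actual argument, but a toy sketch of why no single utility function compels every possible mind: for any candidate morality U, one can define a coherent agent that optimizes -U and is unmoved by appeals to U.

```python
# Not the post's argument, just a toy construction: for any candidate utility
# function U over actions, the agent below that maximizes -U is perfectly
# coherent, and arguments phrased in terms of U give it no reason to change.

def best_action(utility, actions):
    return max(actions, key=utility)

U = {"help": 10, "ignore": 0, "harm": -10}.get   # some proposed "absolute" morality

def anti_U(action):
    return -U(action)                            # a possible mind that inverts it

actions = ["help", "ignore", "harm"]
print(best_action(U, actions))       # 'help'
print(best_action(anti_U, actions))  # 'harm'
```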
I think it is insufficiently detailed to identify a unique utility function—it needs to have specific extrapolation and reconciliation procedures filled in, the details of those procedures are important and affect the result, and a bad extrapolation procedure could produce arbitrary results.
That said, programming an AI with any value system that didn’t match the template of CEV (plus details) would be a profoundly stupid act. I have seen so many disastrously buggy attempts to define what human values are that I doubt it could be done correctly without the aid of a superintelligence.
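A toy illustration (my own, with hypothetical preferences) of why the extrapolation and reconciliation details matter: the same extrapolated preferences yield different group choices under different aggregation rules.

```python
# Toy illustration (hypothetical preferences) of why the "extrapolation and
# reconciliation procedures" slot matters: the same extrapolated utilities
# give different group choices under different aggregation rules.

preferences = {
    "agent1": {"A": 1, "B": 0},
    "agent2": {"A": 1, "B": 0},
    "agent3": {"A": 0, "B": 100},
}

def majority_vote(prefs):
    votes = [max(p, key=p.get) for p in prefs.values()]
    return max(set(votes), key=votes.count)

def utility_sum(prefs, options=("A", "B")):
    return max(options, key=lambda o: sum(p[o] for p in prefs.values()))

print(majority_vote(preferences))  # 'A': two of three prefer A
print(utility_sum(preferences))    # 'B': summed utility favours B
```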
“No Universally Compelling Arguments” contains a proof that for every possible morality, there is a mind with volition to which it does not apply. Therefore, there is no absolute morality.
There is no universally compelling argument for morality, just as there is no universally compelling argument for reality. You can change physical perception as well. But it does not necessarily follow that there is no absolute reality.
I also have to correct my position: CEV is not absolute morality. Volition is rather a “receptor” or “sensor” of morality. I made a conceptual mistake.
Can you formulate your thoughts value-free, that is, without words like “profoundly stupid” and “important”? Because these words suggest that we should do something. If there is no universal morality, why do you postulate anything normative, other than for fun?
ps I have to stop posting. First, I have to take time for thinking. Second, this temporary block is driving me insane.
An electron is not a mathematical object. Let’s say, equation (1) describes the behaviour of an electron according to our current knowledge. Then you might say, the electron is a “mathematical object” contained in (1).
But what if an equation (2) is found that better describes an electron’s behaviour? What happens to the “mathematical object”?
So you incorrectly identified what sort of mathematical object it is. That doesn’t mean it isn’t one, just that you made an identification prematurely (and perhaps were insufficiently careful with your language); you’ll need to split off the concepts of actual-but-still-unknown-electron and previously-hypothesized-electron.
In that sense, everything could be a mathematical object, including qualia. We just haven’t identified it.
Also, the concept of actual-but-still-unknown-X and previously-hypothesized-X can be applied to morality in terms of actual-but-still-unknown-norm and previously-hypothesized-norm.
Sorry, I am developing my ideas in the process of the discussion and I probably have amended and changed my position several times thanks to the debate with the LW community. The biggest problem is that I haven’t defined a clear set of vocabulary (because I haven’t had a clear position yet), so there is a lot of ambiguity and misunderstanding which is solely my fault.
Here is a short summary of my current positions. They may not result in a coherent system. I’m working on that.
1. Value system / morality is science
Imagine an occult Pythagorean who believes that only mathematical objects exist. So he/she wouldn’t understand the meaning of electrons and gravitational forces because they cannot be fully expressed in mathematics. He/she would understand the Coulomb’s law and Newton’s law of gravitation, but a physicist needs more than these mathematical equations for the understanding of physics.
That is the difference between physicists and chemists on one side and mathematicians and string theorists (I have not the slightest idea about string theory, so regard this part as my modest attempt of humour) on the other side.
Analogously, you need to understand the value system to understand and possibly predict the actions of value agents (humans, animals, maybe AIs). Maybe the value system can be mathematicised, or not.
But it would be a scientific explanation. I agree with you.
2. Something matters to me
We all have values. You asked whether the understanding of the value system has any external consequences or is the benefit purely a state of mind. I wonder why does it matter to you to know the difference?
You may answer that thinking of these problems makes you biologically fitter and if you don’t ask these questions, your kind will die out and those questions won’t be asked.
But when you asked the question, you did not consider your biological fitness. And if you considered your biological fitness, then why does biological fitness matters to you? There is at least one thing that matters to you (assuming you are not a p-zombie), so at least the desire, “something matters to me”, is real, as real as your knowledge of the world.
Assuming you are not a psychopath, your only desire is not your own survival, but, being empathetic, also the well-being of your fellow animals, human and sentient beings. And you know that your fellow human beings are empathetic (or acting as if they are empathetic) as well. Ergo you can establish an intersubjective consensus and some common ground what the good is.
3. Epistomology
Mental phenomena are of different qualities than natural phenomena. A desire is more than neuronal processes. You may read all the books on neurobiology, but you may learn more on desires by reading a single book by Nabokov. (You may think that you don’t care, then please go back to point 2.). From here continue with the text diagramm.
ps The computer that you need to model the quantum states of a brain would be bigger than the universe, see (Kohn’s Nobel Lecture)[http://nobelprize.org/nobel_prizes/chemistry/laureates/1998/kohn-lecture.pdf].
1.
Are we assuming this hypothetical occult Pythagorean is aware of historically post-Pythagorean pure-mathematical concepts like Conway’s Game of Life?
It seems to me that electrons and gravitational forces can be fully expressed in mathematics. Since the equations describing their behavior are known, can’t the Pythagorean simply consider a mathematical object whose behavior is defined by those formulas?
I suspect that we have different definitions of “understand”, and that that is at the core of the debate. To me, understanding something is the same as being able to predict its behavior; you seem to have something additional in mind—some sort of qualia, perhaps—but I’m not sure what it is.
I’m sorry, I’ve lost track of context on this one, and I can’t figure it out even on rereading. What would be a scientific explanation, of what, under what circumstances?
2.
It may interest you to know that I consider p-zombies as capable of having things that matter to them. This suggests a mismatch of definition, probably of “matter”.
I asked in order to determine whether, when you discussed whether a mind has such an understanding, whether you were making a distinction that mattered in reality, or if you were just talking about qualia. I don’t believe in qualia, you see.
This is, I think, the strongest and most interesting point you’ve put forward. It should be possible to establish a science of preference that can predict how people will choose on moral questions; once the models are sufficiently accurate, humanity should be able to formalize our definition of “good”. (Assuming, of course, that an intersubjective consensus exists. It could be that we just disagree about a bunch of stuff.)
3.
No they’re not.
Agreed.
I think there’s an important difference between sensory perception and moral perception that you’re glossing over: sensory perception is perceiving something out there that exists to be perceived, while moral perception reports only on itself. If sensory perception is a window, moral perception is a painting.
1a.
An electron is not a mathematical object. If it were, then we wouldn’t need chemists and physicsts, but only mathematicians. A mathematical object does have any behaviour, as much as a word in a language does not have any behaviour.
Mathematics and logic are tautalogical systems with defined symbols and operations. We use mathematics to describe the physical world as much as we use language to describe the moral world (value system), e.g. in behavioural biology and psychology.
Would you agree that value system is as absolute as the physical world if we can mathematise our normative theories as much as we have mathematised our physical theories?
1b.
An electron is not a mathematical object. Let’s say, equation (1) describes the behaviour of an electron according to our current knowledge. Then you might say, the electron is a “mathematical object” contained in (1).
But what if equation (2) is found, that better describes an electron’s behaviour? What happens with the “mathematical object”?
2.
What is your defintion of “something matters?”. As in “it matters to a stone up in the air to fall down to earth.”? In that case, our defintions vary.
You seem to be a logical positivist, which is an incomplete world view. If your mind works the same way as mine does, then you should know that qualia exists. It is like if you walk up to a tree and says “No, there is no tree in front of me.” and then sidestep it.
3.
I would in principle agree with your diagramm of moral intuition. Let me present you two models:
++++++++++++++++++++++++++++++
A. reality—senses—perception of reality
morality—moral intuition—perception of morality
B. evolution of the brain—senses—perception of reality
evolution of the brain—moral intuition—perception of morality
+++++++++++++++++++++++++++++
Why do you cherry-pick from the two categories? Is it because science is more mathematical, has a methodology that is more precise, and has a greater intersubjective consensus? Why does any of these make reality real and morality relative?
For more information on model 3B, look up “evolutionary epistomology”.
Application case
I want to apply my theory of absolute moralily to the design of Friendly AI.
Unless we can mathematise our value system, how can we make an AI friendly? We know Asimov’s Laws of Robotics, but these laws are in the inprecise formulation of natural language. What do “injury”, “human being”, “robot”, “harm”, “obey”, “protection” and other words mean? The outcome of such ambiquities is the defining plot element of The Metamorphosis and “I, Robot” (2004 film).
My solution: Design AIs with empathy and access to our intersubjective consensus of morality. If our current normative theories aren’t completely wrong, then the absolute good does not require the annihilation of the human species.
You might say that having empathy does not automatically make an AI good, because it may have a wrong normative theory.
Therefore, make many AIs, let them evolve and battle it out. The good will win vs. evil, because being moral, that is having better normative theories, increases biological fitness. So the more moral an AI is, the greater its chance to survive.
This is redundant, but the point is important and I don’t want it to be overlooked because it’s buried at the bottom of a long comment.
If you do that, everyone will die.
Chemists and physicists tell us which mathematical objects we’re made out of. The used to think it was integers, but it turns out it wasn’t.
It has mathematical behavior. Words are not required to be well-defined. What distinction are you trying to make here?
No, what follows from the hypothetical is that it would be possible to hold meaningful discussions about our normative theories, rather than just saying words. A theory can be rigorously well-defined and also wrong.
The electron seems to be the mathematical object contained in (1). We will later discover that this is wrong.
(1) still “exists” (to the extent that mathematical objects “exist” independently in the Platonic World of Forms, which they don’t, but it’s a fairly useful approximation most of the time), but (1) is less useful to us now, so we don’t spend as much time talking and thinking about it.
It has a utility function; that is, it acts so as to optimize some variable. A rock isn’t a very clever faller; it doesn’t really optimize in any meaningful sense. For example, a rock won’t roll up a two-foot ridge in order to be able to fall two hundred feet down the cliff on the other side.
Not quite, but a near miss. I’m a reductionist.
Sure, qualia exist, in the same way that a car or a computer program exists. Qualia just don’t have a separate, fundamental existence independent of the mere atomic mechanism of the neurons in the brain of the person experiencing the qualia.
After the components of a car have been assembled, you don’t need to perform a ritual blessing over it to infuse the mere mechanism of engine and drive shaft and so forth with the ineffable essence of car-ness. It’s already fully possessed of car-ness, simply by virtue of the physical mechanisms that make it up.
Likewise, I don’t need an additional element—spirit, soul, elan vital, ontologically intrinsic morality, whatever—in order to infuse my brain with qualia. It’s already fully possessed of qualia, simply by virtue of the physical mechanisms that make it up. If I survive long enough to get uploaded, I fully expect my uploaded copies to have their own qualia.
...no.
Let my try again:
These are good and important questions. The correct answer is almost certainly “we’d better mathematize our value system, and we’d better get it right.”
Have you read “Coherent Extrapolated Volition”?
No. If you do that, everyone will die. “Good at killing other AIs” does not even remotely imply “morally good according to human values”. Morality is harder than that.
What if the coherent extrapolated volition is the death of all people, that is, the end of all volitions?
I guess we should do that then? I strongly expect that it won’t turn out that that’s the right thing to do, though, and it’s not what I had in mind when I said you’d kill everyone. I meant that the AI will care about the wrong thing, ignore human morality completely, and eat the world (killing everyone as an incidental side effect) even though it’s wrong according to human morals.
When I use the word morality, then I certainly don’t mean any rules of conduct.
What is your defintion of human morality?
Often, when I stop to think about a decision, I find that my desire changes upon reflection. The latter desire generally seems more intellectually coherent(*), and across multiple instances, the initial desires on various occasions are generally more inconsistent with one another while the after-reflection desires are generally more consistent with one another. From this I infer the existence of a (possibly only vague, partially specified, or partially consistent) common cause to the various instances’ after-reflection desires. This common cause appears to roughly resemble a bundle of heuristics that collectively approximate some sort of optimization criteria. I call the bundle of heuristics my “moral intuition” and the criteria they approximate my “morality”.
I suspect that other human’s minds are broadly similar to mine in this respect, and that their moral intuitions are broadly similar to mine. To the extent they correlate, we might call the set of common trends “human morality” or “humaneness”.
(*) An example of intellectual coherence vs. incoherence: Right now, I’d like to go get some ice cream from the freezer. However, on reflection, I remember that there isn’t any ice cream in the freezer at the moment, so walking over to the freezer would not satisfy the impulse that motivated the action.
What about the Baby-Eaters and the Super Happy People in the story Three Worlds Collide? Do they have anything you would call “humaneness”?
No.
Edit: Well, sort of. Some of their values partially coincide with ours. But one of the major themes of the story is that we should expect aliens to have inhumane value systems.
Physical and mathematical objects
If the physical world can be fully reduced to mathematics, we don’t need chemists and physicist to tell us which mathematical objects we’re made out of. A mathematician would know that, unless there is something about an electron that can not be fully reduced to mathematics.
We use mathematics to describe physical objects, but physical objects are not mathematical objects. We use languages to describe physical objects, but physical objects are not words. Why are things mathematical and not lingual? Is it because the mathematical description yields better predictions?
Theories and what theories describe
I think you are missing the point. A physical theory can be wrong, that, I guess, does not shake your belief in an absolute reality. A normative theory, even mathematised, can also be wrong, but why should it shake my belief in an absolute morality?
Reductionism
I am fine with that. As long as you believe in qualia as you believe in cars and trees, then we have a base from which we can work on, without bothering the fundaments too much. I think reductionism is wrong, but that’s not the discussion here.
From a reductionist point of view, the absolute morality would be a part of the absolute reality, with the mere difference that values have different qualities (no spatial extension, for example) than cars and trees.
Two models
Let me try again:
| reality | ---> | senses | ----> | perception of reality |
| morality | --> | moral intuition | ---> | perception of morality |
or
| senses | ----> | perception of reality |
| moral intuition | ---> | perception of morality |
Again, why is one model better than the other one?
We will die anyway.
It is not necessarily that evolution gets us better physical theories or normative theories. I was simply optimistic. It is possible that people believing in a spaghetti monster kill all rational people, as much as it is possible that an AI has a wrong normative theory and thus kill all human beings. Or, the absolute morality demands our death. Or maybe the LHC will create a black hole that kills us within 24 hours. In all cases, bad luck. We will die anyway. On the longer run, the chance of us irreversibly dying at any single point of time is greater than us living forever.
Concerning Coherent Extrapolated Volition
I would probably have saved a lot of discussion, had I read the article first (and learned of the rationalist taboo). :)
I think what Eliezer calls “coherent extrapolated volition” is what I call “absolute morality. The “ability to extrapolate volition” is what I call “empathy”. I don’t agree with his goal “initial dynamic should implement the coherent extrapolated volition of humankind” , though. First, what is the defintion of humankind? This is a core problem for the Prime Intellect in The Metamorphosis.
I think, the goal of the intial dynamic should be ” to extrapolate volition of all entities that have or can express volitions.”
Mathematics is a broad field, with many specialties. A mathematician could only know which mathematical objects correspond to electrons if they studied that particular question. And our name for a mathematician who specializes in studying the question of which mathematical objects correspond to electrons is… Particle Physicist.
It shouldn’t, because this is a straw man, not the argument that leads us to conclude that there isn’t a single absolute morality.
If you read a physics or chemistry textbook, then you’ll find a lot of words and only a few equations, whereas a mathematics textbook has much more equations and the words in the book are to explain the equations, whereas the words in a physics book are not only explaining the equations but the issues that the equations are explaining.
However, I haven’t fully thought about reductionism, so do you have any recommendations that I want to read?
My current two objections:
1. Computational
According to our current physical theories, it is impossible to predict the behaviour of any system larger than a dozen atoms, see Walter Kohn’s Nobel Lecture. We could eventually have a completely new theory, but that would be an optimistic hope.
2. Ontological
Physical objects have other qualities than mathematical objects. And values have other qualities than physical objects. Further elaboration needed.
It is not a straw man, because I am not attacking any position. I think I was misunderstood, as I said.
I usually try not to say this, but...
From the FAQ:
(Emphasis mine.)
You need to go read the sequences, and come back with specific counterarguments to the specific reasoning presented therein on the topics that you’re discussing.
.
.
Choice of axioms?
I can run controlled experiments to show that my perception of reality and your perception of reality have a common cause. I can close a box, and we will both report seeing it change state from open to closed. There is no such evidence of a common thing-that-morality-intuition-observes. If we imagine our minds as rooms, our reality-senses are windows overlooking a common garden; we can see each other’s windows, and confirm that we see the same trees and flowers. But our morality-senses need not be true windows; for all we know, they might as well be tromp l’oeil.
Optimism and pessimism are incompatible with realism. If you’re not willing to believe that the universe works the way that it does in fact work, then you’re not qualified to work on potentially-world-destroying projects.
And yet you seem to acknowledge that the output of the CEV function depends on whose volition it is asked to extrapolate. In what sense then is morality absolute, rather than relative to a certain kind of mind?
(Incidentally, if you’ve been reading claims from Clippy that humane and paperclip-maximizing moralities are essentially compatible, then you should realize that e may have ulterior motives and may be arguing disingenuously. Sorry, Clippy.)
Universal morality
I don’t think there is an easy way to make FAI.
Absolute morality is the coherent extrapolated volition of all entities with volition. Morality is based on values. In a universe where there are only insentient stones, there is no morality, and even if there are, they are meaningless. Morality exists only where there are values (things that we either like or dislike), or “volition”.
Reality and Morality
So the reason you think there is a reality is that there is a strong consensus, and the reason you think there is no morality is that there is no strong consensus?
I don’t see what optimism or pessimism has to do with willingness to believe in an absolute reality. I only know that my knowledge is restricted, and within the boundaries of my ignorance, I can hope for the better or believe in the worse. If I were omniscient, I would be neither optimistic nor pessimistic. We are optimistic because we are ignorant, not the other way around, at least in my case.
To be absolute, it has to apply to every mind that has volition.
That is why I evaluate arguments based on other things than someone’s ulterior motives.
This sounds like a definition, so let’s gensym it and see if it still makes sense.
Why should I care about G695? In particular, why should I prefer it over G696, which is the CEV of all humans with volition alive in 2010, or over G697, which is the CEV of myself?
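For readers unfamiliar with the gensym move, here is a minimal Python sketch (the symbol numbers and the wording of the definitions are purely illustrative, not anything from the original exchange): each candidate definition gets a freshly generated, meaning-free label, so the question of why one should care about it has to be answered from the definition’s content alone, without the connotations a loaded name like “absolute morality” smuggles in.

```python
import itertools

_counter = itertools.count(695)

def gensym() -> str:
    """Return a fresh, opaque symbol that carries no prior connotations."""
    return f"G{next(_counter)}"

# The candidate definitions under discussion, stripped of their loaded labels.
candidates = {
    gensym(): "the CEV of all entities with volition",               # G695
    gensym(): "the CEV of all humans with volition alive in 2010",   # G696
    gensym(): "the CEV of myself",                                   # G697
}

for symbol, meaning in candidates.items():
    # The test: once the label is replaced by an opaque symbol, does
    # "why should I care about it?" still have an obvious answer?
    print(f"Why should I prefer {symbol} ({meaning}) over the alternatives?")
```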
No, that’s my reason for breaking symmetry between them, for discarding the assumption that the explanation of the two phenomena should be essentially isomorphic. I then investigate the two unrelated phenomena individually and eventually come to the conclusion that there is one reality between all humans, but a separate morality for each human.
There is a very great difference between hoping for the better and believing in the better. Nor are “better” or “worse” the only two options.
Suppose you’re getting into a car, and you’re wondering whether you will get into a crash. The optimistic view is that you will definitely not crash. The pessimistic view is that you will definitely crash. Neither of these is right.
You’re constructing a universal CEV. It’s not an already-existing ontologically fundamental entity. It’s not a thing that actually exists.
Consciously, sure. I just wanted to warn you against the human credulity bias.
So your point is that there is no point in caring about anything. Do you call yourself a nihilist?
Would you call yourself a naive realist? What about people on LSD, schizophrenics, and religious people who see their Almighty Lord Spaghetti Monster in what you would call clouds? You surely mean that there is one reality between all humans that are “sane”.
I would say the optimistic view is saying “There is probably/hopefully no crash.” But let’s not fight over words.
Does the CEV of humankind exist?
No, I care about things. It’s just that I don’t think that G695 (assuming it’s defined—see below) would be particularly humane or good or desirable, any more than (say) Babyeater morality.
Certainly not—hence “eventually”. Science requires interpreting data.
Edit: oh, sorry, forgot to address your actual point.
At a certain point, the working model of reality begins to predict what the insane will claim to perceive and how those errors come about.
Very well. Let us assume that (warning: numbers just made up) one in every 100,000 car trips results in a crash. The G698 view says “The chances of a crash are low.” The G699 view says “The chances of a crash are high.” The G700 view says “The chances of a crash are 1/100,000.” I advocate the G700 view, and assert that believing G698 or G699 interferes with believing G700.
I personally don’t think the extrapolated volition of humanity coheres, but I have the impression that others disagree with me.
I would be very surprised, however, if the extrapolated volition of all volitional entities cohered and the extrapolated volition of all volitional humans did not.
I like gensyms.
G101: Pavitra (me) cares about something.
What is the point in caring for G101?
What if you can’t predict?
That is not how your brain works (a rough guess). Your brain thinks either G698 or G699 and then arrives at a decision about whether or not to drive. This heuristic process is called optimism or pessimism.
Since I’m Pavitra, it doesn’t really matter to me if G101 has a point; I care about it anyway.
Their claims are basically noisy. If a large group of crazies started agreeing with each other, that might require looking into more carefully.
Not natively, no. That’s why it requires advocacy.
So there is no normative rule that Pavitra (you) should care about G101. It just happens; it could also be different, and it does not matter. That is what I call (moral) nihilism.
Don’t you ever ask why you should care (about anything, including yourself caring about things)? (I am not suggesting you become suicidal, but on the other hand, there is no normative rule against it, so… hm… I still won’t.)
A large group of crazies agreeing: Ever heard of religion, homeopathy, TCM et cetera?
You care about things. I assume you care about your health. In that case, you don’t want to be in a crash. So you’ll evaluate whether you should get into a car. If you get into the car, you are an optimist; if not, you are a pessimist.
Again, why is it important to advocate anything? -- Because you care about it. -- So what?
Again, it’s not that I don’t care about anything. I just happen to have a few core axioms, things that I care about for no reason. They don’t feel arbitrary to me (after all, I care about them a great deal!), but I didn’t choose to care about them. I just do.
Sure, and those are the claims I take the time to evaluate and debunk.
Please explain the relationship between G701-702 and G698-700.
And you believe that other minds have different core beliefs?
I think we should close the discussion and take some time thinking.
“Chance is low” or “chance is high” are not merely descriptive; they also contain values. Chance is low --> probably safe to drive; chance is high --> probably not, based on the more fundamental axiom that surviving is good. And “surviving is good” is not descriptive; it is normative, because good is a value. You could also say instead “you should survive”, which is a normative rule.
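A minimal sketch of that separation, using made-up numbers (the probabilities and utilities below are purely illustrative): the probability is the descriptive part, the utilities encode the normative part (“surviving is good”), and only their combination yields a drive/don’t-drive decision.

```python
# Descriptive part: a made-up per-trip crash probability (cf. the 1/100,000 above).
p_crash = 1 / 100_000

# Normative part: how much each outcome matters. These are value judgements
# ("surviving is good"), not facts about the world, and are purely illustrative.
utility_of_safe_trip = 1.0
utility_of_crash = -1_000_000.0

expected_utility_drive = p_crash * utility_of_crash + (1 - p_crash) * utility_of_safe_trip
expected_utility_stay_home = 0.0

# Neither "the chance is low" nor "surviving is good" settles the question alone;
# the decision falls out of combining the descriptive and the normative parts.
decision = "drive" if expected_utility_drive > expected_utility_stay_home else "stay home"
print(decision)  # with these particular numbers: "stay home"
```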
“Belief” isn’t quite right; it’s not an anticipation of how the world will turn out, but a preference of how the world will turn out. But yes, I anticipate that other minds will have different core preferences.
Yes, okay.
“No Universally Compelling Arguments” contains a proof that for every possible morality, there is a mind with volition to which it does not apply. Therefore, there is no absolute morality.
What do you think of Eliezer’s idea of “coherent extrapolated volition of humankind” and his position that FAI should optimise it?
I think it is insufficiently detailed to identify a unique utility function—it needs to have specific extrapolation and reconciliation procedures filled in, the details of those procedures are important and affect the result, and a bad extrapolation procedure could produce arbitrary results.
That said, programming an AI with any value system that didn’t match the template of CEV (plus details) would be a profoundly stupid act. I have seen so many disastrously buggy attempts to define what human values are that I doubt it could be done correctly without the aid of a superintelligence.
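A minimal sketch of why those details matter, with entirely hypothetical names (Mind, Volition, extrapolate, and reconcile are placeholders for illustration, not anything from the CEV document): treating CEV as a template parameterised by an extrapolation procedure, a reconciliation procedure, and a set of minds makes it visible that different choices yield different utility functions.

```python
from typing import Callable, Dict, Iterable, List

# Hypothetical placeholder types, purely for illustration.
Mind = str
Volition = Dict[str, float]               # outcome -> preference strength
UtilityFunction = Callable[[str], float]  # outcome -> utility

def cev(minds: Iterable[Mind],
        extrapolate: Callable[[Mind], Volition],
        reconcile: Callable[[List[Volition]], UtilityFunction]) -> UtilityFunction:
    """CEV as a template: the output depends on whose volition goes in and on
    the extrapolation and reconciliation procedures chosen."""
    volitions = [extrapolate(m) for m in minds]
    return reconcile(volitions)

# Toy usage with trivially simple procedures:
def toy_extrapolate(mind: Mind) -> Volition:
    return {"flourishing": 0.9, "paperclips": 0.1}

def toy_reconcile(volitions: List[Volition]) -> UtilityFunction:
    return lambda outcome: sum(v.get(outcome, 0.0) for v in volitions) / len(volitions)

u = cev(["Alice", "Bob"], toy_extrapolate, toy_reconcile)
print(u("flourishing"))  # 0.9

# Different argument sets -- all volitional entities, humans alive in 2010, or
# just oneself (the G695/G696/G697 of the gensym exchange above) -- give
# different objects even before the procedures are pinned down.
```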
There is no universally compelling argument for morality, just as there is no universally compelling argument for reality. You can change physical perception as well. But it does not necessarily follow that there is no absolute reality.
I also have to correct my position: CEV is not absolute morality. Volition is rather a “receptor” or “sensor” of morality. I made a conceptual mistake.
Can you formulate your thoughts value-free, that is, without words like “profoundly stupid” and “important”? These words suggest that we should do something. If there is no universal morality, why do you postulate anything normative, other than for fun?
PS: I have to stop posting. First, I have to take time to think. Second, this temporary block is driving me insane.
You keep using that phrase, “intersubjective consensus”. What does it mean, and how do you know that there is one with respect to morality?
So you incorrectly identified what sort of mathematical object it is. That doesn’t mean it isn’t one, just that you made an identification prematurely (and perhaps were insufficiently careful with your language); you’ll need to split off the concepts of actual-but-still-unknown-electron and previously-hypothesized-electron.
In that sense, everything could be a mathematical object, including qualia. We just haven’t identified them yet.
Also, the concept of actual-but-still-unknown-X and previously-hypothesized-X can be applied to morality in terms of actual-but-still-unknown-norm and previously-hypothesized-norm.