But it can never, on its own, inject any new Ought into the system. And when it opposes some pre-existing Ought, it only ever does so on behalf of some other pre-existing Ought.
Is there a reason for that?
No. Yudkowsky is a moral fictionalist, but he has never (to my knowledge) justified his position. Granted, I haven’t read his whole corpus of work, but from what I’ve seen he just takes it as a given.
This is false.
“Why would any supermind want something so inherently worthless as the feeling of discovery without any real discoveries?”
“No free lunch. You want a wonderful and mysterious universe? That’s your value.”
“These values do not emerge in all possible minds. They will not appear from nowhere to rebuke and revoke the utility function of an expected paperclip maximizer.”
“Touch too hard in the wrong dimension, and the physical representation of those values will shatter—and not come back, for there will be nothing left to want to bring it back.”
I’ve chosen a small, representative sample of the sorts of things that Eliezer says about human values. When I call Eliezer a moral fictionalist, I don’t mean that he doesn’t think human values are real, just that they are real in the way that fictional stories are real, i.e. they exist only in human minds, and are not in any way objective or discoverable.
Human values are, in Eliezer’s view:
Irrational: they cannot be derived from first principles.
Accidental: they arise from the ancestral environment in which humans evolved.
Inalienable: You can’t jettison them for arbitrary values; your philosophy must ultimately reconcile your stated values with your innate ones.[1]
Fragile: because human values are a small subset of high-dimensional intersections, they are liable to be destroyed by even small perturbations.
All of these attributes are just obvious consequences of his metaphysics, so he doesn’t attempt to justify any of them in the sequence you linked. Why would he? It’s obvious. He’s more interested in examining the consequences of these attributes for civilizational policy.
“You do have values, even when you’re trying to be “cosmopolitan”, trying to display a properly virtuous appreciation of alien minds. Your values are then faded further into the invisible background—they are less obviously human. Your brain probably won’t even generate an alternative so awful that it would wake you up, make you say “No! Something went wrong!” even at your most cosmopolitan. E.g. “a nonsentient optimizer absorbs all matter in its future light cone and tiles the universe with paperclips”. You’ll just imagine strange alien worlds to appreciate.
Trying to be “cosmopolitan”—to be a citizen of the cosmos—just strips off a surface veneer of goals that seem obviously “human”.”
I think Eliezer is a moral antirealist but [see later comment] not a moral fictionalist. Although I’m not 100% sure I’m correctly understanding what “moral fictionalism” is.

If you don’t like Eliezer’s arguments for moral antirealism, the OP is also a moral antirealist, and has written a ton about it, e.g. this post.
“[Eliezer] doesn’t attempt to justify…” strikes me as a pretty ridiculous thing to say, given all he’s written on that topic (e.g. metaethics sequence). You can say that he failed to justify those things, if that’s what you believe. Or you can say that he didn’t even attempt to address the particular aspects / cruxes / counterarguments that seem obviously central and critical from your perspective. But that’s different from not even trying. I think he has tried in good faith, regardless of how it turned out. Note that different readers have different aspects / cruxes / counterarguments that seem obviously central and critical from their own perspective. Communication is hard.
they exist only in human minds, and are not in any way objective or discoverable

That description seems weird, because human minds come from human brains, and human brains are actual, objective, discoverable objects that exist in the world.
For example:
I think Eliezer would say that friendship is probably part of what is “right”.
Separately, I claim that a drive for friendship (and related thoughts and behaviors) was installed in the human brain by evolution acting on the genomes of humans and our ancestors (I’m confident Eliezer also believes that).
The second thing is an “objective and discoverable” scientific hypothesis, right? But these two points are not unrelated. Quite the contrary: I think Eliezer would say that what is “right” is related to what happens when humans use their brains to think and reflect, and so the fact that these brains include an innate friendship drive is very relevant! Do you see what I mean?
(Note: I’m trying to relay Eliezer’s position without endorsing it; actually I think you would find my own perspective even more disagreeable than his, see here.)
Thanks for the reply.
I don’t disagree with Eliezer’s position for the most part, I just don’t see where he lays out a coherent foundation for why he believes certain things about human values. (Or maybe I’m just being uncharitable in my evaluation and not counting some things as “real arguments” that others would.)
By objective and discoverable, I meant the values understood in and of themselves, without reference to humans in particular. Obviously you can just model human brains and understand what they value, but I meant that you can’t learn about “beauty” or “friendship” or what have you outside of that. That part of the post was inelegantly worded, and I’d probably strike it out if this were a long post and not a comment.
I used “Moral Fictionalist” as a descriptor for Eliezer’s position because, although he probably wouldn’t ascribe it to himself, it seems to me to be the best fit for it. I’m not a rationalist, and I don’t have a rationalist background, I just like to read the site from time to time, and very occasionally comment. So my diction tends to sound “foreign” here.
Thanks!
I’m not sure what this is referring to… I used the term “moral antirealist” and you used the term “moral fictionalist”, both of which are philosophy jargon, not Eliezer / rationalist jargon. (I assume you were using “moral fictionalist” in the standard philosophy jargon sense, right?)
Anyway, I have just read more of that SEP article, but remain confused by it. For example, if Mathematical Theorem X is provable from Axiom Set Y, is Theorem X “fictional” or not? My impression from the SEP article is that philosophers sometimes argue about this question. But I don’t understand the nature of that argument. What’s at stake? Why can’t I just say “who cares, call it whatever you want to call it”?
I bring up that example because I think Eliezer sees “X is good” claims as being pretty analogous to “Theorem X is provable from Axiom Set Y” claims. Specifically, the analogy would be:
Axiom Set Y <--> the innate motivations and inclinations in human brains (ignoring interpersonal differences, which he views as sufficiently minor that this is OK to ignore)
Mathematical inference steps <--> the stuff that happens when people use their innate motivations and inclinations, along with their knowledge and reasoning abilities, to reflect on the nature of The Good. Well, actually, some idealization of that.
“Theorem X is provable from Axiom Set Y” <--> such-and-such thing is Good
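To make that mapping concrete, here’s a minimal toy sketch in Python. Everything in it (derivable, human_axioms, reflection_rules, the rule strings) is a hypothetical stand-in I’m making up for illustration, not anything from Eliezer’s writing. What it shows: “derivable from Axiom Set Y” is an objective, mechanically checkable relation, even though nothing forces every mind to start from the same axioms:

```
# Toy model: "provable from Axiom Set Y" as closure of an axiom set under
# simple "if A then B" inference rules. All names here are hypothetical.

def derivable(axioms: set[str], rules: list[tuple[str, str]]) -> set[str]:
    """Return everything derivable from `axioms` via the given rules."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# Axiom Set Y <--> innate human motivations (toy stand-ins).
human_axioms = {"friendship is valued"}
# Inference steps <--> (idealized) reflection on those motivations.
reflection_rules = [("friendship is valued", "friendship is Good")]

# "Theorem X is provable from Axiom Set Y" <--> "friendship is Good":
print("friendship is Good" in derivable(human_axioms, reflection_rules))   # True

# A paperclip maximizer starts from different axioms, so the same
# derivation machinery yields different "theorems":
clippy_axioms = {"paperclips are valued"}
print("friendship is Good" in derivable(clippy_axioms, reflection_rules))  # False
```

The derivability facts are objective in both runs; only the starting axioms differ. Whether that makes the derived claims “fictions” seems to be exactly the question at issue.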
OK, if we accept all those parts of (my attempt to relay) Eliezer’s view, then is the statement “X is Good” a “fiction” in the moral fictionalist sense? I still think the answer is no, but I’m not very confident.
(You’re correct. I was using fictionalist in that sense.)
I think the equivalence “Theorem X is provable from Axiom Set Y” <--> “such-and-such thing is Good” is the part of that chain of reasoning that a self-described fictionalist would ascribe fictionality to.
As I understand it, it’s the difference between thinking that Good is a real feature of the universe and Good being a word game that we play to make certain ideas easier to work with. Maybe a different example could illuminate some of this.
Fictionalism would be a good tool to describe the way we talk about Evolution and Nature. As has sometimes been said on this site, humans are not aligned with Evolution, since they aren’t inclusive-fitness maximizers. We also say things like: such-and-such a feature evolved to perform function X in an organism. Of course, that’s not true. Biological features don’t evolve in order to do a thing; they just happen to do things as a consequence of surviving in an ancestral environment.
We talk about organs and limbs “evolving to do” things, even when they do not, because it is a fiction that makes Evolution more palatable to intellectual examination. But unless you believe in weird stuff like teleology, it’s just a fiction: a story that is convenient, and corresponds to real features of the world, but is not itself strictly true. And it is not untrue in a provisional way that we expect to be overturned by later reasoning and evidence; it is untrue by design, because the literal truth, that biological features arise by chance and operate by chance, is harder to talk coherently about, given human constraints on mental compute.
I think your presentation of Eliezer’s view is like that: it differs from a moral realist’s not only in category (objective morality vs. aligning to human values) but also in being a thought pattern deliberately constructed to aid human cognition, rather than a thought pattern attempting to align closely with a correct mathematical model of the object(s).
That’s my reading of why it would matter whether you’re a moral antirealist (classical) or a moral antirealist (fictionalist). I do consider fictionalism to be a subset of antirealism.
Update: I said above that Eliezer is a moral antirealist, but upon further reflection I think I was wrong. (I’m still not confident though.) This post in particular strikes me as moral realist:

“It may be that humans argue about what’s right, and Pebblesorters do what’s prime. But this doesn’t change what’s right, and it doesn’t make what’s right vary from planet to planet, and it doesn’t mean that the things we do are right in mere virtue of our deciding on them—any more than Pebblesorters make a heap prime or not prime by deciding that it’s “correct”.

The Pebblesorters aren’t trying to do what’s p-prime any more than humans are trying to do what’s h-prime. The Pebblesorters are trying to do what’s prime. And the humans are arguing about, and occasionally even really trying to do, what’s right.”
I think he treats “what’s right” as having a status similar to “what’s provable from Axiom Set Y”. He thinks there are (something akin to) axioms for morality, and these axioms are downstream of random facts about human evolution; but he bundles those human-specific axioms into the definition of the word “right”.
In other words, Eliezer could have talked about “what’s right_(human brains)” instead of “what’s right” (with the same definition, i.e. “what’s right_(human brains)” is something like the limit of idealized human moral reflection, definitely not “what actual humans say is right today”), in which case he would have been a moral antirealist. In fact, I think he could have done that with almost no substantive change to anything he wrote in the metaethics sequence. But as written, I think Eliezer is closer to moral realist.
I’m not a philosopher and might be misunderstanding the terminology here. I’m also not Eliezer :)
“Fictionalist” seems to imply that human moral values are arbitrary, free creations. EY seems to be an antirealist, as far as basic ontology goes, but he also emphasizes that you can’t think outside of the human value structure: human values will always be compelling and seemingly real to you. To me, that’s a quasi-realist position.
All that adds up to the evolutionary view.
I’ve read his writings on the subject without being able to make much sense of them, including the justification of the claim in question.