But your initial mix of evolved and encultured values are all you have to go on.
I don’t think this statement is expressing a factual question. If it is, hopefully “I could generate values randomly” is a workable counterargument.
It’s also not even clear quite what you mean by “initial” mix. My values as a 3-year-old were much different than the ones I have today. My values prior to creating this post were likely different from my values after creating it. Which set of values is the “initial” one that is “all I have to go on”?
Where does inculturation stop and moral reflection begin? Is there any reason to distinguish them? Should we treat growing up in a particular society as a permutation of our intuitions of a different (or preferred) sort than happening to have a certain train of philosophical thought early on?
Abdul grew up in an extremist Pakistani village, but on reflection, he’s against honor killings. Bruce grew up in England, but on reflection, he’s in favor of honor killings. What do you say to each?
I think most LW readers don’t see much sacrosanct about evolved values: Some people have added layers of enculturation and reflection that let them justify crazy stuff. (Ex: pretty much every “bad” thing anyone has done ever, if we’re to believe that everyone’s the hero of their own life story.) So we LWers have already enculturated/reflected ourselves to the point where bare-bones “evolved” values would be considered a scary starting point, I suspect.
Infuriation and “righteous anger” are evolved intuitions; I assume most of us are past the point of endorsing “righteous anger” as being righteous/moral.
A person who abandons their religious faith after some thought is using the value “rational thought” against “religious belief.” This person was lucky enough to have “rational thought” instilled by someone as a value, and have it be strong enough to beat “religious belief.”
Do you consider God’s existence to be an “is” factual question or an “ought” values question? I consider it a factual question myself.
I think most LW readers don’t see much sacrosanct about evolved values
Maybe because they think about them in far mode. If you think about values as some ancient commandments written on some old parchment, it does not seem like rewriting the parchment could be a problem.
Let’s try it in near mode. Imagine that 1000 years from now you are defrosted and see a society optimized for… maximum suffering and torture. It is explained to you that this happened as a result of an experiment to initialize a superhuman AI with random values… and this is what the random generator generated. It will be like this till the end of the universe. Enjoy hell.
What is your reaction to this? Some values were replaced by other values; thinking abstractly enough, it seems like nothing essential has changed: we are just optimizing for Y instead of X. Most of the algorithm is the same. Even many of the AI’s actions are the same: it tries to better understand human psychology and physiology, get more resources, protect itself against failure or sabotage, self-improve, etc.
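To make that concrete, here is a minimal sketch (purely illustrative; the toy world state, the `paradise_utility`/`hell_utility` functions, and the hill-climbing loop are all invented for this example, not anyone’s actual proposal) of how swapping the objective leaves the rest of the optimization machinery untouched:

```python
import random

# Two opposite value systems, expressed as objective functions over a toy world state.
# (The state variables and both utility functions are invented for this illustration.)
def paradise_utility(world):
    return world["flourishing"] - world["suffering"]

def hell_utility(world):
    return world["suffering"] - world["flourishing"]

def optimize(utility, steps=10000):
    """A generic hill climber: the loop is identical no matter which
    utility function it is handed."""
    world = {"flourishing": 0.0, "suffering": 0.0}
    for _ in range(steps):
        # Propose a small random change and keep it only if it scores higher.
        candidate = {k: v + random.uniform(-1.0, 1.0) for k, v in world.items()}
        if utility(candidate) > utility(world):
            world = candidate
    return world

# Same algorithm, same instrumental behaviour (search, compare, keep improvements);
# only the value plugged in at the top differs.
print(optimize(paradise_utility))   # drives flourishing up, suffering down
print(optimize(hell_utility))       # drives suffering up, flourishing down
```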
How could you explain what is wrong with this scenario, without using some of our evolved values in your arguments? Do you think that a pebblesorter, concerned only with sorting pebbles, would see an important difference between the “human hell” and “human paradise” scenarios? Do you consider this neutrality of the pebblesorter with regard to human concerns (and of humans with regard to pebblesorter concerns) to be a desirable outcome?
(No offense to pebblesorters. If we ever meet them, I hope we can cooperate to create a universe with a lot of happy humans and properly sorted heaps of pebbles.)
How could you explain what is wrong with this scenario, without using some of our evolved values in your arguments?
It’s only “wrong” in the sense that I don’t want it, i.e. it doesn’t accord with my values. I don’t see the need to mention the fact that they may have been affected by evolution.
It’s also not even clear quite what you mean by “initial” mix. My values as a 3-year-old were much different than the ones I have today. My values prior to creating this post were likely different from my values after creating it. Which set of values is the “initial” one that is “all I have to go on”?
Sorry, I should have been clearer about that. What I mean is that at any particular moment when one reflects upon one’s values, one can only use one’s current value system to do so. The human value system is dynamic.
Where does inculturation stop and moral reflection begin? Is there any reason to distinguish them?
Like many things in nature, there is no perfectly clear distinction. I generally consider values that I have reflected upon to any degree, especially using my “rational thought” value, to be safe and not dogma.
Do you consider God’s existence to be an “is” factual question or an “ought” values question? I consider it a factual question myself.
My “rational thought” value tells me it’s an “is” question, but most people seem to consider it a value question.
I don’t think this statement is expressing a factual question. If it is, hopefully “I could generate values randomly” is a workable counterargument.
But why would you do that if your existing value system wouldn’t find that a good idea?
I wouldn’t do that. You misunderstood my response. I said that was my response if he was trying to make an empirical assertion.