But it’s not obvious to me that your initial mix of evolved and encultured values even deserves this preferential treatment.
But your initial mix of evolved and encultured values is all you have to go on. There is no other source of values or intuitions. Even if you decide that you disagree with a value, you’re using other evolved or encultured intuitions to decide this. There is literally nothing you can use except these. A person who abandons their religious faith after some thought is using the value “rational thought” against “religious belief.” This person was lucky enough to have “rational thought” instilled by someone as a value, and to have it be strong enough to beat “religious belief.” The only way to change your value system is by using your value system to reflect upon your value system.
The only way to change your value system is by using your value system to reflect upon your value system.
I agree with the message of your post and I up-voted it, but this sentence isn’t technically true. Outside forces that aren’t dependent on your value system can change your value system too, for example if you acquire a particular behaviour-altering parasite or ingest substances that alter your hormone mix. And this is ignoring things like losing your memory or Omega deciding to rewire your brain.
Our values are fragile; some see this as a reason not to be too concerned with them. I find this a rationalization similar to the ones used to deal with the fragility of life itself. Value deathism has parallel arguments to deathism.
What some here might call The Superintelligent Will, but which I see as a logical outgrowth of der Wille zur Macht, is to stamp your values on an uncaring universe.
You totally stole that from me!
Yeah, I totally did; it fit my previous thinking (I was very into Nietzsche a few years back too) and I’ve been building on it since.
Since this is, I think, the second time you’ve made a comment like this, I’m wondering why exactly you feel the need to point this out. I mean, surely you realize you’ve stolen stuff from me too, right? And we both stole loads from a whole bunch of other people. Is this kind of like a bonding fist bump, or a call for me to name-drop you more?
Those who read our public exchanges know we are on good terms and that I like your stuff; I’m not sure what more name-dropping would do for you beyond that, especially since this is material from our private email exchanges and not a public article I can link to. If I recall the exchange correctly, the idea was inspired by a one-line reply you made in a long conversation, so it’s not exactly something easily quotable either.
What some here might call The Superintelligent Will, but which I see as a logical outgrowth of der Wille zur Macht, is to stamp your values on an uncaring universe.
Is this why people like Nietzsche, or do most people who like Nietzsche have different reasons?
Our values are fragile; some see this as a reason not to be too concerned with them.
I think it really depends on the exact value change we’re talking about. There’s an analogue for death/aging: you’d probably greatly prefer aging another 10 years and then being frozen at that biological age forever over aging and dying normally. In the same way, I might not consider a small drift in apparently unimportant values too big a deal in the grand scheme of things, and might not choose to spend resources guarding against it (slippery slope scenarios aside).
In practice, people don’t seem to be that concerned with guarding against small value changes. They do things like travel to new places, make new friends, read books, change religions, etc., all of which are likely to change what they value, often in unpredictable ways.
But your initial mix of evolved and encultured values is all you have to go on.
I don’t think this statement is expressing a factual question. If it is, hopefully “I could generate values randomly” is a workable counterargument.
It’s also not quite clear what you mean by “initial” mix. My values as a 3-year-old were much different than the ones I have today. My values prior to creating this post were likely different from my values after creating it. Which set of values is the “initial” one that is “all I have to go on”?
Where does enculturation stop and moral reflection begin? Is there any reason to distinguish them? Should we treat growing up in a particular society as a permutation of our intuitions of a different (or less preferred) sort than happening to have a certain train of philosophical thought early on?
Abdul grew up in an extremist Pakistani village, but on reflection, he’s against honor killings. Bruce grew up in England, but on reflection, he’s in favor of honor killings. What do you say to each?
I think most LW readers don’t see much sacrosanct about evolved values: Some people have added layers of enculturation and reflection that let them justify crazy stuff. (Ex: pretty much every “bad” thing anyone has done ever, if we’re to believe that everyone’s the hero of their own life story.) So we LWers have already enculturated/reflected ourselves to the point where bare-bones “evolved” values would be considered a scary starting point, I suspect.
Infuriation and “righteous anger” are evolved intuitions; I assume most of us are past the point of endorsing “righteous anger” as being righteous/moral.
A person who abandons their religious faith after some thought is using the value “rational thought” against “religious belief.” This person was lucky enough to have “rational thought” instilled by someone as a value, and to have it be strong enough to beat “religious belief.”
Do you consider God’s existence to be an “is” factual question or an “ought” values question? I consider it a factual question myself.
I think most LW readers don’t see much sacrosanct about evolved values
Maybe because they think about them in far mode. If you think about values as some ancient commandments written on some old parchment, it does not seem like rewriting the parchment could be a problem.
Let’s try it in near mode. Imagine that 1000 years later you are defrosted and see a society optimized for… maximum suffering and torture. It is explained to you that this happened as a result of an experiment to initialize the superhuman AI with random values… and this is what the random generator produced. It will be like this till the end of the universe. Enjoy hell.
What is your reaction to this? Some values were replaced by some other values; thinking abstractly enough, it seems like nothing essential has changed; we are just optimizing for Y instead of X. Most of the algorithm is the same. Even many of the AI’s actions are the same: it tries to better understand human psychology and physiology, get more resources, protect itself against failure or sabotage, self-improve, etc.
How could you explain what is wrong with this scenario, without using some of our evolved values in your arguments? Do you think that a pebblesorter, concerned only with sorting pebbles, would see an important difference between the “human hell” and “human paradise” scenarios? Do you consider this neutrality of the pebblesorter with regard to human concerns (and of humans with regard to pebblesorter concerns) to be a desirable outcome?
(No offense to pebblesorters. If we ever meet them, I hope we can cooperate to create a universe with a lot of happy humans and properly sorted heaps of pebbles.)
How could you explain what is wrong with this scenario, without using some of our evolved values in your arguments?
It’s only “wrong” in the sense that I don’t want it, i.e. it doesn’t accord with my values. I don’t see the need to mention the fact that they may have been affected by evolution.
It’s also not quite clear what you mean by “initial” mix. My values as a 3-year-old were much different than the ones I have today. My values prior to creating this post were likely different from my values after creating it. Which set of values is the “initial” one that is “all I have to go on”?
Sorry, I should have been clearer about that. What I mean is that at any particular moment when one reflects upon one’s values, one can only use one’s current value system to do so. The human value system is dynamic.
Where does enculturation stop and moral reflection begin? Is there any reason to distinguish them?
As with many things in nature, there is no perfectly clear distinction. I generally consider values that I have reflected upon to any degree, especially using my “rational thought” value, to be safe and not dogma.
Do you consider God’s existence to be an “is” factual question or an “ought” values question? I consider it a factual question myself.
My “rational thought” value tells me it’s an “is” question, but most people seem to consider it a value question.
If it is, hopefully “I could generate values randomly” is a workable counterargument.
But why would you do that if your existing value system wouldn’t find that a good idea?
I wouldn’t do that. You misunderstood my response. I said that was my response if he was trying to make an empirical assertion.