Where do you include environmental and cultural influences?
While these vary, I don’t see legitimate values that could be affected by them. Could you provide examples of such values?
This does not follow. Maybe you need to give some examples. What do you mean by “correct” and “error” here?
Imagine that two exact replicas of a person exist in different locations, exactly the same except for an antagonism in one of their values. Both could not be correct at the same time about that value. I mean error in the sense that Eliezer employs in Coherent Extrapolated Volition, for example: the error that comes from insufficient intelligence in thinking about our values.
This is a contentious attempt to convert everything to hedons. People have multiple contradictory impulses, desires and motives which shape their actions, often not by “maximizing good feelings”.
Except in the aforementioned sense of error, could you provide examples of legitimate values that don’t reduce to good and bad feelings?
Really? Been to YouTube and other video sites lately?
I think the literature on masochism is better evidence than YouTube videos, which could be isolated incidents involving people who are not regular masochists. If you have evidence from those sites, I’d like to see it.
This is wrong in so many ways, unless you define reality as “conscious experiences in themselves”, which is rather non-standard. In any case, unless you are a dualist, you can probably agree that your conscious experiences can be as virtual as anything else.
Even if virtual, or illusory, they would still be real occurrences, and real illusions, since they are directly felt. I mean virtual in the sense of Nick Bostrom’s simulation argument.
Uhh, that post sucked as well.
Perhaps it was not sufficiently explained, but check this introduction on Less Wrong, then, or the comment I made below about it:
http://lesswrong.com/lw/19d/the_anthropic_trilemma/
I have read many of the Sequences and understand them well, and I assure you that if this post seems not to make sense, it is because it was not explained at sufficient length.
Imagine that two exact replicas of a person exist in different locations, exactly the same except for an antagonism in one of their values. Both could not be correct at the same time about that value.
The two can’t be perfectly identical if they disagree. For the conclusion to hold, you have to additionally assume that the discrepancy is in the parts that reason about their values rather than in the values themselves.
What if I changed the causal chain in this example: instead of the antagonistic values being caused by the identical agents themselves, suppose I had inserted the antagonistic values into their memories while performing their replication? I could have taken the antagonistic value from the mind of a different person and put it into one of the replicas, complete with a short reasoning or justification in its memory.
They would both wake up, one with one value in its memory and the other with an antagonistic value. What would make one of them correct and not the other? Could both values be correct? The issue here is whether any values whatsoever can be validly held by similar beings, or whether a good justification is needed. In CEV, Eliezer proposed that we can make errors about our values, and that our values should be extrapolated to the reasoning we would do if we had greater intelligence.
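To make the structure of the thought experiment explicit, here is a minimal, hypothetical sketch in Python. The agent representation, the “pain” value slot, and all names are illustrative assumptions of mine, not anything from CEV or the original argument: two agents share every memory and faculty, differ only in one inserted value, and no function of their shared machinery can say which value is the correct one.

```python
import copy

def replicate_with_inserted_value(original, value_key, antagonistic_value):
    """Copy an agent, overwriting one value and planting a short
    justification for it in the replica's memory (as in the example above)."""
    replica = copy.deepcopy(original)
    replica["values"][value_key] = antagonistic_value
    replica["memories"].append(
        "a short justification for holding: " + antagonistic_value
    )
    return replica

# A toy agent: a shared life history plus a hypothetical value slot.
original = {
    "memories": ["shared life history"],
    "values": {"pain": "disvalued"},
}
replica = replicate_with_inserted_value(original, "pain", "valued")

# The two now hold antagonistic values about the same thing.
assert original["values"]["pain"] != replica["values"]["pain"]

def shared_faculties_verdict(agent):
    # Stands in for any reasoning the agents have in common; it sees only
    # the shared history, so it returns the same answer for both replicas.
    return hash(agent["memories"][0])

# Their shared machinery cannot adjudicate between the two values:
assert shared_faculties_verdict(original) == shared_faculties_verdict(replica)
print("No internal fact distinguishes which value is 'correct'.")
```

The point the sketch tries to capture is that if correctness about a value were settled by the agents’ shared reasoning, both replicas would get the same verdict, so something beyond the agents themselves (such as the extrapolation CEV proposes) would be needed to call one value an error.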