randallsquared
But it’s actually true that solving the Hard Problem of Consciousness is necessary to fully explode the Chinese Room! Until it’s solved, it remains possible that the Room doesn’t understand anything, even if you don’t regard this as a knock against the possibility of GAI. I think the Room does say something useful about Turing tests: that behavior suggests implementation, but doesn’t necessarily constrain it. The Giant Lookup Table is another, similarly impractical, argument that makes the same point.
Understanding is either only inferred from behavior, or actually a process that needs to be duplicated for a system to understand. If the latter, then the Room may speak Chinese without understanding it. If the former, then it makes no sense to say that a system can speak Chinese without understanding it.
no immortal horses, imagine that.
No ponies or friendship? Hard to imagine, indeed. :|
Not Michaelos, but in this sense, I would say that, yes, a billion years from now is magical gibberish for almost any decision you’d make today. I have the feeling you meant that the other way ’round, though.
In the context of
But when this is phrased as “the set of minds included in CEV is totally arbitrary, and hence, so will be the output,” an essential truth is lost
I think it’s clear that with
valuing others’ not having abortions loses to their valuing choice
you have decided to exclude some (potential) minds from CEV. You could just as easily have decided to include them and said “valuing choice loses to others’ valuing their lives”.
But, to be clear, I don’t think you will get some sort of unambiguous result even if you limit it to “existing, thinking human minds at the time of the calculation”.
A very common desire is to be more prosperous than one’s peers. It’s not clear to me that there is some “real” goal that this serves (for an individual) -- it could be literally a primary goal. If that’s the case, then we already have a problem: two people in a peer group cannot both get all they want if both want to have more than any other. I can’t think of any satisfactory solution to this. Now, one might say, “well, if they’d grown up farther together this would be solvable”, but I don’t see any reason that should be true. People don’t necessarily grow more altruistic as they “grow up”, so it seems that there might well be no CEV to arrive at. I think, actually, a weaker version of the UFAI problem exists here: sure, humans are more similar to each other than UFAIs need be to each other, but their goal systems and ethical views still seem fundamentally different in many respects.
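A minimal way to make the conflict explicit (the symbols $p_A$ and $p_B$ are just illustrative stand-ins for each person’s prosperity relative to the other; nothing here depends on how prosperity is measured): the two primary goals jointly require

$$p_A > p_B \quad\text{and}\quad p_B > p_A,$$

which is a contradiction, so any extrapolation has to leave at least one of the two goals unsatisfied.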
The point you quoted is my main objection to CEV as well.
You might object that a person might fundamentally value something that clashes with my values. But I think this is not likely to be found on Earth.
Right now there are large groups who have specific goals that fundamentally clash with some goals of those in other groups. The idea of “knowing more about [...] ethics” either presumes an objective ethics or merely points at you, or at where you wish you were.
Yes, I thought about that when writing the above, but I figured I’d fall back on the term “entity”. ;) An entity would be something that could have goals (sidestepping the hard work of saying exactly which objects qualify).
I think I must be misunderstanding you. It’s not so much that I’m saying that our goals are the bedrock, as that there’s no objective bedrock to begin with. We do value things, and we can make decisions about actions in pursuit of things we value, so in that sense there’s some basis for what we “ought” to do, but I’m making exactly the same point you are when you say:
what evidence is there that there is any ‘ought’ above ‘maxing out our utility functions’?
I know of no such evidence. We do act in pursuit of goals, and that’s enough for a positivist morality, and it appears to be the closest we can get to a normative morality. You seem to say that it’s not very close at all, and I agree, but I don’t see a path to closer.
So, to recap, we value what we value, and there’s no way I can see to argue that we ought to value something else. Two entities with incompatible goals are to some extent mutually evil, and there is no rational way out of it, because arguments about “ought” presume a given goal both can agree on.
Would making paperclips become valuable if we created a paperclip maximiser?
To the paperclip maximizer, they would certainly be valuable—ultimately so. If you have some other standard, some objective measurement of value, please show it to me. :)
By the way, you can’t say the wirehead doesn’t care about goals: part of the definition of a wirehead is that he cares most about the goal of stimulating his brain in a pleasurable way. An entity that didn’t care about goals would never do anything at all.
Just to be clear, I don’t think you’re disagreeing with me.
I’m asking about how to efficiently signal actual pacifism.
I’m not asking about faking pacifism. I’m asking about how to efficiently signal actual pacifism. How else am I supposed to ask about that?
Replace “serious injury or death” with “causing serious injury or death”.
If God doesn’t exist, then there is no way to know what He would want, so the replacement has no actual moral rules.
When you consider this, consider the difference between our current world (with all the consequences for those of IQ 85), and a world where 85 was the average, so that civilization and all its comforts never developed at all...
When people say that it’s conceivable for something to act exactly as if it were in pain without actually feeling pain, they are using the word “feel” in a way that I don’t understand or care about.
Taken literally, this suggests that you believe all actors really believe they are the character (at least, if they are acting exactly like the character). Since that seems unlikely, I’m not sure what you mean.
people can see after 30 years that the idea [of molecular manufacturing] turned out sterile.
Did I miss the paper where it was shown not to be workable, or are you basing this only on the current lack of assemblers?
Raw processing power. In the computer analogy, intelligence is the combination of enough processing power with software that implements the intelligence. When people compare computers to brains, they usually seem to be ignoring the software side.
Can you point out why the analogy is bad?
I’ve read over one hundred books I think were better. And I mean that literally; if I spent a day doing it, I could actually go through my bookshelves and write down a list of one hundred and one books I liked more.
I’ve read many, many books I liked more than many books which I would consider “better” in a general sense. From the context of the discussion, I’d think “were better” was what you meant. Alternatively, maybe you don’t experience such a discrepancy between what you like and what you believe is “good writing”?
...but people (around me, at least, in the DC area) do say “Er...” literally, sometimes. It appears to be pronounced that way when the speaker wants to emphasize the pause, as far as I can tell.