The real dichotomy here is “maximising the evaluation function” versus “maximising the probability of a positive evaluation function”.
In paperclip making, or, better, in the game of Othello/Reversi, there are choices like this:
80% chance of winning 60-0, versus
90% chance of winning 33-31.
The first maximises the expected score, and is similar to a paperclip maker consuming the entire universe. The second maximises the probability of winning at all, and is similar to a paperclip maker avoiding being annihilated by aliens or other unknown forces.
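To make the arithmetic concrete, here is a minimal sketch of the two decision rules applied to the choice above (the numbers are the 80%/60-0 and 90%/33-31 options from the comment; the option labels and helper functions are illustrative, not from any particular library):

    # A minimal sketch of the two decision rules, using the 80%/60-0 versus
    # 90%/33-31 choice from the comment above. Names are illustrative.

    options = {
        "win 60-0":  {"p_win": 0.80, "score": 60},
        "win 33-31": {"p_win": 0.90, "score": 33},
    }

    def expected_score(opt):
        # Rule 1: maximise the evaluation function (expected score).
        return opt["p_win"] * opt["score"]

    def win_probability(opt):
        # Rule 2: maximise the probability of a positive evaluation (a win).
        return opt["p_win"]

    best_by_score = max(options, key=lambda k: expected_score(options[k]))
    best_by_prob = max(options, key=lambda k: win_probability(options[k]))

    print(best_by_score)  # "win 60-0":  0.80 * 60 = 48.0 beats 0.90 * 33 = 29.7
    print(best_by_prob)   # "win 33-31": 0.90 beats 0.80

Under a score-linear evaluation the two rules pick different options, which is the dichotomy being claimed.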
Mathematically, the first is similar to finding the shortest program in Kolmogorov Complexity, while the second is similar to integrating over programs.
So, friendly AI is surely of the second kind, while insane AI is of the first kind.
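For reference, the standard quantities the Kolmogorov/Solomonoff analogy above appears to gesture at are the shortest-program definition of Kolmogorov complexity and Solomonoff’s sum over programs; a rough sketch of the usual definitions (the mapping onto the two decision rules is the commenter’s analogy, not a formal result):

    K(x) = \min \{\, \ell(p) : U(p) = x \,\}
    M(x) = \sum_{p \,:\, U(p) \text{ extends } x} 2^{-\ell(p)}

where U is a universal machine and \ell(p) is the length of program p; the first picks out a single shortest program, while the second integrates the weight 2^{-\ell(p)} over all programs consistent with x.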
I guess those of you who down-voted me felt quite rational when doing so.
And this is precisely the reason I seldom post here, and only read a few posters that I know are rational from their own work on the net, not from what they write here:
There are too many fake rationalists here. The absence of any real arguments either way in reply to my article above is evidence of this.
My Othello/Reversi example above was easy to understand, and it bears on a very central problem in AI systems, so it should be of interest to real rationalists interested in AI. Instead there has only been a negative reaction, from people who, I guess, have not even built a decent game-playing AI but nevertheless have strong opinions on how such systems must behave.
So, for getting intelligent rational arguments on AI, this community is useless, as opposed to Yudkowsky, Schmidhuber, Hansen, Tyler, etc., who have shown on their own sites that they have something to contribute.
To get real results in AI and rationality, I do my own math and science.
Your Othello/Reversi example is fundamentally flawed, but it may not seem like it unless you realize that at LW the tradition is to say that utility is linear in paperclips to Clippy. That may be our fault, but there’s your explanation. “Winning 60-0”, to us using our jargon, is equivalent to one paperclip, not 60. And “winning 33-31” is also equivalent to one paperclip, not 33. (Or they’re both equivalent to x paperclips, whatever.)
So when I read your example, I read it as “80% chance of 1 paperclip, or 90% chance of 1 paperclip”.
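A minimal way to see that reading (a sketch under the stated convention that any win counts as exactly one paperclip; the function name is hypothetical):

    # If utility is one paperclip for any win (utility linear in paperclips,
    # a win = one paperclip), the margin of victory drops out and only the
    # probability of winning remains to be compared.

    def expected_paperclips(p_win, paperclips_per_win=1):
        return p_win * paperclips_per_win

    print(expected_paperclips(0.80))  # the "winning 60-0" option  -> 0.8
    print(expected_paperclips(0.90))  # the "winning 33-31" option -> 0.9

With the margin gone, the 90% option trivially dominates, so the original choice no longer illustrates a dichotomy.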
I’m sure it’s very irritating to have your statement miscommunicated because of a jargon difference (paperclip = utility rather than f(paperclip) = utility)! I encourage you to post anyway, and begin with the assumption that we misunderstand you rather than the assumption that we are “fake rationalists”, but realize that in the current environment (unfortunately or not, but there it is) the burden of communication is on the poster.
While most of this seems sensible, I don’t understand how your last sentence follows. I have heard similar strategies suggested to reduce the probability of paperclipping, but it seems like if we actually succeed in producing a true friendly AI, the quantity it tries to maximize (expected winning, P(winning), or something else) will depend on how we evaluate outcomes.
This made some sense to me, at least to the point where I’d expect an intelligent refutation from those who disagree, and it seems posted in good faith. What am I missing about the voting system? Or about this post?
You got voted down because you were rational. You went over some people’s heads.
These are popularity points, not rationality points.
That is something we worry about from time to time, but in this case I think the downvotes are justified. Tim Tyler has been repeating a particular form of techno-optimism for quite a while, which is fine; it’s good to have contrarians around.
However, in the current thread, I don’t think he’s taking the critique seriously enough. It’s been pointed out that he’s essentially searching for reasons that even a Paperclipper would preserve everything of value to us, rather than just putting himself in Clippy’s place and really asking for the most efficient way to maximize paperclips. (In particular, preserving the fine details of a civilization, let alone actual minds from it, is really too wasteful if your goal is to be prepared for a wide array of possible alien species.)
I feel (and apparently, so do others) that he’s just replying with more arguments of the same kind as the ones we generally criticize, rather than finding other types of arguments or providing a case why anthropomorphic optimism doesn’t apply here.
In any case, thanks for the laugh line:
You went over some people’s heads.
My analysis of Tim Tyler in this thread isn’t very positive, but his replies seem quite clear to me; I’m frustrated on the meta-level rather than the object-level.
It’s been pointed out that he’s essentially searching for reasons that even a Paperclipper would preserve everything of value to us, rather than just putting himself in Clippy’s place and really asking for the most efficient way to maximize paperclips.
I don’t think that a paperclip maximiser would “preserve everything of value to us” in the first place. What I actually said at the beginning was:
TT: I figure a fair amount of modern heritable information (such as morals) will not be lost.
Not everything. Things are constantly being lost.
In particular, preserving the fine details of a civilization, let alone actual minds from it, is really too wasteful if your goal is to be prepared for a wide array of possible alien species.
What I said here was:
TT: it is a common drive for intelligent agents to make records of their pasts—so that they can predict the consequences of their actions in the future.
We do, in fact, have detailed information about how much our own civilisation is prepared to spend on preserving its own history. We preserve many things which are millions of years old—and which take up far more resources than a human. For example, see how this museum dinosaur dwarfs the humans in the foreground. We have many such exhibits—and we are still a planet-bound civilisation. Our descendants seem likely to have access to much greater resources—and so may devote a larger quantity of absolute resources to museums.
So: that’s the basis of my estimate. What is the basis of your estimate?
I agree with your criticism, but I doubt that good will come of replying to a comment like the one you’re replying to here, I’m afraid.
Fair enough; I should have replied to Tim directly, but couldn’t pass up the laugh-line bit.