This post seems almost totally wrong to me. For one thing, its central claim (that without human values the future would, with high probability, be dull) is not even properly defined.
To be a little clearer, one would need to say something like: if you consider a specified enumeration over the space of possible utility functions, a random small sample from that space would be “dull” (it might help to say a bit more about what dullness means too, but that is a side issue for now).
That claim might well be true for typical “shortest-first” enumerations in sensible languages, but it is not a very interesting claim, since the dull utility functions would be those which lead to an attainable goal, such as “count up to 10 and then stop”.
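To make the enumeration-and-sampling idea concrete, here is a toy sketch (my own illustration, not anything from the post; the token “language” and all names in it are made up):

```python
import itertools
import random

# Hypothetical tokens standing in for primitive goals; purely illustrative.
TOKENS = ["count_to_10_and_stop", "make_paperclips",
          "seek_primes", "copy_self", "do_nothing"]

def shortest_first(max_len):
    """Yield every token sequence in order of increasing length."""
    for length in range(1, max_len + 1):
        yield from itertools.product(TOKENS, repeat=length)

# Take a prefix of the shortest-first enumeration, then draw a small
# random sample from it, as the claim under discussion suggests.
prefix = list(itertools.islice(shortest_first(3), 500))
for program in random.sample(prefix, 5):
    print(program)
```

On a weighting like this, most short sequences encode trivially attainable goals of the “count up to 10 and then stop” kind, which is the sense of “dull” at issue.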
The “open-ended” utility functions, the ones that resulted in systems that would spread out, would almost inevitably lead to rich complexity. You can’t turn the galaxy into paperclips (or whatever) without extensively mastering science, technology, interstellar flight, nanotechnology, and so on. So you need scientists and engineers, and other complicated and interesting things. This conclusion seems so obvious to me as to hardly be worth discussing.
I’ve explained all this to Eliezer before. After reading this post, I still have very little idea of what it is that he isn’t getting. He seems to think that making paperclips is boring. However, it is no more boring than making DNA sequences, and that is the current aim of most living systems.
A prime-seeking civilisation is at a competitive disadvantage against one that doesn’t have silly, arbitrary bits tacked onto its utility function. It is more likely to be wiped out in a battle with an alien race, and it is more likely to suffer a mutiny from within. However, that is about all. It is unlikely to lack science, technology, or other interesting stuff.
You can’t turn the galaxy into paperclips (or whatever) without extensively mastering science, technology, interstellar flight, nanotechnology, and so on. So you need scientists and engineers, and other complicated and interesting things. This conclusion seems so obvious to me as to hardly be worth discussing.
Yes, but there would be no persons. There would be no scientists, no joy of discovery, no feeling of curiosity. There would just be a “process” that, from the outside, would look like an avalanche of expanding machinery, and that, on the inside, would have no subjective experience. It would contain a complex intelligence, but there would be no one to marvel at it, not even itself, because in all likelihood there would be “no one home”.
For me, what proved decisive in reaching a low estimate of the value of such a system was the realization that the reason I liked science, technology, and the rest was my subjective experience of finding out the answers.
It is unlikely to lack science, technology, or other interesting stuff.
Interestingness is in the eye of the beholder, but this piece argues that the beholder would have no eye; there would be an optimizing process that lacked the ability to experience joy in any of its discoveries.
While I think you may very plausibly be correct, there are (I think) reasonable grounds for uncertainty. I can imagine that an advanced algorithm that performs the role of making scientific discoveries, to aid in the development of technologies for the great paperclip fleet, might indeed have “someone home”. It may be that this is beneficial to its effectiveness, or even close to essential.
I can’t make any strong claims about why this would be needed, only that human beings (or at least I) do have “someone home”. But if we didn’t know about human beings and were speculating on what organisms evolution might produce, we might find ourselves postulating complex, social creatures who solve complicated tasks but have “no one home”, and we would obviously be wrong.
So you need scientists and engineers, and other complicated and interesting things. This conclusion seems so obvious to me as to hardly be worth discussing.
But you don’t need very many, and you’re free to enslave them while they work and then kill them once they’re done. They might not need to be conscious, and they certainly don’t need to enjoy their work.
Probably, they will just be minor subroutines of the original AI, deleted and replaced once they learn everything necessary, which won’t take long for a smart AI.
He seems to think that making paperclips is boring. However, it is no more boring than making DNA sequences, and that is the current aim of most living systems.
We don’t particularly value copying DNA sequences for its own sake either, though. Imagine a future where an unthinking strain of bacteria functioned like grey goo, replicating itself using all the matter in its light cone, and was impervious to mutations. I wouldn’t rate that future as any more valuable than one where all life went extinct. The goals of evolution aren’t necessarily our goals.
However, it is no more boring than making DNA sequences, and that is the current aim of most living systems.
Making a DNA sequence would count as [an extremely low-level activity](http://lesswrong.com/lw/xr/in_praise_of_boredom/), one necessary to support non-boring activities. It is a very simple argument: these are exactly the activities we stop thinking about, so that we can concentrate on novel ones.