...your time may be better spent trying to convince Ben Goertzel that there’s a problem, since at least he’s an immediate threat. ;)
I doubt it. Nor do I believe that people like Jürgen Schmidhuber are a risk, apart from a very abstract possibility.
The reason is that they have been unable to show applicable progress on a par with IBM Watson or Siri. And if they claim that their work hinges on a single mathematical breakthrough, I doubt that confidence in such a prediction would be justified even in principle.
In short, their work is either incrementally useful or based on wild speculation about the possible discovery of unknown unknowns.
The real risks, in my opinion, are 1) that together they make many independent discoveries and someone builds something out of them; 2) that a huge company like IBM, or a military project, builds something; and 3) the abstract possibility that some partly related field like neuroscience, or an unrelated field, provides the necessary insight to put two and two together.
I reject the notion that one can factorize intelligence from goals, so that one could take a superintelligence and fuse it with a goal to optimize for paperclips.
Do you mean that intelligence is fundamentally interwoven with complex goals?
...never completely turn its available resources into paperclips since that would mean no chance of more paperclips in the future;
Do you mean that there is no point at which exploitation is favored over exploration?
I’m not partisan enough to prioritize human values over the Darwinian imperative.
I am not sure what you mean; could you elaborate? Do you mean something along the lines of what Ben Goertzel says in the following quote:
But my gut reaction is: I’d choose humanity. As I type these words, the youngest of my three kids, my 13 year old daughter Scheherazade, is sitting a few feet away from me doing her geometry homework and listening to Scriabin’s Op. 28 Fantasy on her new MacBook Air that my parents got her for Hanukah. I’m not going to will her death to create a superhuman artilect. Gut feeling: I’d probably sacrifice myself to create a superhuman artilect, but not my kids…. I do have huge ambitions and interests going way beyond the human race – but I’m still a human.
You further wrote:
In summary, I’m just not worried about AI risk
What is your best guess at why people associated with SI are worried about AI risk?
I’ve heard many (probably most) AI-risk arguments, and failed to become worried...
If you had to fix the arguments for the proponents of AI risk, what would be the strongest argument in their favor? Also, do you expect there is anything that could possibly change your mind about the topic and make you worried?
Do you mean that intelligence is fundamentally interwoven with complex goals?
Essentially, yes. I think that defining an arbitrary entity’s “goals” is not obviously possible, unless one simply accepts the trivial definition of “its goals are whatever it winds up causing”; I think intelligence is fundamentally interwoven with causing complex effects.
Do you mean that there is no point at which exploitation is favored over exploration?
I mean that there is no point at which exploitation is favored exclusively over exploration.
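To give a rough sense of what “never exclusively exploiting” can mean in practice, here is a minimal sketch using a standard epsilon-greedy multi-armed bandit as an analogy; the bandit setup, the 1/t decay schedule, and every name in it are illustrative assumptions, not anything from the discussion above. The exploration rate shrinks over time but never reaches zero, so exploitation is increasingly favored without ever being favored exclusively.

```python
import random

# Minimal epsilon-greedy bandit sketch (an illustrative analogy only): the exploration
# rate decays toward zero but stays positive at every finite step, so exploitation is
# increasingly favored yet never favored *exclusively* over exploration.

def epsilon_greedy_bandit(true_means, steps=10_000, seed=0):
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # how often each arm has been pulled
    estimates = [0.0] * n_arms   # running mean reward per arm

    for t in range(1, steps + 1):
        epsilon = 1.0 / t        # > 0 for every finite t: exploration never stops
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                           # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean

    return estimates, counts

if __name__ == "__main__":
    est, cnt = epsilon_greedy_bandit([0.1, 0.5, 0.9])
    print("estimated means:", [round(e, 2) for e in est])
    print("pull counts:    ", cnt)
```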
Do you mean… “Gut feeling: I’d probably sacrifice myself to create a superhuman artilect, but not my kids….”
I’m 20 years old—I don’t have any kids yet. If I did, I might very well feel differently. What I do mean is that I believe it to be culturally pretentious, and even morally wrong (according to my personal system of morals), to assert that it is better to hold back technological progress if necessary to preserve the human status quo, rather than allow ourselves to evolve into and ultimately be replaced by a superior civilization. I have the utmost faith in Nature to ensure that eventually, everything keeps getting better on average, even if there are occasional dips due to, e.g., wars; but if we can make the transition to a machine civilization smooth and gradual, I hope there won’t even have to be a war (a la Hugo de Garis).
What is your best guess at why people associated with SI are worried about AI risk?
Well, the trivial response is to say “that’s why they’re associated with SI.” But I assume that’s not how you meant the question. There are a number of reasons to become worried about AI risk. We see AI disasters in science fiction all the time. Eliezer makes pretty good arguments for AI disasters. People observe that a lot of smart folks are worried about AI risk, and it seems to be part of the correct contrarian cluster. But most of all, I think it is a combination of fear of the unknown and implicit beliefs about the meaning and value of the concept “human”.
If you had to fix the arguments for the proponents of AI risk, what would be the strongest argument in their favor?
In my opinion, the strongest argument in favor of AI-risk is the existence of highly intelligent but highly deranged individuals, such as the Unabomber. If mental illness is a natural attractor in mind-space, we might be in trouble.
Also, do you expect there is anything that could possibly change your mind about the topic and make you worried?
Naturally. I was somewhat worried about AI-risk before I started studying and thinking about intelligence in depth. It is entirely possible that my feelings about AI-risk will follow a Wundt curve, and that once I learn even more about the nature of intelligence, I will realize we are all doomed for one reason or another. Needless to say, I don’t expect this, but you never know what you might not know.
I have the utmost faith in Nature to ensure that eventually, everything keeps getting better on average
The laws of physics don’t care. What process do you think explains the fact that you have this belief? If the truth of a belief isn’t what causes you to have it, having that belief is not evidence for its truth.
I’m afraid it was no mistake that I used the word “faith”!
This belief does not appear to conflict with the truth (or at least that’s a separate debate) but it is also difficult to find truthful support for it. Sure, I can wave my hands about complexity and entropy and how information can’t be destroyed but only created, but I’ll totally admit that this does not logically translate into “life will be good in the future.”
The best argument I can give goes as follows. For the sake of discussion, at least, let’s assume MWI. Then there is some population of alternate futures. Now let’s assume that the only stable equilibria are entirely valueless state ensembles such as the heat death of the universe. With me so far? OK, now here’s the first big leap: let’s say that our quantification of value, from state ensembles to the nonnegative reals, can be approximated by a continuous function. Therefore, by application of Conley’s theorem, the value trajectories of alternate futures fall into one of two categories: those which asymptotically approach 0, and those which asymptotically approach infinity. The second big leap involves disregarding those alternate futures which approach zero. Not only will you and I die in those futures, but we won’t even be remembered; none of our actions or words will be observed beyond a finite time horizon along those trajectories. So I conclude that I should behave as if the only trajectories are those which asymptotically approach infinity.
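Restated slightly more formally, and only as a sketch of the dichotomy being claimed here (the continuous value function V and the “valueless stable equilibria” premise are assumptions carried over from the argument above, not established results):

```latex
% Sketch of the claimed dichotomy, not a proof. Assumed (as stated above): a space S
% of alternate futures, a continuous value function V, and the premise that the only
% stable equilibria are valueless state ensembles (e.g. the heat death of the universe).
\[
  V : S \to [0,\infty) \ \text{continuous}, \qquad
  \{\text{stable equilibria}\} \subseteq V^{-1}(0).
\]
% The argument then appeals to Conley's theorem to split the value trajectories
% t -> V(x_t) of alternate futures into exactly two classes,
\[
  \lim_{t \to \infty} V(x_t) = 0
  \qquad \text{or} \qquad
  \lim_{t \to \infty} V(x_t) = \infty,
\]
% and the second "big leap" is to disregard the trajectories in the first class.
```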
It seems to me like you assume that you have no agency in pushing the value trajectories of alternate futures towards infinity rather than zero, and I don’t see why.
Is this a variant of quantum suicide, with the “suicide” part replaced by “dead and forgotten in the long run, whatever the cause”?