Well, we can agree that the default outcome is probably death.
So, in my previous comment, I explained why I don't think Complexity of Value necessarily dooms us. I doubt you find that reasoning remotely reassuring, but I'd be interested to hear why you think it shouldn't be. Would you be willing to try to explain that to me?
Hi, so I don’t understand why you’re not worried except that “some clever people don’t seem worried”.
But actually I think all those guys are in fact quite worried. If they aren't full-on doomers, then I don't understand what they're hoping to do.
So I’ll repeat my argument:
(1) We’re about to create a superintelligence. This is close and there’s no way to stop it.
(2) If we create a superintelligence, then whatever it wants is what is going to happen.
(3) If that’s not what we want, that’s very bad.
(4) We have no idea what we want, not even roughly, let alone in the sense of formal specification.
That’s pretty much it. Which bit do you disagree with?
I never meant to claim that my position was "clever people don't seem worried, so I shouldn't be". If that's what you got from me, then that's my mistake. As a matter of fact, I'm incredibly worried, and more importantly, everyone I mentioned is too, to some extent or another, as you already pointed out. What I meant to say, but failed to convey, was that there's enough disagreement in these circles that near-absolute confidence in doom seems to be jumping the gun. That argument holds just as well against people who are certain that everything will go fine.
I guess most of my disagreement comes from (4). Or rather, from the implication that having an exact formal specification of human values ready to be encoded is the only way things could possibly go well. I already tried to verbalize as much earlier, but maybe I didn't do a good job of that either.
I wouldn’t call my confidence in doom near-absolute, so much as “very high”! I would have been just as much a doomer in 1950, last time AI looked imminent, before it was realized that “the hard things are easy and the easy things are hard”.
I wouldn’t be that surprised if it turned out that we’re still a few fundamental discoveries away from AGI. My intuition is telling me that we’re not.
But the feeling that we might get away with it is only coming from a sense that I can easily be wrong about stuff. I would feel the same if I’d been transported back to 1600, made myself a telescope, and observed a comet heading for earth, but no-one would listen.
“Within my model”, as it were, yes, near-absolute is a fair description.
The long-term problem is that an agent is going to have a goal, and most goals kill us. We get to make exactly one wish, and that wish will come true whether we want it or not. Even if the world were sane, this would be a very, very dangerous situation. I would want to see very strong mathematical proof that such a thing was safe before trying it, and I'd still expect it to kill everyone.
The short-term problem is that we're not even trying. People all over the place are actively building more and more general agents that make plans, with just any old goals, apparently without worrying about it, and they don't believe there's a problem.
What on earth do you think might stop the apocalypse? I can imagine something like “take over the world, destroy all computers” might work, but that doesn’t look feasible without superintelligent help, and that puts us in the situation where we have a rough idea what we want, but we still need to find out how to express that formally without it leading to the destruction of all things.
As a very wise man once said: “The only genie to which it is safe to make a wish is one to which you don’t need to make a wish, because it already knows what you want and it is on your side.”