You don’t seem to want to state your beliefs clearly, and I don’t have the patience to write more than this one post encouraging you to do so.
Do you believe that the difficulty of developing technology depends on the mind trying to develop it only when that mind happens to be human? Or that nothing can be smarter than the smartest human? Or what?
I’m being as clear as I can without writing an essay in every comment. But I’ll put it this way:
Nothing is currently smarter than the smartest human.
This is not going to change anytime soon.
While AI does have the potential to produce better tools than we have now, there are still going to be enormous gaps in the abilities of those tools. For example, suppose you had an AI that was great at writing code from formal specifications, but didn’t know enough about the real world to know what code to write. Then you would have a tool that you might find useful, but that you could not sit back and let solve your problems for you. At the end of the day, the responsibility for solving your problems would still be yours. This is very different from the Singularitarian vision where creating a superintelligent AI is the last job we need to do.
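To make that division of labour concrete, here is a toy sketch (purely illustrative, in Python, not any particular system) of what such a spec-to-code tool would and wouldn't give you:

```python
# Purely illustrative toy, not any real system: the "formal specification"
# is a predicate the hypothetical code-writing AI could satisfy mechanically.
def spec_sorted_permutation(xs, ys):
    """Spec: ys is xs rearranged into ascending order."""
    return sorted(xs) == list(ys)

# The tool's job: produce an implementation that meets the spec.
def implementation(xs):
    return sorted(xs)

assert spec_sorted_permutation([3, 1, 2], implementation([3, 1, 2]))

# What the tool can't do is tell you whether "sort these" is the right
# spec for your real-world problem; that judgment stays with the human.
```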
Maybe you could write a full post about your views. I’d very much like to read good criticism of singularitarianism, but so far your objections aren’t very strong.
The core assumptions in this comment, for example, aren’t really made explicit. I’m guessing the idea is something like: it’d be really, really hard to build an AI that can do everything a human does, and trying to leave real-world problem-solving to subhuman AIs won’t work.
But no one’s talking about going after problems in the physical world with a glorified optimizing compiler, so why bring that up as the main example? The starting point for a lot of current AGI thinking, as far as I’ve understood it, is to build an AI with the ability to learn and some means to interact with the world. This AI is then expected to learn to act in the world much as humans do when they grow from newborns to adults.
So is there some kind of basic difference in understanding here, where I’m thinking of AIs as learning, semi-autonomous agents, and you’re thinking of them as, I guess, some kind of pre-programmed, unchanging procedures for doing specific things?
Yes, basically my claim is that an AI of the sort you’re talking about is a job for the world over timescales of generations, not for a single team over timescales of years or decades; it’s hard to prove a negative, and you are right that the comments I’ve been making here don’t—can’t—strongly justify that claim. I’ll think about whether I can put together my reasoning into a full post.
Your position is one that most people assign some probability mass to. However, I get the impression that you’re extremely (over)confident in it. So I look forward to hearing your case.
Ok, thanks. As far as I see, this is the most important core objection then.
There’s actually a second big unknown too before getting into full singularitarianism: whether this kind of human-equivalent AI could boost itself to strongly superhuman levels with any sort of ease.
But the question of just how difficult it is to build the learning baby AI is really important, and I don’t have any good ideas on how to estimate it except from what can be figured out from biology. The human genome gives us the number of bits that keeps passing through evolution, and hence the rough initial complexity of a human, but it’s big enough that, without a very good design sense, trying to navigate that kind of design space would indeed take generations. Brains and learning have been evolving for a very long time, which suggests the machinery may be very elaborate to get right. Compared to this, symbolic language seems to have popped up very quickly in evolution, which gives reason to believe that once there’s a robust nonverbal cognitive architecture, adding symbolic cognition capabilities isn’t nearly as hard as getting the basic architecture together.
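For a rough sense of the numbers involved (back-of-envelope only; the genome figure is approximate, and most of it isn’t brain-specific):

```python
# Back-of-envelope only: approximate haploid human genome size.
base_pairs = 3.1e9            # ~3.1 billion base pairs
bits_per_base = 2             # 4 possible bases -> log2(4) = 2 bits
total_bits = base_pairs * bits_per_base
print(f"~{total_bits:.1e} bits, roughly {total_bits / 8 / 1e6:.0f} MB raw")
# ~6.2e9 bits (~775 MB) as a loose upper bound on the design information
# evolution hands a newborn; a design space of that size is far too large
# to search blindly within a few years or decades.
```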
It might also be that the selective pressure in favor of increased intelligence increased suddenly, most likely as a result of competition among humans.
Once a singleton AI becomes marginally smarter than the smartest human, how are we to distinguish further advances in intelligence from, say, an increase in its ability to impress us with high-tech parlor tricks? Would there be competition between AIs, and if so, over what?
“…but didn’t know enough about the real world to know what code to write.”
This requires two things: knowing what you want, and learning about the world.
I don’t see the fundamental problem in getting an AI to learn about the world. The informal human epistemic process has been analyzed into components, and these have been formalized and implemented in ways far more powerful than an unaided human can manage. It’s a lot of work to put it all together in a self-consistent package, and to give it enough self-knowledge and world-knowledge to set it in motion, and it would require a lot of computing power. But I don’t see any fundamental difficulty.
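As one concrete example of such a formalized component (my own toy illustration, not a claim about any particular system), Bayesian updating turns the informal “revise your beliefs in light of evidence” step into a mechanical calculation:

```python
# Toy Bayesian update: posterior P(H|E) from prior P(H), P(E|H), P(E|~H).
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_evidence

# A weak hypothesis needs strong evidence: prior 1%, evidence 18x more
# likely under H than under not-H, posterior still only ~15%.
print(round(bayes_update(0.01, 0.9, 0.05), 3))  # 0.154
```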
What the AI wants is utterly contingent on initial conditions. But an AI that can represent the world and learn about it, can also represent just about any goal you care to give it, so there’s no extra problem to solve here. (Except for Friendliness. But that is the specific problem of identifying a desirable goal, not the general problem of implementing goal-directed behavior.)
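A toy way to see why the goal adds no extra machinery (hypothetical illustration only, not a real architecture): the same decision procedure works unchanged no matter which utility function you plug into it, and choosing a utility we would actually want is the separate Friendliness problem.

```python
# Hypothetical toy agent: the decision machinery is fixed, the goal is a
# swappable parameter.
def choose_action(actions, world_model, utility):
    """Pick the action whose predicted outcome the given utility ranks highest."""
    return max(actions, key=lambda a: utility(world_model(a)))

# The same machinery pursues whatever goal it's handed:
world_model = lambda a: {"press_lever": {"paperclips": 5, "smiles": 0},
                         "tell_joke":   {"paperclips": 0, "smiles": 3}}[a]
clip_utility  = lambda outcome: outcome["paperclips"]
smile_utility = lambda outcome: outcome["smiles"]

actions = ["press_lever", "tell_joke"]
print(choose_action(actions, world_model, clip_utility))   # press_lever
print(choose_action(actions, world_model, smile_utility))  # tell_joke
```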
Just reviewing this basic argument reinforces the prior impression that we are already drifting towards transhuman AI and that there’s no fundamental barrier in the way. We already know enough for hard work alone to get us there—I mean the hard work of tens of thousands of researchers in many fields, not one person or one group making a super-duper effort. The other factor which seals our fate is distributed computing. Even if Moore’s law breaks down, computers can be networked, and there are lots of computers.
So, we are going to face something smarter than human, which means something that can outwit us, which means something that should win if its goals are ever in conflict with ours. And there is no law of nature to guarantee that its goals will be humanly benevolent. On the contrary, it seems like anything might serve as the goal of an AI, just as “any” numerical expression might be fed to a calculator for evaluation.
What we don’t know is how likely it is that the first transhuman AI’s goals will be bad for us. A transhuman AI may require something like the resources of a large contemporary server farm to operate, in which case it’s not going to happen by accident. There is some possibility that the inherent difficulty of getting there makes it more likely that, by the time transhuman AI arrives, the people working on it will have thought ahead to the point where the AI is autonomous and in fact beyond stopping, and realized that it had better have “ethics”. But that only means that by the time the discipline of AI approaches the transhuman threshold, people are probably becoming aware of what we have come to call the problem of Friendliness. It doesn’t mean that the problem is sure to have been solved by the time the point of no return is reached.
All in all, therefore, I conclude that (1) the Singularity concept makes sense; (2) it is a matter for concern in the present and near future, not the far future; (3) figuring out the appropriate initial conditions for an ethical AI is the key problem to solve; and (4) SIAI is historically important as the first serious attempt to solve this problem.