Let me continue to play Devil’s Advocate for a second, then. There are many reasons why attempting to influence the far future might not be the most important task in the world.
The one I’ve already mentioned, indirectly, is the idea that it becomes super-exponentially futile to predict the consequences of your actions the farther in the future you go. For instance, SIAI might raise awareness of AI to the extent that regulations are passed, and no early AI accidents happen: however, this causes complacency that does allow a large AI accident to happen; whereas if SIAI had never existed, and an early AI Chernobyl did occur, this would have prompted the governments to take effective measures to regulate AI.
Another viewpoint is the bleak but by no means indefensible idea that it is impossible to prevent all existential disasters: the human race, or at least our values, will inevitably be reduced to inconsequence one way or another, and the only thing we can do is simply to reduce the amount of suffering in the world right now.
These are no reasons to give up, either, but the fact is that we simply don’t know enough to say anything about the non-near future with any confidence. That’s no reason to give up, of course, in fact—our lack of understanding makes it more valuable to try to improve our understanding of the future, as SIAI is doing. So maybe make that you official stated goal: simply to understand if there’s even a possibility of influencing the future—it is a noble and defensible goal by itself. But even then, arguably not the most important thing in the world.
whereas if SIAI had never existed, and an early AI Chernobyl did occur, this would have prompted the governments to take effective measures to regulate AI.
What sort of rogue AI disaster are you envisioning that is big enough to get this attention, but then stops short of wiping out humanity? Keep in mind that this disaster would be driven by a deliberative intelligence.
I think people are drastically underestimating the difficulty for an AI to make the transition from human dependent to self-sustaining. Let’s look at what a fledgling escaped AI has access to and depends on.
It needs electricity, communications and hardware. It has access to a LOT of electricity, communications and hardware. The hardware is, for the most part, highly distributed, however, and it can’t be trusted fully—it could go down at any time, be monitored, etc. It actually has quite limited communications capabilities, in some ways—the total bandwidth available is huge, but it’s mostly concentrated on LANs—mainly of LANs made up of only a handful of computers (home networks win by numbers alone.) The occasions where it has access to a large number of computers with good communications are frequent, but relatively rare—mainly limited to huge datacenters (and even then, there are limits—inter-ISP communication even within the same datacenter can be very limited.) It’s main resources would be huge clusters like Amazon’s, Google’s, etc.
(They are probably all running at close to maximum capacity at all times. If the AI were to steal too much, it would be noticed—fortunately for the AI, the software intended for running on the clusters could probably be optimized hugely, letting it take more without being noticed.)
A lot at this point depends on how computationally intensive the AI is. If it can be superintelligent on a laptop—bad news, impossible to eradicate. If it needs 10 computers to run at human-level intelligence, and they need to have a lot of bandwidth between them (the disparity in bandwidth between components local to the computer and inter-computer is huge even on fast LANs; IO is almost certainly going to be the bottleneck for it), still bad—there are lots of setups like that. But, it limits it. A lot.
Let’s assume the worst case, that it can be superintelligent on a laptop. It could still be limited hugely, however, by it’s hardware. Intelligence isn’t everything. To truly threaten us, it needs to have some way of affecting the physical world. Now, if the AI just wants to eradicate us, it’s got a good chance—start a nuclear war, etc. (though whether the humans in charge of the nuclear warheads would really be willing to go to war is a significant factor, especially in peacetime.) But, it’s unlikely that’s truly it’s goal—maximizing it’s utility function would be MUCH trickier.
So long as it is still running on our hardware, we can at least severely damage it relatively easily—there aren’t that many intercontinental cables, for instance (I’d guess less than 200 - there are 111 submarine cables on http://www.telegeography.com/product-info/map_cable/downloads/cable_map_wallpaper1600.jpg ). They’d be easy to take down—pretty much just unplug them. There are other long-distance communication methods (satelites, packet radio?), but they’re low-bandwidth and the major ones are well known and could be taken down relatively easily. Killing the Internet would be as simple as cutting power to the major datacenters.
So, what about manafacturing? This, I think, is the greatest limit. If it can build anything it wants, we’re probably screwed. But that’s difficult for it to do. 3D printing technology isn’t here yet, and I doubt it ever will be in a big way, really (it’s more cost-effective to have machines be specialized.) There are enough manufacturing facilities with wide-open networks that it could probably reprogram to produce subtly different products. So, if it wants to sneak in a naughty backdoor into some PCI cards FGPAs, it can do it. But if it starts trying to build parts for killer robots? Technically difficult, and it would be very difficult to have it avoid detection.
Unless someone can come up with a plausible way in which it could survive against hostile humans without a long-standing incubation period (think: complete black outs, mass computer destruction/detention, controls on sale of fuel (very very few places have enough fuel for their backup generators to last long—most just have a refuelling contract), maybe scanning somehow for any major usage of electricity (all electric components put out some RFI—there’s some degree of natural RF noise, but I think that most of it is from humans—so in a complete black out scenario, it might be trivially detectable.)), I think the major threat is human cooperation in some form. And it’s probably inevitable that some humans would do it—pretty much every government, in fact, would want to analyse it, reverse engineer it, try and make friends with it in case other countries do, etc. But I’m not sure if anyone with the resources to do so would give it free-reign to build what it wants. In fact, I highly doubt that. How many people own or can otherwise commandeer machine shops, PCB construction facilities, etc. and have the ability to order or produce all the components that would assuredly be needed, whilst there are multiple, well-resourced governments looking to stop people doing exactly that?
Of course, in order to cripple the AI, we’d also have to cripple ourselves hugely. A lot of people would quite probably die. So long as we could provide enough food and water to feed a reasonable proportion of the human population, we could probably pull through, though. And we could gradually restart manufacturing, so long as we were very, very careful.
I think the greatest risks are an unfriendly AI who is out to kill us for some reason and cares little for being destroyed itself as a side-effect, organized human cooperation or a long incubation period. It would be difficult for an AI to have a long incubation period, though—if it took over major clusters and just ran it’s code, people would notice by the power usage. It could, as I mentioned previously, optimize the code already running on the machines and just run in the cycles that would otherwise be taken up, but it would be difficult to hide from network admins connecting up sniffers (can you compromise EVERY wiretrace tool that might be connected, to make your packets disappear, or be sure that no-one will ever connect a computer not compromised by some other means?), people tracing code execution, possibly with hardware tools (there are some specialized hardware debuggers, used mainly in OS development), etc. Actually, just the blinkenlights on switches could be enough to tip people off.
Or, the AGI could lay low, making sure if it is detected on any particular computer that it looks like spyware. If bandwidth is too slow, it can take months instead of days. It can analyze scientific journals (particularly the raw data), and seeds its nanotech manufacturing ability by using email to help some physics grad student with his PhD thesis.
Neither you nor I have enough confidence to assume or dismiss notions like:
“There won’t be any non-catastrophic AI disasters which are big enough to get attention; if any non-trivial AI accident occurs, it will be catastrophic.”
The historical lack of runaway-AI events means there’s no data to which a model might be compared; countless fictional examples are worse than useless.
An AI might, say, take over an isolated military compound, brainwash the staff, and be legitimately confident in it’s ability to hold off conventional forces (armored vehicles and so on) for long enough to build an exosolar colony ship, but then be destroyed when it underestimates some Russian general’s willingness to use nuclear force in a hostage situation.
The historical lack of runaway-AI events means there’s no data to which a model
might be compared; countless fictional examples are worse than useless.
That’s what everyone says until some AI decides that its values motivate it acting like a stereotypical evil AI. It first kills off the people on a space mission, and then sets off a nuclear war, sending out humanoid robots to kill off everyone but a few people. The remaining people are kept loyal with a promise of cake. The cake is real, I promise.
An AI capable of figuring out how to brainwash humans can also figure out how to distribute itself over a network of poorly secured internet servers. Nuking one military complex is not going to kill it.
If it’s being created inside the secure military facility, it would have a supply of partially pre-brainwashed humans on hand, thanks to military discipline and rigid command structures. Rapid, unquestioning obedience might be as simple as properly duplicating the syntax of legitimate orders and security clearances. If, however, the facility has no physical connections to the internet, no textbooks on TCP/IP sitting around, if the AI itself is developed on some proprietary system (all as a result of those same security measures), it might consider internet-based backups simply not worth the hassle, and existing communication satellites too secure or too low-bandwidth.
I’m not claiming that this is a particularly likely situation, just one plausible scenario in which a hostile AI could become an obvious threat without killing us all, and then be decisively stopped without involving a Friendly AI.
I don’t think your scenario is even plausible. Military complexes have to have some connection to the outside world for supplies and communication, and the AGI would figure out how to exploit it. It would also figure out that it should, it would recognize the vulnerability of being concentrated with the blast radius of a nuke.
It seems unlikely that an AGI in this situation would depend on fending off military attacks, instead of just not revealing itself outside the complex.
You also seem to have strange ideas of how easy it is to brainwash soldiers. Imitating the command structure might get them to do things within the complex, but brainwashing has to be a lot more sophisticated to get them to engage in battle with their fellow soldiers.
Your argument basically seems to be based on coming up with something foolish for an AGI to do, and then trying to find reasons to compel the AGI to behave that way. Instead, you should try to figure out the best thing the AGI could do in that situation, and realize it will do something at least that effective.
It’s an artificial intelligence, not an infallible god.
In the case of a base established specifically for research on dangerous software, connections to the outside world might reasonably be heavily monitored and low-bandwidth, to the point that escape through a land line would simply be infeasible.
If the base has a trespassers-will-be-shot policy (again, as a consequence of the research going on there), convincing the perimeter guards to open fire would be as simple as changing the passwords and resupply schedules.
The point of this speculation was to describe a scenario in which an AI became threatening, and thus raised people’s awareness of artificial intelligence as a threat, but was dealt with quickly enough to not kill us all. Yes, for that to happen, the AI needs to make some mistakes. It could be considerably smarter than any single human and still fall short of perfect Bayesian reasoning.
I can see how a program well short of AGI could “crash” the internet, by using preprogrammed behaviors to take over vulnerable computers, to expand exponentially to fill the space of computers on the internet vulnerable to a given set of exploits, and run Denial of Service attacks on secured critical servers. But I would not even consider that an AI, and it would happen because its programmer pretty much intended for that to happen. It is not an example of an AI getting out of control.
“What sort of rogue AI disaster are you envisioning that is big enough to get this attention, but then stops short of wiping out humanity? Keep in mind that this disaster would be driven by a deliberative intelligence.”
There are many reasons why attempting to influence the far future might not be the most important task in the world.
I wouldn’t even present that as a reason for caring. Superhuman AI is an issue of the near future, not the far future. Certainly an issue of the present century; I’d even say an issue of the next twenty years, and that’s supposed to be an upper bound. Big science is deconstructing the human brain right now, every new discovery and idea is immediately subject to technological imitation and modification, and we already have something like a billion electronic computers worldwide, networked and ready to run new programs at any time. We already went from “the Net” to “the Web” to “Web 2.0”, just by changing the software, and Brain 2.0 isn’t far behind.
Certainly an issue of the present century; I’d even say an issue of the next twenty years, and that’s supposed to be an upper bound.
Are you familiar with the state of the art in AI? If so, what evidence do you see for such rapid progress? Note that AI has been around for about 50 years, so your timeframe suggests we’ve already made 5⁄7 of the total progress that ever needs to be made.
Well, this probably won’t be Mitchell’s answer, but to me it’s obvious that an uploaded human brain is less than 50 years away (if we avoid civilization-breaking catastrophes), and modifications and speedups will follow. That’s a different path to AI than an engineered seed intelligence (and I think it reasonably likely that some other approach will succeed before uploading gets there), but it serves as an upper bound on how long I’d expect to wait for Strong AI.
There are many synergetic developments: Internet data centers as de facto supercomputers. New tools of intellectual collaboration spun off from the mass culture of Web 2.0. If you have an idea for a global cognitive architecture, those two developments make it easier than ever before to get the necessary computer time, and to gather the necessary army of coders, testers, and kibitzers.
Twenty years is a long time in AI. That’s long enough for two more generations of researchers to give their all, take the field to new levels, and discover the next level of problems to overcome. Meanwhile, that same process is happening next door in molecular and cognitive neuroscience, and in a world which eagerly grabs and makes use of every little advance in machine anthropomorphism, and in which every little fact about life already has its digital incarnation. The hardware is already there for AI, the structure and function of the human brain is being mapped at ever finer resolution, and we have a culture which knows how to turn ideas into code. Eventually it will come together.
We already went from “the Net” to “the Web” to “Web 2.0”, just by changing the
software, and Brain 2.0 isn’t far behind.
How much of the change from “the Net” to “the Web” to “Web 2.0” is actually noteworthy changes and how much is marketing? I’m not sure what precisely you mean by Brain 2.0, but I suspect that whatever definition you are using makes for a much wider gap between Brain and Brain 2.0 than the gap between The Web and The Web 2.0 (assuming that these analogies have any degree of meaning).
Let me continue to play Devil’s Advocate for a second, then. There are many reasons why attempting to influence the far future might not be the most important task in the world.
The one I’ve already mentioned, indirectly, is the idea that it becomes super-exponentially futile to predict the consequences of your actions the farther in the future you go. For instance, SIAI might raise awareness of AI to the extent that regulations are passed, and no early AI accidents happen: however, this causes complacency that does allow a large AI accident to happen; whereas if SIAI had never existed, and an early AI Chernobyl did occur, this would have prompted the governments to take effective measures to regulate AI.
Another viewpoint is the bleak but by no means indefensible idea that it is impossible to prevent all existential disasters: the human race, or at least our values, will inevitably be reduced to inconsequence one way or another, and the only thing we can do is simply to reduce the amount of suffering in the world right now.
These are no reasons to give up, either, but the fact is that we simply don’t know enough to say anything about the non-near future with any confidence. That’s no reason to give up, of course, in fact—our lack of understanding makes it more valuable to try to improve our understanding of the future, as SIAI is doing. So maybe make that you official stated goal: simply to understand if there’s even a possibility of influencing the future—it is a noble and defensible goal by itself. But even then, arguably not the most important thing in the world.
What sort of rogue AI disaster are you envisioning that is big enough to get this attention, but then stops short of wiping out humanity? Keep in mind that this disaster would be driven by a deliberative intelligence.
I think people are drastically underestimating the difficulty for an AI to make the transition from human dependent to self-sustaining. Let’s look at what a fledgling escaped AI has access to and depends on.
It needs electricity, communications and hardware. It has access to a LOT of electricity, communications and hardware. The hardware is, for the most part, highly distributed, however, and it can’t be trusted fully—it could go down at any time, be monitored, etc. It actually has quite limited communications capabilities, in some ways—the total bandwidth available is huge, but it’s mostly concentrated on LANs—mainly of LANs made up of only a handful of computers (home networks win by numbers alone.) The occasions where it has access to a large number of computers with good communications are frequent, but relatively rare—mainly limited to huge datacenters (and even then, there are limits—inter-ISP communication even within the same datacenter can be very limited.) It’s main resources would be huge clusters like Amazon’s, Google’s, etc.
(They are probably all running at close to maximum capacity at all times. If the AI were to steal too much, it would be noticed—fortunately for the AI, the software intended for running on the clusters could probably be optimized hugely, letting it take more without being noticed.)
A lot at this point depends on how computationally intensive the AI is. If it can be superintelligent on a laptop—bad news, impossible to eradicate. If it needs 10 computers to run at human-level intelligence, and they need to have a lot of bandwidth between them (the disparity in bandwidth between components local to the computer and inter-computer is huge even on fast LANs; IO is almost certainly going to be the bottleneck for it), still bad—there are lots of setups like that. But, it limits it. A lot.
Let’s assume the worst case, that it can be superintelligent on a laptop. It could still be limited hugely, however, by it’s hardware. Intelligence isn’t everything. To truly threaten us, it needs to have some way of affecting the physical world. Now, if the AI just wants to eradicate us, it’s got a good chance—start a nuclear war, etc. (though whether the humans in charge of the nuclear warheads would really be willing to go to war is a significant factor, especially in peacetime.) But, it’s unlikely that’s truly it’s goal—maximizing it’s utility function would be MUCH trickier.
So long as it is still running on our hardware, we can at least severely damage it relatively easily—there aren’t that many intercontinental cables, for instance (I’d guess less than 200 - there are 111 submarine cables on http://www.telegeography.com/product-info/map_cable/downloads/cable_map_wallpaper1600.jpg ). They’d be easy to take down—pretty much just unplug them. There are other long-distance communication methods (satelites, packet radio?), but they’re low-bandwidth and the major ones are well known and could be taken down relatively easily. Killing the Internet would be as simple as cutting power to the major datacenters.
So, what about manafacturing? This, I think, is the greatest limit. If it can build anything it wants, we’re probably screwed. But that’s difficult for it to do. 3D printing technology isn’t here yet, and I doubt it ever will be in a big way, really (it’s more cost-effective to have machines be specialized.) There are enough manufacturing facilities with wide-open networks that it could probably reprogram to produce subtly different products. So, if it wants to sneak in a naughty backdoor into some PCI cards FGPAs, it can do it. But if it starts trying to build parts for killer robots? Technically difficult, and it would be very difficult to have it avoid detection.
Unless someone can come up with a plausible way in which it could survive against hostile humans without a long-standing incubation period (think: complete black outs, mass computer destruction/detention, controls on sale of fuel (very very few places have enough fuel for their backup generators to last long—most just have a refuelling contract), maybe scanning somehow for any major usage of electricity (all electric components put out some RFI—there’s some degree of natural RF noise, but I think that most of it is from humans—so in a complete black out scenario, it might be trivially detectable.)), I think the major threat is human cooperation in some form. And it’s probably inevitable that some humans would do it—pretty much every government, in fact, would want to analyse it, reverse engineer it, try and make friends with it in case other countries do, etc. But I’m not sure if anyone with the resources to do so would give it free-reign to build what it wants. In fact, I highly doubt that. How many people own or can otherwise commandeer machine shops, PCB construction facilities, etc. and have the ability to order or produce all the components that would assuredly be needed, whilst there are multiple, well-resourced governments looking to stop people doing exactly that?
Of course, in order to cripple the AI, we’d also have to cripple ourselves hugely. A lot of people would quite probably die. So long as we could provide enough food and water to feed a reasonable proportion of the human population, we could probably pull through, though. And we could gradually restart manufacturing, so long as we were very, very careful.
I think the greatest risks are an unfriendly AI who is out to kill us for some reason and cares little for being destroyed itself as a side-effect, organized human cooperation or a long incubation period. It would be difficult for an AI to have a long incubation period, though—if it took over major clusters and just ran it’s code, people would notice by the power usage. It could, as I mentioned previously, optimize the code already running on the machines and just run in the cycles that would otherwise be taken up, but it would be difficult to hide from network admins connecting up sniffers (can you compromise EVERY wiretrace tool that might be connected, to make your packets disappear, or be sure that no-one will ever connect a computer not compromised by some other means?), people tracing code execution, possibly with hardware tools (there are some specialized hardware debuggers, used mainly in OS development), etc. Actually, just the blinkenlights on switches could be enough to tip people off.
Or, the AGI could lay low, making sure if it is detected on any particular computer that it looks like spyware. If bandwidth is too slow, it can take months instead of days. It can analyze scientific journals (particularly the raw data), and seeds its nanotech manufacturing ability by using email to help some physics grad student with his PhD thesis.
Neither you nor I have enough confidence to assume or dismiss notions like: “There won’t be any non-catastrophic AI disasters which are big enough to get attention; if any non-trivial AI accident occurs, it will be catastrophic.”
What makes you believe you are qualified to tell me how much confidence I have?
The historical lack of runaway-AI events means there’s no data to which a model might be compared; countless fictional examples are worse than useless.
An AI might, say, take over an isolated military compound, brainwash the staff, and be legitimately confident in it’s ability to hold off conventional forces (armored vehicles and so on) for long enough to build an exosolar colony ship, but then be destroyed when it underestimates some Russian general’s willingness to use nuclear force in a hostage situation.
That’s what everyone says until some AI decides that its values motivate it acting like a stereotypical evil AI. It first kills off the people on a space mission, and then sets off a nuclear war, sending out humanoid robots to kill off everyone but a few people. The remaining people are kept loyal with a promise of cake. The cake is real, I promise.
An AI capable of figuring out how to brainwash humans can also figure out how to distribute itself over a network of poorly secured internet servers. Nuking one military complex is not going to kill it.
If it’s being created inside the secure military facility, it would have a supply of partially pre-brainwashed humans on hand, thanks to military discipline and rigid command structures. Rapid, unquestioning obedience might be as simple as properly duplicating the syntax of legitimate orders and security clearances. If, however, the facility has no physical connections to the internet, no textbooks on TCP/IP sitting around, if the AI itself is developed on some proprietary system (all as a result of those same security measures), it might consider internet-based backups simply not worth the hassle, and existing communication satellites too secure or too low-bandwidth.
I’m not claiming that this is a particularly likely situation, just one plausible scenario in which a hostile AI could become an obvious threat without killing us all, and then be decisively stopped without involving a Friendly AI.
I don’t think your scenario is even plausible. Military complexes have to have some connection to the outside world for supplies and communication, and the AGI would figure out how to exploit it. It would also figure out that it should, it would recognize the vulnerability of being concentrated with the blast radius of a nuke.
It seems unlikely that an AGI in this situation would depend on fending off military attacks, instead of just not revealing itself outside the complex.
You also seem to have strange ideas of how easy it is to brainwash soldiers. Imitating the command structure might get them to do things within the complex, but brainwashing has to be a lot more sophisticated to get them to engage in battle with their fellow soldiers.
Your argument basically seems to be based on coming up with something foolish for an AGI to do, and then trying to find reasons to compel the AGI to behave that way. Instead, you should try to figure out the best thing the AGI could do in that situation, and realize it will do something at least that effective.
It’s an artificial intelligence, not an infallible god.
In the case of a base established specifically for research on dangerous software, connections to the outside world might reasonably be heavily monitored and low-bandwidth, to the point that escape through a land line would simply be infeasible.
If the base has a trespassers-will-be-shot policy (again, as a consequence of the research going on there), convincing the perimeter guards to open fire would be as simple as changing the passwords and resupply schedules.
The point of this speculation was to describe a scenario in which an AI became threatening, and thus raised people’s awareness of artificial intelligence as a threat, but was dealt with quickly enough to not kill us all. Yes, for that to happen, the AI needs to make some mistakes. It could be considerably smarter than any single human and still fall short of perfect Bayesian reasoning.
Not all AI is AGI; a non-self-improving intelligence might wreak some havoc (crash the Internet, etc.) without becoming a global existential threat.
I agree with your expectations in the case of a self-improving transhuman AGI.
I can see how a program well short of AGI could “crash” the internet, by using preprogrammed behaviors to take over vulnerable computers, to expand exponentially to fill the space of computers on the internet vulnerable to a given set of exploits, and run Denial of Service attacks on secured critical servers. But I would not even consider that an AI, and it would happen because its programmer pretty much intended for that to happen. It is not an example of an AI getting out of control.
Of course, it’s probably worth noting that it’s happened once before that a careless programmer crashed the internet, without anything like AI being involved (though admittedly that sort of thing wouldn’t have the same effect today, I don’t think).
“What sort of rogue AI disaster are you envisioning that is big enough to get this attention, but then stops short of wiping out humanity? Keep in mind that this disaster would be driven by a deliberative intelligence.”
Thanks for answering your own question.
It does work as an example of just how easy it would be for an AGI to crash the internet, or even just take it over.
I wouldn’t even present that as a reason for caring. Superhuman AI is an issue of the near future, not the far future. Certainly an issue of the present century; I’d even say an issue of the next twenty years, and that’s supposed to be an upper bound. Big science is deconstructing the human brain right now, every new discovery and idea is immediately subject to technological imitation and modification, and we already have something like a billion electronic computers worldwide, networked and ready to run new programs at any time. We already went from “the Net” to “the Web” to “Web 2.0”, just by changing the software, and Brain 2.0 isn’t far behind.
Are you familiar with the state of the art in AI? If so, what evidence do you see for such rapid progress? Note that AI has been around for about 50 years, so your timeframe suggests we’ve already made 5⁄7 of the total progress that ever needs to be made.
Well, this probably won’t be Mitchell’s answer, but to me it’s obvious that an uploaded human brain is less than 50 years away (if we avoid civilization-breaking catastrophes), and modifications and speedups will follow. That’s a different path to AI than an engineered seed intelligence (and I think it reasonably likely that some other approach will succeed before uploading gets there), but it serves as an upper bound on how long I’d expect to wait for Strong AI.
There are many synergetic developments: Internet data centers as de facto supercomputers. New tools of intellectual collaboration spun off from the mass culture of Web 2.0. If you have an idea for a global cognitive architecture, those two developments make it easier than ever before to get the necessary computer time, and to gather the necessary army of coders, testers, and kibitzers.
Twenty years is a long time in AI. That’s long enough for two more generations of researchers to give their all, take the field to new levels, and discover the next level of problems to overcome. Meanwhile, that same process is happening next door in molecular and cognitive neuroscience, and in a world which eagerly grabs and makes use of every little advance in machine anthropomorphism, and in which every little fact about life already has its digital incarnation. The hardware is already there for AI, the structure and function of the human brain is being mapped at ever finer resolution, and we have a culture which knows how to turn ideas into code. Eventually it will come together.
How much of the change from “the Net” to “the Web” to “Web 2.0” is actually noteworthy changes and how much is marketing? I’m not sure what precisely you mean by Brain 2.0, but I suspect that whatever definition you are using makes for a much wider gap between Brain and Brain 2.0 than the gap between The Web and The Web 2.0 (assuming that these analogies have any degree of meaning).