There are many reasons why attempting to influence the far future might not be the most important task in the world.
I wouldn’t even present that as a reason for caring. Superhuman AI is an issue of the near future, not the far future. Certainly an issue of the present century; I’d even say an issue of the next twenty years, and that’s supposed to be an upper bound. Big science is deconstructing the human brain right now; every new discovery and idea is immediately subject to technological imitation and modification; and we already have something like a billion electronic computers worldwide, networked and ready to run new programs at any time. We already went from “the Net” to “the Web” to “Web 2.0”, just by changing the software, and Brain 2.0 isn’t far behind.
Certainly an issue of the present century; I’d even say an issue of the next twenty years, and that’s supposed to be an upper bound.
Are you familiar with the state of the art in AI? If so, what evidence do you see for such rapid progress? Note that AI has been around for about 50 years, so your timeframe suggests we’ve already made 5⁄7 of the total progress that ever needs to be made.
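To spell out the arithmetic behind that 5⁄7 figure: it implicitly assumes progress toward Strong AI accumulates roughly linearly with calendar time, which is an assumption of the objection rather than anything established above. Under that assumption,

$$\frac{\text{years elapsed}}{\text{total years to Strong AI}} = \frac{50}{50 + 20} = \frac{5}{7} \approx 71\%.$$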
Well, this probably won’t be Mitchell’s answer, but to me it’s obvious that an uploaded human brain is less than 50 years away (if we avoid civilization-breaking catastrophes), and modifications and speedups will follow. That’s a different path to AI than an engineered seed intelligence (and I think it reasonably likely that some other approach will succeed before uploading gets there), but it serves as an upper bound on how long I’d expect to wait for Strong AI.
There are many synergetic developments: Internet data centers acting as de facto supercomputers, and new tools of intellectual collaboration spun off from the mass culture of Web 2.0. If you have an idea for a global cognitive architecture, those two developments make it easier than ever before to get the necessary computer time, and to gather the necessary army of coders, testers, and kibitzers.
Twenty years is a long time in AI. That’s long enough for two more generations of researchers to give their all, take the field to new levels, and discover the next level of problems to overcome. Meanwhile, that same process is happening next door in molecular and cognitive neuroscience, in a world which eagerly grabs and makes use of every little advance in machine anthropomorphism, and in which every little fact about life already has its digital incarnation. The hardware for AI is already there, the structure and function of the human brain are being mapped at ever finer resolution, and we have a culture which knows how to turn ideas into code. Eventually it will come together.
We already went from “the Net” to “the Web” to “Web 2.0”, just by changing the software, and Brain 2.0 isn’t far behind.
How much of the change from “the Net” to “the Web” to “Web 2.0” is actually noteworthy change, and how much is marketing? I’m not sure what precisely you mean by Brain 2.0, but I suspect that whatever definition you are using makes for a much wider gap between Brain and Brain 2.0 than the gap between the Web and Web 2.0 (assuming these analogies have any meaning at all).