It’s not a brain in a box in a basement—and it’s not one grand architectural insight—but I think the NSA shows how a secretive organisation can get ahead and stay ahead—if it is big and well funded enough. Otherwise, public collaboration tends to get ahead and stay ahead, along similar lines to those Robin mentions.
Google, Apple, Facebook etc. are less-extreme versions of this kind of thing, in that they keep trade secrets which give them advantages—and don’t contribute all of these back to the global ecosystem. As a result they gradually stack up know-how that others lack. If they can get enough of that, then they will gradually pull ahead—if they are left to their own devices.
Whether a company will eventually pull ahead has quite a bit to do with anti-trust legislation—as I discuss in One Big Organism.
The issue of whether one government will eventually pull ahead is a bit different. There’s no government-level anti-trust legislation. However, expansionist governments are globally frowned upon.
I don’t think there are too many other significant players besides companies and governments.
The “silver bullet” idea doesn’t seem to be worth too much. As Eray says: “Every algorithm encodes a bit of intelligence”. We know that advanced intelligence is necessarily highly complex: you can’t predict a complex world without being that complex yourself. Of course, human intelligence might be relatively simple—in which case it might only take a few leaps to get to it. The history of machine intelligence fairly strongly suggests a long, gradual slog to me—but it is at least possible to argue that people have been doing it all wrong so far.