I’m working on a top-level post about AI (you know what they say, write what you don’t know), and I’m wondering about the following question:
Can we think of computer technologies which were only developed at a time when the processing power they needed was insignificant?
That is, many technologies are really slow when first developed, until a few cycles of Moore’s Law make them able to run faster than humans can input new requests. But is there anything really good that was only thought of at a time when processor speed was well above that threshold, or anything where the final engineering hurdle was something far removed from computing power?
To clarify the question a bit, I would consider dividing software technologies into three categories:
1. Technologies developed while the necessary computing resources were still unavailable or too expensive, which flourished later when the resources became cheap enough. For example, Alan Turing famously devised a chess program which he could only run using paper and pencil.
2. Technologies that appeared very soon after the necessary computing resources became available and cheap enough, suggesting that the basic idea was fairly straightforward after all, and it was only necessary to give smart people some palpable incentive to think about it. Examples such as the first browsers and spreadsheets would be in this category.
3. Technologies for which the necessary computing resources had been cheaply available for a long time before someone finally came up with them, suggesting an extraordinary intellectual breakthrough. I cannot think of any such examples, and it doesn’t seem like anyone else in this thread can either.
This reinforces my cynical view of software technologies in general, namely that their entire progress in the last few decades has been embarrassingly limited considering the amount of intellectual power poured into them.
Here’s an interesting related thought experiment that reinforces my cynicism further. Suppose that some miraculous breakthrough in 1970 enabled the production of computers as cheap, powerful, compact, and easily networked as the ones we have today. What do we have today in terms of software technology that the inhabitants of this hypothetical world wouldn’t have by 1980?
Chess had steady algorithmic improvements on the same order as the gains from hardware: Deep Fritz and Rybka both achieved better performance per FLOP than Deep Blue, etc. More generally, I think that looking at quantitative metrics (as opposed to whole new capabilities) like game performance, face recognition, image processing, etc., will often give you independent hardware and software components of growth.
Well, I’d guess our video game library is larger...
An example might be binary search, which is pretty trivial conceptually but which took many years for a correct, bug-free algorithm to be published.
e.g. see http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html, where a bug in a popular published version of the search remained undetected for decades.
This kind of thing is particularly worrying in the context of AI, which may well need to be exactly right the first time!
That an incorrect algorithm persisted for decades is rather different from the claim that no correct algorithm was published. This bug only applies to low-level languages that treat computer words as numbers and pray that there is no overflow.
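For concreteness, here’s a minimal sketch of the failure mode and of the usual fix. This is my own illustration in C, assuming plain signed int indices, which is exactly the setting where the overflow bites:

    #include <stdio.h>

    /* Binary search over a sorted int array: returns the index of key,
     * or -1 if it is absent.  The classic bug is computing the midpoint
     * as (low + high) / 2: with fixed-width signed arithmetic, low + high
     * can overflow on a sufficiently large array and yield a negative
     * index.  Writing low + (high - low) / 2 avoids the overflow. */
    static int binary_search(const int *a, int n, int key)
    {
        int low = 0, high = n - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;   /* not (low + high) / 2 */
            if (a[mid] < key)
                low = mid + 1;
            else if (a[mid] > key)
                high = mid - 1;
            else
                return mid;
        }
        return -1;
    }

    int main(void)
    {
        int a[] = { 1, 3, 5, 7, 11 };
        printf("%d\n", binary_search(a, 5, 7));   /* prints 3 */
        printf("%d\n", binary_search(a, 5, 4));   /* prints -1 */
        return 0;
    }

In a language with arbitrary-precision integers the midpoint sum simply can’t overflow, which is the point above; the surrounding loop logic can, of course, still be subtly wrong.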
According to one of the comments on the link I posted:
“in section 6.2.1 of his ‘Sorting and Searching,’ Knuth points out that while the first binary search was published in 1946, the first published binary search without bugs did not appear until 1962” (Programming Pearls, 2nd edition, “Writing Correct Programs”, section 4.1, p. 34).
Besides, it’s not like higher-level languages are immune to subtle bugs, though in general they’re less susceptible to them.
edit: Also, if you’re working on something as crucial as FAI, can you trust the implementation of any existing higher-level language to be completely bug-free? It seems to me you’d have to write and formally verify your own language implementation, unless you could somehow come up with a design robust enough that even serious bugs in the underlying implementation wouldn’t break it.
That’s terrifying in the context of get-it-right-the-first-time AI.
I hope there will be some discussion of why people think it’s possible to get around that sort of unknown unknown, or, at best, an unknown that barely niggles at the edge of consciousness.
Would the first spreadsheet (VisiCalc) or the first browser (Mosaic) fit your bill? As far as I know, they didn’t face difficult hardware bottlenecks when they appeared.
VisiCalc is a great example, but Mosaic was hardly the first browser. Nelson and Engelbart certainly had hypertext browsers before. I’m not entirely sure that they had graphics, but I think so. Have you seen Engelbart’s 1968 demo? (ETA: I’m not sure that Engelbart’s demo counts, but even if it doesn’t, he clearly cared about both hypertext and graphics, so he probably combined the two within the following decade or two.)
Speaking of Engelbart, how about the mouse as an example? Did that take a nontrivial amount of the computing power when first demoed, or not?
I’d guess that the computing power devoted to the mouse in a graphical environment is always small compared to that devoted to the screen, and thus it should be a good example. (If one used a mouse to manipulate an array of characters, which sounds like a good idea for a spreadsheet, the mouse might be relatively expensive, but that’s not what Engelbart did in ’68.)
The mouse and the browser (and magfrump’s similar example of AIM) are probably examples of a phenomenon that generalizes your original question, where the bottleneck to deployment was something other than computer power.
What about BitTorrent, or P2P file transfers in general? Anonymous peer-to-peer seems not to have emerged until 2001, and peer-to-peer in general not until November 1998. That’s a bit too far back for me to have any idea what computers were like, but peer-to-peer file transfer is an amazing software development which could have been implemented on any computers that could transfer files, i.e. at least as early as 1977.
How significant of a technology are you thinking of?
For example, I would guess that most video game emulators came about when computers were much faster than the games they were emulating; if fast computers weren’t cheaper than the emulated consoles, emulators wouldn’t be very popular. Further, I can guarantee you that computers easily have more power than video game consoles, so any emulator of the latest generation of consoles was written when computers had far more power than necessary.
So: Does a new emulator count? It’s a specific technology that is developed in a fast environment. Does an old emulator count? Emulators in general aren’t new technology at all. Does an instant messenger count? Predecessors existed in times when text content was a big deal, but I would be mildly surprised to hear that the original AIM (or whatever the first instant messenger program was) was created at a time when text-over-the-internet was a big stress on computers.
My experience contradicts that. The video game emulators I know of were written at about the time the fastest desktop CPUs were just fast enough to run some of the games near real time, with plenty of others still too slow to be playable. Emulated games may not become popular until you can run them reliably with plenty of leeway, but people start trying as soon as it’s barely feasible. Though it is also true that the barely feasible point comes when the computers are already a good deal faster than the consoles, since you lose an order of magnitude or so to emulation overhead.
Relatedly, I wish I could remember what I read recently about comparing the performance of a 2000s algorithm for a particular problem (factoring?) on 1980s hardware to a 1980s algorithm on 2000s hardware. It might’ve been on Accelerating Future.
There are lots of examples of improved algorithms, such as your example of factoring, the similarly timed example of chess algorithms, and the much earlier merge sort and simplex algorithms. But in none of these cases did the algorithm completely solve the problem; there are always harder instances that we care about. This is particularly clear with factoring, which is adversarial. (You might count human chess as a solved problem, though.)
Many “killer apps” would fit your criteria, I think. VisiCalc, AutoCAD, Mosaic.
Interesting question. I can’t think of any examples. Do you want to argue that hardware is the limiting factor, the bottleneck of progress in IT? Thinking about your question for five minutes has made this conclusion much more plausible to me.