“Hundreds of years off” is a common estimate people give for technologies that seem really complicated and hard to make with our present knowledge. I’ve always found this fairly ridiculous; it’s pretty much unprecedented in human history. When have we ever conceived of a specific technology, which we had any understanding of the workings behind, and taken hundreds of years to make it? The only cases I’m aware of where any sort of technology has been in development for that long are things like heavier-than-air flight, where we spent hundreds of years not applying the scientific method to the problem and just threw up solutions willy-nilly.
Arguably “immortality” has been on the back-burner for a while.
As far as I know it’s only pretty recently that we’ve actually started applying the scientific method to the whole people-dying problem, rather than just tossing up solutions like “invent the Elixir of Life.”
No specific tech for it.
In June 1768 Erasmus Darwin told Josiah Wedgwood that Edgeworth had ‘nearly completed a Waggon drawn by Fire’. In modern parlance, a motorcar. It didn’t work, and Edgeworth was a bit of a dreamer; I don’t know how far he actually got. Nevertheless, people saw the technical possibility of motorised road transport that early. Others were also attempting the technology. The motorcar is a good example of a slow-burning technological development, taking perhaps 120 years from being obviously possible to being on sale.
This does meet the specifications I had in mind, save that 120 years is less than “hundreds.” It’s the slowest example I’m aware of, though, and it was conceived before the second industrial revolution, quite early into the first.
The last 200 years have seen the development of most of the technology the human race has ever created. The first industrial research lab was founded less than 140 years ago. 300 years ago, we were only barely engaged in the process of applying dedicated empirical research to making new stuff. In terms of predicting future technological development, we really don’t have hundreds of years of meaningful data to extrapolate from.
When have we ever conceived of a specific technology, which we had any understanding of the workings behind, and taken hundreds of years to make it?
The first that came to my mind was the photovoltaic effect, discovered by Alexandre Edmond Becquerel in 1839. Even today it takes massive subsidies to make photovoltaics competitive. And we are not even close to the energy efficiency of photosynthesis.
I bet there are a lot of other examples.
Citation? Wikipedia gives Photosynthetic efficiency at under 11% and Solar cell efficiency at up to 40% for research-grade photocells; one company claims 24% efficiency for their commercial cells.
Certainly we’re not close to the energy/cost efficiency of photosynthesis.
There’s a big difference, though, between “this technology has not been realized” and “this technology is not cost-competitive with other technologies for similar purposes.”
I’d also mention Fermat’s Last Theorem as a very specific example, since it took over 300 years to prove. It’s rare, but it’s certainly been known to happen.
I wouldn’t call Fermat’s Last Theorem a technology though.
When have we ever conceived of a specific technology, which we had any understanding of the workings behind, and taken hundreds of years to make it?
Many times, unless you weasel pretty strongly with “any understanding of the workings behind”. Science fiction has been around a long time. Mary Shelley, writing in 1818 and aware of Galvani’s experiments with electricity and frogs, conceived of applying this to reanimation of the dead. Jules Verne, writing in 1865, conceived of traveling to the moon with a space cannon.
I think you’d have to stretch the meaning of scientific understanding pretty far to claim that the 19th-century writers speculating about reanimating the dead with electricity understood what they were talking about.
Besides, if I’m remembering Frankenstein right, there’s no clear method of reanimation given but it’s at least partly occult: Shelley name-drops several famous alchemists. She might have been inspired at some level by Galvani’s experiments, but the procedure involving a dramatic lightning storm and “give my creation life!” is a cinematic invention—and arguably has a certain occult flavor in its own right, given all the associations with divine fire that lightning’s picked up in culture.
What about: “you’d have to stretch the meaning of scientific understanding pretty far to claim that early 21st-century people speculating about reanimating the dead with uploading understood what they were talking about.”
I could easily see the people who figure out whole brain emulation saying the same of us.
Nah. I can see the scanning procedure needed for whole brain emulation turning out to require some unspecified technology that’s way too difficult for 21st-century science, or Moore’s Law running out of steam before we reach the densities needed to do the actual emulation, but either one would be a Verne-type error; I can’t see a category error on the order of electrical impulse ⇒ true resurrection happening unless we’re very badly wrong about some very fundamental features of how the brain works.
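To make the “densities needed” clause concrete, here is a minimal back-of-envelope sketch. It is my illustration, not anything from the thread, and every constant in it is an assumed order-of-magnitude figure rather than an established fact:

```python
# Back-of-envelope: compute plausibly needed for a spike-level whole
# brain emulation. Every constant is an assumed order-of-magnitude
# figure, not a measured fact.

NEURONS = 1e11                 # assumed total neurons in a human brain
SYNAPSES_PER_NEURON = 1e4      # assumed average synapses per neuron
TIMESTEPS_PER_SECOND = 1e3     # assumed 1 ms simulation resolution
FLOPS_PER_SYNAPSE_STEP = 10    # assumed cost of one synapse update

required = (NEURONS * SYNAPSES_PER_NEURON
            * TIMESTEPS_PER_SECOND * FLOPS_PER_SYNAPSE_STEP)
print(f"~{required:.0e} FLOPS sustained")  # ~1e19 under these assumptions
```

Under these assumptions you land around 10^19 FLOPS for a spike-level emulation; if a finer grain (say, molecular detail) turns out to be necessary, that multiplies the figure by many orders of magnitude, which is exactly where the “Moore’s Law runs out first” worry bites.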
I agree with Nornagest’s interpretation regarding whether people in the 19th century had any idea what they were talking about with respect to reanimating the dead. As for space cannons, those turned out to be unworkable and we never made them at all. The gap from “maybe we could get to the moon with some sort of space-rocket” to actually making spaceships was much shorter.
There are plenty of cases where speculative technologies have turned out to be unworkable and were never actually put into use, but that’s an error of a different kind than speculating that a specific technology is hundreds of years off.
Still, there were designs for space rockets as early as 1881 (Nikolai Kibalchich), and maybe earlier. Tsiolkovsky derived his formula for estimating the required amount of fuel in 1897, 60 years before the first artificial satellite.
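For reference, that formula is the familiar rocket equation (stated here for context, not quoted from the thread):

$$\Delta v = v_e \ln\frac{m_0}{m_f}$$

where $\Delta v$ is the velocity change the rocket can achieve, $v_e$ is the effective exhaust velocity, $m_0$ is the initial mass including propellant, and $m_f$ is the final dry mass. Rearranged as $m_0/m_f = e^{\Delta v / v_e}$, it shows why the required fuel grows exponentially with the target velocity change, which is why the formula mattered so much for judging whether spaceflight was feasible at all.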
Babbage produced the first designs for his Difference Engine in 1822. His designs dated 1847–1849 were eventually implemented and worked, a century and a half later. The first Turing-complete computer actually built was apparently ENIAC… in 1946.
So, for the primary wishes of humanity, a century from working blueprint to implementation (of a more efficient design) is not unprecedented. Of course, we don’t know whether our current cryonics is theoretically enough for preservation....
For lack of ways to calibrate, “hundreds of years” cannot be taken as a precise prediction, of course. On the other hand, “we have a general idea” cannot be taken as a prediction either. After all, a fusion power plant could turn out to be simpler than reviving cryonics patients.
When have we ever conceived of a specific technology, which we had any understanding of the workings behind, and taken hundreds of years to make it?
http://www.alcor.org/cases.html: 1967, first human cryopreservation. That was 45 years ago. We’ve already waited nearly half a century, and I haven’t seen any research that suggests that revivification is likely to occur in the next decade. Calling it a century from the first human cryopreservation, a mere doubling of the time we’ve already waited, does not seem at all like an unreasonable assertion.
Now that I think about it, computing machines might be another good example. To the Wikipedia-mobile, Batman!
Abacuses (or abaci, or however you spell them) existed in ancient Babylon, but were obviously quite primitive. Mechanical calculators were developed in the 17th century AD. Charles Babbage designed his Difference Engine in 1822, but the technology to build one did not yet exist (or if it did, it was more expensive than a battleship, which amounts to the same thing). Convergent improvements in computing technology and mechanical engineering led to working mechanical difference engines a few decades later. Electronic computers began showing up in the 20th century; and today, we have the Internet for people to twitter on.
I don’t think, though, that anywhere along the line anyone said “with sufficiently advanced engineering we might create a network of electrical difference engines capable of communicating complex packages of information across the world, but this would take hundreds of years to develop.”
Some technologies are the results of long chains of developments, but I’m not aware of any cases of people conceiving of specific technologies hundreds of years down the chain based on any meaningful understanding of the principles at work.
I would assume that’s because, when you’re working hundreds of years down the chain, chances are very high that you’ll run into some unexpected obstacle, and the final result will look sufficiently different that we dismiss the previous idea as having missed the true target (for instance, 1800s “space cannons” vs. modern rocket ships).
That said, Tesla and Fermat both strike me as potential examples. It’s unclear whether they were making assertions without evidence to back themselves up, or whether they really had decent insight into what we’d be doing centuries down the road. Tesla is largely considered crazy, but Fermat fascinated people long enough that they spent a few hundred years proving his last theorem (hey, an example of a 300-year waiting period!)
You might have a point about information networks; I don’t know enough about the history of computing to know whether Babbage or Turing or someone of that caliber ever proposed them. But there was definitely a need for difference engines and automated calculators, for use in financial and navigational calculations. This is why the technology developed so quickly (relatively speaking) once funding became available and mechanical engineering improved to the point where difference engines could actually be built.
Certainly, I would expect that if I follow the chain of thoughts that led to a technology back through time, the understanding of the underlying principles will grow less and less meaningful.
I don’t have a principled way of establishing where to draw the line along that continuum, and in the absence of such a principled threshold I am not certain how to distinguish this from a “No True Scotsman” argument.
For anyone else to distinguish it from a No True Scotsman argument would probably require me to have been much more precise than I’m in the habit of being in regular conversation, but I have a pretty solid idea of what I meant when I made my original claim, and I’d be able to tell if a particular example met my specifications. Of course, if there were anything really obvious that qualified, I’d be likely to have thought of it before and not made the claim in the first place.
Anyway, it’s possible that there are specific technologies that were conceived of hundreds of years in advance of the point where it was possible to implement them according to the specifications I have in mind, and I’m simply not aware of them; but if they were conceived of before the industrial revolution, I don’t think we can take them as a very meaningful precedent to generalize from now.
What about industrial steam power? Hero of Alexandria developed a steam-powered toy back in the first century AD, but it took a millennium and a half before a true steam engine was developed and harnessed for useful work.
If they had come up with the idea, say “we could use this steam power effect to create machines which do work without using the power of living creatures,” but then failed to work out how to do that, I’d say that would count, but as far as I know they did not. Not noticing avenues for technological development is not the same as conceiving of specific technologies but taking a long time to successfully implement them.
Didn’t Hero of Alexandria attach his steam engine to some kind of door-opening mechanism? Granted, he probably wasn’t too concerned with “doing work without using the power of living creatures”, what with all the cheap slaves hanging around, but still, at least he knew it could be done...