Downvoted for groundless assumptions and for failing to google the basics. There are more, not fewer physical documents produced, because it is easier to produce them, the number of bank tellers has actually increased, etc.
Third, there is a misconception that highly theoretical tasks done by skilled experts will be among the last to go. But due to their theoretical nature such tasks are fairly easy to represent virtually.
Actually I think he may be right, since this is basically a consequence of Moravec’s paradox.
“The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived. … As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.”[2]
But why might this be so?
“Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.”[4]
I agree with the theory, but not with the practical conclusions. Yes, we invented automatic provers before automatic gardeners, because there were harder problems involved. But at this point they both seem within reach—for example, take a look at the progress in self-driving cars just over the last 6 years. Driving is very similar to other natural problems. The cooking example is just silly in the first place. Well, maybe creative cooking will be harder, but cooking by emulation should be really easy for machines.
… not to mention that a lot of food is being produced by machines, or at least in a heavily automated environment. I’ve visited food factories, and I don’t remember seeing many cooks. Even home cooking has been automated to a certain degree.
All these labor saving devices, even factories, are integrated by humans. While the productivity per worker skyrockets (fewer workers needed per X units of output), there is no factory that runs without people who do generally very easy tasks that are very difficult to automate.
The summer after high school I worked in a spray bottle factory. Yes, we made the spray nozzles like the ones that come on a bottle of Windex. My job was to keep the bins full of the little parts that fed into the machine that assembled them. I also helped unload the boxes of parts from the carts and stacked them near the bin where they needed to go. Someone else somewhere had a job handling the “raw” plastic for the machine that melted and molded the parts I needed. Someone else put the different parts in different boxes, sorted for the cart driver.
These tasks were of course absurdly easy for any human to do with about five minutes of training. Somehow automating all of this together into a single factory chain would have presented enormous challenges, though. Because the labor is so cheap, I could easily imagine that factory running the same way for the next fifty years.
I suspect the driving forces behind automating that sort of thing will ultimately be, not labor costs, but the relative slowness, messiness, and unreliability of humans.
That said, I also expect that the technology that can do those sorts of jobs more quickly, cleanly, and reliably than humans will be developed for different applications where minimally trained human labor just isn’t practical (say, automated underwater mining) and then applied to other industries once it’s gotten pretty good.
Maybe, but it’s still easily 50 years away. People are “messy” but they are so cheap and you need so few of them—there is no capital tied up in them at all; it’s just a month-to-month expense. Even if you lease equipment you are still paying for the cost of the capital tied up in it. The diminishing returns on automating such a small cost will ensure its continuity for quite some time, I think.
Predictions in years are less and less meaningful to me as I go along. I’d give .6 confidence that we’re no more than 5 tech-generations away from being able to build a fully automated mining facility (just to pick a concrete example), and no more than 3 generations from there to being able to build one in a way that would be cost-effective (given current-day labor costs and raw materials prices) for at least some application… perhaps underwater mining of rare earths.
I also expect that along the way, selected raw materials prices will increase enough (in inflation-adjusted currency) that using current-day prices is absurdly conservative. Then again, I also expect that along the way we’ll see several failures of such equipment that cause as much as half a commute-year (current-day) of environmental damage, which might set the whole project back by decades. So, who knows?
A commute-year, incidentally, is a measure of risk (e.g., death and property/environmental damage) equal to that caused by people commuting to and from their jobs in a given year. My guess is that half a commute-year is typically more than enough to cause the majority of Americans to insist that a new project is way too dangerous to even consider. (Of course, that doesn’t apply to the project of actually commuting to work.)
1 Maybe I should clarify: Are the tasks previously done by bank tellers becoming automated? Yes. The fact that the number of bank tellers has increased does not invalidate my statement. If there were no internet banking or ATMs, the increase would be much larger, right? So it’s trivial to see that the number of bank tellers can increase at the same time as bank teller jobs are lost to automated systems.
2 I’ll give you an extreme one. I am a few steps away from earning a degree in theoretical physics, specializing in quantum information theory. Theoretical quantum information theory is nothing but symbol manipulation in a framework built on existing theorems of linear algebra. With enough resources pretty much all of the research could be done by computers alone. Algorithms could in principle put mathematical statements together, other algorithms could test the meaningfulness of the output, and so on… but that is a discussion interesting enough to have its own thread. I just mean that theoretical work is not immune to automation.
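As a minimal sketch of that generate-and-test idea, here is a toy in plain Python with no external libraries: one routine enumerates small symbolic expressions, another filters candidate identities by testing them numerically. The expression grammar, the helper names, and the tolerance are illustrative assumptions, not a real research pipeline; a genuine system would need proofs, not numerical evidence.

```python
# Toy generate-and-test: enumerate small symbolic expressions, then keep pairs
# that agree at random sample points as candidate "theorems" (identities).
# The grammar, operators, and numerical tolerance are illustrative assumptions.
import itertools
import random

OPS = [("+", lambda a, b: a + b), ("*", lambda a, b: a * b), ("-", lambda a, b: a - b)]
VARS = ["x", "y"]

def exprs(depth):
    """All distinct expression strings (with evaluator functions) up to `depth`."""
    level = {v: (lambda env, v=v: env[v]) for v in VARS}
    for _ in range(depth):
        new = dict(level)
        for name, f in OPS:
            for (ta, fa), (tb, fb) in itertools.product(level.items(), repeat=2):
                new[f"({ta} {name} {tb})"] = lambda env, f=f, fa=fa, fb=fb: f(fa(env), fb(env))
        level = new
    return list(level.items())

def plausible_identity(fa, fb, trials=50):
    """Cheap 'meaningfulness' filter: do the two expressions agree at random points?"""
    for _ in range(trials):
        env = {v: random.uniform(-10, 10) for v in VARS}
        if abs(fa(env) - fb(env)) > 1e-6:
            return False
    return True

if __name__ == "__main__":
    found = [f"{ta} == {tb}"
             for (ta, fa), (tb, fb) in itertools.combinations(exprs(2), 2)
             if plausible_identity(fa, fb)]
    print(f"{len(found)} candidate identities, for example:")
    for statement in found[:10]:
        print(" ", statement)
```

Almost everything it prints is a trivial restatement of commutativity or distributivity, which is exactly why the later question of which theorems are interesting matters.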
Organize all the known mathematics and physics of 1915 in a computer running the right algorithms, then ask it: ‘what is gravity?’ Would it output the general theory of relativity? I think so.
You have fallen victim to the hindsight bias. The parameter space of the ways of reconciling Special Relativity with Newtonian gravity is quite large, even assuming that this goal would have occurred to anyone but Einstein at that time (well, Hilbert did the math independently, after communicating with Einstein for some time). Rejecting the implicit and unquestionable idea of a fixed background spacetime was an extreme leap of genius. The “right algorithms” would probably have to be the AGI-level ones.
I am a few steps away from earning a degree in theoretical physics, specializing in quantum information theory. Theoretical quantum information theory is nothing but symbol manipulation in a framework built on existing theorems of linear algebra.
“Theoretical quantum information theory” is math, not a natural science, and math is potentially easier to automate. Still, feel free to research the advances in automated theorem proving, and, more importantly, in automated theorem stating, a much harder task. How would a computer know what theorems are interesting?
1 Hindsight bias? Quite a diagnosis there. I never specified the level of those algorithms.
2 Which part of theoretical physics is not math? Experiments confirm or reject theoretical conclusions and point theoretical work in different directions. But that theoretical work is, in the end, symbol processing—something that computers are pretty good at. There could be a variety of ways for a computer to decide if a theorem is interesting, just as for a human. Scope, generality and computability of the theorems could be factors. Input Newtonian mechanics and the mathematics of 1850 and output Hamiltonian mechanics, just based on the generality of that framework.
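As a crude sketch of how “scope and generality” might be turned into a ranking over candidate statements like the ones above: the string representation, the helper name, and the weights below are arbitrary illustrative assumptions, only meant to show that such a heuristic can at least be written down.

```python
# Crude "interestingness" heuristic: reward statements that involve more distinct
# variables (generality) and more structure (scope), lightly penalized by length.
# The weights are arbitrary illustrative assumptions.
import re

def interest_score(statement: str) -> float:
    variables = set(re.findall(r"[a-z]", statement))   # distinct variables: generality
    operators = len(re.findall(r"[+*-]", statement))   # amount of structure: scope
    return 2.0 * len(variables) + 1.0 * operators - 0.1 * len(statement)

candidates = [
    "(x + y) == (y + x)",
    "(x + x) == (x + x)",              # true but vacuous
    "((x * y) + x) == (x * (y + 1))",
]
for s in sorted(candidates, key=interest_score, reverse=True):
    print(f"{interest_score(s):5.1f}  {s}")
```

Whether any hand-written score like this captures the part that counts the most is exactly what the rest of the thread disputes.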
1 Hindsight bias? Quite a diagnosis there. I never specified the level of those algorithms.
I have, in my reply: probably AGI-level, i.e. too far into the haze of the future to be considered seriously.
2 Which part of theoretical physics is not math?
Probably the 1% that counts the most (I agree, 99% of theoretical physics is math, as I found out the hard way). It’s finding the models that make the old experiments make sense and that make new interesting predictions that turn out to be right that is the mysterious part. How would you program a computer that can decide, on its own, that adding the gauge freedom to the Maxwell equations would actually make them simpler and lay foundations for nearly all of modern high-energy physics? That the Landau pole is not an insurmountable obstacle, despite all the infinities? That 2D models like graphene are worth studying? That can resolve the current mysteries, like the High Tc superconductivity, the still mysterious foundations of QM, the cosmological mysteries of dark matter and dark energy, the many problems in chemistry, biology, society etc.? Sure, it is all “symbol manipulation”, but so is everything humans do, if you agree that we are (somewhat complicated) Turing machines and Markov chains. If you assert that it is possible to do all this with anything below an AGI-level complexity, I hope that you are right, but I am extremely skeptical.
Mathematicians would probably call much less of what physicists do “math” than the physicists. Let me focus on statistical mechanics. A century ago, physicists made assertions that mathematicians could understand, like the central limit theorem and ergodicity. There was debate about whether these were mathematical or physical truths, but it is fine to take them as assumptions and do mathematics. This happens today with spin glasses. But physicists also talk about universality. I suppose that’s a precise claim, though rather strong, but the typical prototype of a universality class is a conformal field theory and mathematicians can’t make heads or tails of that. The calculations about CFT may look like math, but the rules aren’t formal.
PS—bank tellers per capita fell from 1998 to 2008, though not much.
I agree that the conceptual (not-simply-symbol-processing) part of theoretical physics is the tricky part to automate. Even if I am willing to accept that that last 1% will remain a monopoly of human beings, then that’s it: theoretical physics will asymptotically reduce to that 1% and stay there until AGI arrives. It’s not bound to change overnight, but the change will be the product of many small changes, where computers start to aid us not just by doing the calculations and simulations but by taking on more advanced tasks, where we can input sets of equations from two different sub-fields and let the computers use evolutionary algorithms to try different combinations, operate on them, and find links. The process could end with a joint theory in a common mathematical framework that succeeds in deriving the phenomena of both sub-fields.
EDIT: Have to add that it feels a bit awkward to argue against the future necessity of my “profession”.
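A minimal sketch of the “let computers try different combinations and find links” idea above: a (1+1)-style evolutionary search over expression trees that tries to recover a hidden relation between two measured quantities from sampled data. The hidden law (y = x**2 + x), the operator set, the mutation scheme, and the fitness function are all illustrative assumptions, not a real discovery system.

```python
# Toy (1+1)-style evolutionary search over expression trees: mutate a candidate
# expression and keep it whenever it fits the sampled data at least as well.
import random

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b, "-": lambda a, b: a - b}
TERMINALS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    """A random expression tree: either a terminal or (operator, left, right)."""
    if depth <= 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree, depth=3):
    """Replace a randomly chosen subtree with a fresh random one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

def fitness(tree, data):
    """Sum of squared errors of the tree against the sampled data."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data)

if __name__ == "__main__":
    data = [(x, x**2 + x) for x in range(-5, 6)]   # samples of the hidden relation
    best = random_tree()
    best_err = fitness(best, data)
    for _ in range(20000):                          # accept any mutation that is no worse
        child = mutate(best)
        err = fitness(child, data)
        if err <= best_err:
            best, best_err = child, err
    print("best expression:", best, "squared error:", best_err)
```

Even when it does recover the hidden relation, the gap between this and deriving general relativity from the physics of 1915 is, of course, the whole argument.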
Downvoted for groundless assumptions and for failing to google the basics. There are more, not fewer physical documents produced, because it is easier to produce them, the number of bank tellers has actually increased, etc.
Name three.
Are you a grad student? Because I don’t know much about theoretical physics, but I find it very hard to believe much academic research could be automated.
I’m a post-doc doing research on computational linguistics. I can’t imagine automating my work.
Yes I am, and I’ll soon start looking for PhD positions either in physics or some interdisciplinary field of interest. I know I seem a bit over-optimistic, and such radical changes may take at least 30-50 years, but I’d guess most of us will be alive by then, so it’s still relevant. My main point is that, step by step, theoretical tasks will move into the space of computation and the job of the theoretician will evolve into something else. If one day the computers in our computer-aided research start to output suggestions for models, or links between sets of data we haven’t thought about comparing, wouldn’t those results actually be a collaboration between us and that system? You maybe can’t imagine automating everything you do, but I’m sure you can imagine parts of your research being automated. That would allow you to use more mental resources for the conceptual and creative parts of the research, and so on.
You can’t. Most can’t, probably.
You have to come up with some good reasons WHY that would be impossible.
Well I’m working on a sub-problem of artificial intelligence. So the task of designing a system to do my work is harder than the end-goal my work is trying to achieve!
When do you believe the first human-equivalent GAI will be created?
By human-equivalent I’d guess you mean equivalent in, if not all, then many different aspects of human intelligence. I wouldn’t dare to have an opinion at the moment.
Anyone else?
Organize all the known mathematics and physics of 1915 in a computer running the right algorithms, then ask it: ‘what is gravity?’ Would it output the general theory of relativity? I think so.

Do you have any idea how to do it? What does organising the mathematics and physics actually look like? What are the right algorithms?
Probably nobody doubts that theoretical work can be done by machines. But your original claim was stronger: that these tasks are “fairly easy to represent virtually” and that the conjecture that they are going to be the last to go is a misconception.