I can see you think that this is a bad analogy. However, what isn’t so clear is why you think so.
Early attempts at machine intelligence included Eurisko and Deep Blue. It looks very much as though being first is not everything in the field of machine intelligence either.
“This new car is built entirely out of radioactive metals and plastic explosives. Farsighted people have done some analysis of the structure and concluded that when the car has run at full speed for a short period of time the plastic explosives will ignite, driving key portions of the radioactive metal together such that it produces a nuclear explosion.”
However, such statements really do need to be followed by noting that the Model T Ford was the overwhelmingly dominant car of its era and never leveled an entire city, and that Ferraris go really fast and even then they don’t explode.
An AI capable of self-improvement has more in common with that idiotic nuclear-warhead transformer car than it does with MS Windows or Deep Blue. The part of the AI that farsighted people can see taking control of the future light cone is a part that is not present in, or even related to, internet searching or a desktop OS.
On a related note…
… You aren’t allergic to peanuts, I hope!
I think that boils down to saying machine intelligence could be different from existing programs in some large but unspecified way that could affect these first-mover dynamics.
I can’t say I find that terribly convincing. If Google develops machine intelligence first, the analogy would be pretty apt (it would be an exact isomorphism), and that doesn’t seem especially unlikely.
It could be claimed that the period of early vulnerability shortens with the time dilation of internet time. On the other hand, the rate of innovation is also on internet time—effectively providing correspondingly more chances for competitors to get in on the action during the vulnerable period.
So, I expect a broadly similar first mover advantage effect to the one seen in the rest of the IT industry. That is large—but not necessarily decisive.
Recursive self-improvement instead of continued improvement by the same external agents. You (I infer from the context) have a fundamentally different understanding of how this difference would play out, but if nothing else the difference is specified.
If you mean to refer to the complete automation of all computer-programming-related tasks, then that would probably be a relatively late feature. There will be partial automation before that, much as we see today with refactoring, compilation, code generation, automated testing, lint tools—and so on.
My expectation is that humans will want code reviews for quite a while—so the elimination of the last human from the loop may take a long time. Some pretty sophisticated machine intelligence will likely exist before that happens, and that is mostly where I think there might be an interesting race, rather than one party gradually pulling ahead.
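To make the “partial automation, human still reviewing” picture concrete, here is a minimal Python sketch of that kind of gate. It is only an illustration, not anyone’s actual pipeline; flake8 and pytest stand in for whatever automated checks a project happens to run, and the final approval deliberately stays with a person.

```python
# A rough sketch of partial automation with a human still in the loop:
# machines run the mechanical checks, a human gives the final sign-off.
# flake8 and pytest are illustrative stand-ins for project-specific checks.

import subprocess
import sys


def run_check(name, command):
    """Run one automated check and report whether it passed."""
    print(f"[auto] running {name} ...")
    try:
        result = subprocess.run(command, capture_output=True, text=True)
    except FileNotFoundError:
        print(f"[auto] {name}: tool not installed, treating as a failure")
        return False
    passed = result.returncode == 0
    print(f"[auto] {name}: {'ok' if passed else 'failed'}")
    return passed


def main():
    # The automated part: cheap, repeatable, needs no human attention.
    checks = {
        "lint": ["flake8", "."],
        "tests": ["pytest", "-q"],
    }
    if not all(run_check(name, cmd) for name, cmd in checks.items()):
        return 1  # the machine already says no; no human needed yet

    # The part that stays manual for now: a human approves the change.
    answer = input("[human] checks passed - approve merge? [y/N] ")
    return 0 if answer.strip().lower() == "y" else 1


if __name__ == "__main__":
    sys.exit(main())
```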
There could be races and competition in the machine world too. We don’t yet know whether there will be anti-trust organisations there that deliberately act against monopolies. If so, there may be all manner of future races and competition between teams of intelligent machines.