I think that boils down to saying machine intelligence could be different from existing programs in some large but unspecified way that could affect these first-mover dynamics.
I can’t say I find that terribly convincing. If Google develops machine intelligence first, the analogy would hold exactly (it would be an isomorphism) - and that outcome doesn’t seem especially unlikely.
It could be claimed that the period of early vulnerability shortens with the time dilation of internet time. On the other hand, the rate of innovation is also on internet time—effectively providing correspondingly more chances for competitors to get in on the action during the vulnerable period.
So, I expect a broadly similar first-mover advantage to the one seen in the rest of the IT industry: large, but not necessarily decisive.
Recursive self-improvement, instead of continued improvement by the same external agents. You (I infer from context) have a fundamentally different understanding of how this difference would play out, but if nothing else the difference is specified.
If you mean to refer to the complete automation of all computer-programming-related tasks, then that would probably be a relatively late feature. There will be partial automation before that, much as we see today with refactoring, compilation, code generation, automated testing, lint tools—and so on.
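As a concrete illustration of the kind of partial automation that already exists - a minimal sketch, not any real tool's implementation - here is a toy lint check built on Python's standard ast module. The rule it enforces and the find_bare_excepts helper are made up for the example:

```python
import ast

# Toy lint pass: flag bare "except:" clauses, which silently swallow
# every error. Checks this narrow and mechanical are already automated
# today; a human still reviews the findings and decides what to change.

EXAMPLE_SOURCE = """
def load(path):
    try:
        return open(path).read()
    except:
        return None
"""

def find_bare_excepts(source):
    """Return the line numbers of bare 'except:' handlers in the source."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

for lineno in find_bare_excepts(EXAMPLE_SOURCE):
    print("line %d: bare 'except:' - catch a specific exception instead" % lineno)
```

Tools like this automate one slice of the programmer's job while leaving the judgement calls to a person - which is the pattern I expect to continue for some time.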
My expectation is that humans will want code reviews for quite a while - so the elimination of the last human from the loop may take a long time. Some pretty sophisticated machine intelligence will likely exist before that happens - and that is mostly where I think there might be an interesting race, rather than one party gradually pulling ahead.
There could be races and competition in the machine world too. We don’t yet know whether there will be anti-trust organisations there that deliberately act against monopolies. If so, there may be all manner of future races and competition between teams of intelligent machines.