The bits about synthetic intelligence mostly seem rather naive—and they seem out of place amidst the long rants about Jesus, Nazis and the Jews. However, a few things are expressed neatly. For example, I liked:
“When it dawns on the most farsighted people that this technology is the future and whoever builds the first AI could potentially determine the future of the human race, a fierce struggle to be first will obsess certain governments, individuals, businesses, organizations, and otherwise.”
However, such statements really do need to be followed by the observation that Google wasn’t the first search engine, and that Windows wasn’t the first operating system. Being first often helps, but it isn’t everything.
This is precisely the wrong time to apply outside-view thinking without considering the reasoning in depth. That isn’t an appropriate reference class. The ‘first takes all’ reasoning you just finished quoting obviously doesn’t apply to search engines. It wouldn’t be a matter of “going on to say”; it would be “forget this entirely and say...”
Computer software seems like an appropriate “reference class” for other computer software to me.
The basic idea is that developing toddler technologies can sometimes be overtaken by other toddlers that develop and mature faster.
Superficial similarities do scary things to people’s brains.
I can see you think that this is a bad analogy. However, what isn’t so clear is why you think so.
Early attempts at machine intelligence included Eurisko and Deep Blue. It looks a lot as though being first is not everything in the field of machine intelligence either.
“This new car is built entirely out of radioactive metals and plastic explosives. Farsighted people have done some analysis of the structure and concluded that when the car has run at full speed for a short period of time, the plastic explosives will ignite, driving key portions of the radioactive metal together such that it produces a nuclear explosion.”
However, such statements really do need to be followed by the observation that the Model T Ford was the overwhelmingly dominant car of its era and never leveled an entire city, and that Ferraris go really fast and even then don’t explode.
An AI capable of self-improvement has more in common with that idiotic nuclear-warhead transformer car than it does with MS Windows or Deep Blue. The part of the AI that farsighted people can see taking control of the future light cone is not present in, or even related to, internet searching or a desktop OS.
On a related note…
… You aren’t allergic to peanuts I hope!
I think that boils down to saying machine intelligence could be different from existing programs in some large but unspecified way that could affect these first-mover dynamics.
I can’t say I find that terribly convincing. If Google develops machine intelligence first, the analogy would be pretty convincing (it would be an exact isomorphism), and that doesn’t seem terribly unlikely.
It could be claimed that the period of early vulnerability shortens with the time dilation of internet time. On the other hand, the rate of innovation is also on internet time—effectively providing correspondingly more chances for competitors to get in on the action during the vulnerable period.
So, I expect a broadly similar first-mover-advantage effect to the one seen in the rest of the IT industry. That is large, but not necessarily decisive.
Recursive self-improvement, instead of continued improvement by the same external agents. You (I infer from this context) have a fundamentally different understanding of how this difference would play out, but if nothing else the difference is specified.
If you mean to refer to the complete automation of all computer-programming-related tasks, then that would probably be a relatively late feature. There will be partial automation before that, much as we see today with refactoring, compilation, code generation, automated testing, lint tools—and so on.
My expectation is that humans will want code reviews for quite a while, so the elimination of the last human from the loop may take a long time. Some pretty sophisticated machine intelligence will likely exist before that happens, and that is mostly where I think there might be an interesting race, rather than one party pulling gradually ahead.
There could be races and competition in the machine world too. We don’t yet know if there will be anti-trust organisations there that deliberately act against monopolies. If so, there may be all manner of future races and competition between teams of intelligent machines.