Ok, so please note I do work in the field. This doesn’t mean I know everything, and I could be wrong, but I have some knowledge, much of which is under NDA.
There are many levels of similarity.
From the platform level—the platform is the NN accelerator chips, all the support electronics, the RTOS, the drivers, the interfaces, and a host of other software tools—there is essentially zero difference between AI systems. The platform's role is to take an NN graph, usually defined as an *.onnx file, and run that graph with deterministic timing, using inputs from many sensors, each of which needs a device driver.
So that’s one part of the platforming: everyone deploying any kind of autonomy system will need to purchase platforms to run it on (and there will be only a few good enough for real-time tasks where safety is a factor).
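To make the platform's job concrete, here is a toy sketch of the scheduling loop such a platform owns: run a compiled graph every frame, on sensor inputs, and check each frame against its deadline. This is a pure-Python illustration, not real platform code; a real system would invoke the graph through something like ONNX Runtime or a vendor SDK on an RTOS, and the function names here (`run_inference_loop`, the dummy graph and sensor reader) are invented for the example.

```python
import time

def run_inference_loop(graph_fn, read_sensors, period_s, n_frames):
    """Run a compiled NN graph at a fixed period, recording whether each
    frame met its deadline -- the job a real-time platform would own."""
    results = []
    for _ in range(n_frames):
        start = time.monotonic()
        # In a real platform this would be e.g. session.run(...) on an ONNX graph.
        outputs = graph_fn(read_sensors())
        elapsed = time.monotonic() - start
        results.append((outputs, elapsed <= period_s))  # (outputs, deadline met?)
        time.sleep(max(0.0, period_s - elapsed))        # hold the fixed period
    return results

# Stand-ins for a real graph and real sensor drivers.
dummy_graph = lambda x: [v * 2 for v in x]
dummy_sensors = lambda: [1, 2, 3]

frames = run_inference_loop(dummy_graph, dummy_sensors, period_s=0.01, n_frames=3)
```

Note that nothing in the loop depends on what the graph computes: perception, planning, or anything else plugs into the same harness, which is the sense in which the platform layer is identical across AI systems.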
From the network architecture level, again, there are many similarities. In addition, networks that solve problems in the same class can often share the same architecture. For example, two networks that just identify images from a dataset can be very similar in architecture even if the datasets have totally different members.
There are technical reasons to use an existing, ‘known to work’ architecture, a main one being that a novel architecture will take a lot more work to run in real time on your target accelerator platform.
For tasks that involve physical manipulation of objects in the real world, I expect there will be many similarities even if the robots are doing different tasks.
Just a few: perception networks need to be similar, segmentation networks need to be similar, and likewise networks that predict how real-world objects will move, that predict damage, that predict what humans may do, that predict where an optimal path might be found, and so on and so forth.
I expect there will be far more similarities than differences.
In addition, even when the network weights are totally different, using the same software, network, and platform architecture means you can share code and merely repeat training on a different dataset. Example: GPT-3 retrained on a different language.
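The point about sharing everything except the weights can be sketched in a few lines. This is a deliberately trivial stand-in (a nearest-centroid classifier, not a neural network, and the `build_classifier` name and both datasets are invented for the example): the same code path produces two different deployed models purely by swapping the training data.

```python
def build_classifier(dataset):
    """One shared 'architecture', reused unchanged across deployments.
    Only the dataset (and hence the learned parameters) differs."""
    # 'Training': compute one centroid per label.
    centroids = {}
    for label, points in dataset.items():
        dim = len(points[0])
        centroids[label] = [sum(p[i] for p in points) / len(points)
                            for i in range(dim)]

    def predict(x):
        # 'Inference': nearest centroid by squared distance.
        def dist2(c):
            return sum((a - b) ** 2 for a, b in zip(x, c))
        return min(centroids, key=lambda lbl: dist2(centroids[lbl]))

    return predict

# Same code, two unrelated datasets -> two different sets of 'weights'.
digits = build_classifier({"low": [(0.0, 0.1)], "high": [(1.0, 0.9)]})
temps  = build_classifier({"cold": [(-5.0, 0.0)], "hot": [(30.0, 1.0)]})
```

The analogy to the GPT-3 example is direct: the architecture, training code, and serving code are all reused; only the dataset, and therefore the learned weights, change.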
Hmm, just because the abstract form of your algorithm is the same as everyone else’s, this doesn’t mean you can reuse the same algorithm… In some sense, it’s trivial that the abstract form of all algorithms is the same: [inputs] → [outputs]. But this doesn’t mean the same algorithm can be reused to solve all the problems.
This is incorrect. You’re also not thinking abstractly enough—you’re thinking of what we see today, where AI systems are not platformed and are just a mess of Python code defining some experimental algorithm (e.g. OpenAI’s examples). That isn’t production-grade or reusable, and it has to be, or it will not be economical to use.