Seems like an issue of code/data segmentation. Programs can contain compile time constants, and you could turn a neural network into a program that has compile time constants for the weights, perhaps “distilling” it to reduce the total size, perhaps even binarizing it.
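To make the "weights as compile-time constants" idea concrete, here is a minimal sketch (the weights are made-up numbers, and the model is a toy linear classifier, not anything from the original discussion):

```python
# Sketch: a tiny trained model "compiled" into code, with the weights
# appearing as literal constants rather than as a separate data file.
# All numbers are illustrative.

def tiny_classifier(x1: float, x2: float) -> int:
    # Weights and bias baked in as compile-time constants.
    w1, w2, b = 0.8, -1.2, 0.3
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# A binarized variant: weights restricted to +1/-1, so in principle
# each weight needs only a single bit of storage.
def tiny_classifier_binarized(x1: float, x2: float) -> int:
    w1, w2, b = 1, -1, 0
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
```

At this point the weights are indistinguishable from any other constants in the program text, which is what makes the segmentation question awkward.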
Arguably, video games aren’t entirely software by this standard, because they use image assets.
Formally segmenting “code” from “data” is famously hard because “code as data” is how compilers work and “data as code” is how interpreters work. Some AI techniques involve program synthesis.
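The "data as code" half of that blur can be shown in a few lines. This is a hypothetical toy interpreter, not anything referenced above: the "program" is an ordinary nested tuple, i.e. plain data, until an interpreter runs it:

```python
# Minimal interpreter sketch: a nested tuple is inert data,
# but evaluate() treats it as an executable expression.

def evaluate(expr):
    """Interpret a tiny arithmetic language: data becomes behavior."""
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    if op == "+":
        return evaluate(left) + evaluate(right)
    if op == "*":
        return evaluate(left) * evaluate(right)
    raise ValueError(f"unknown operator: {op}")

program_as_data = ("+", 1, ("*", 2, 3))  # just a tuple, until interpreted
print(evaluate(program_as_data))  # 7
```

Whether `program_as_data` counts as "code" or "data" depends entirely on whether something is prepared to run it, which is the point.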
I think the relevant issue is copyright more than the code/data distinction, since code can be copyrighted too.
While I agree that wedding photos and NN weights are both data, and this helps to highlight ways they “aren’t software”, I think this undersells the point. NN weights are “active” in ways wedding photos aren’t. The classic code/data distinction has a mostly-OK summary: code is data of type function. Code is data which can be “run” on other data.
NN weights are “of type function” too: the usual way to use them is to “run” them. Yet, it is pretty obvious that they are not code in the traditional sense.
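The "data of type function" framing can be sketched directly (the weight vector and model here are illustrative, not from any real network): the same numbers can sit inert, like a photo's pixels, or be closed over to yield something you "run" on other data.

```python
# Sketch: one weight vector, two views.

weights = [0.5, -0.25, 1.0]  # inert data: just numbers, like pixels

def as_function(w):
    # Close over the weights to produce a callable: data of type function.
    def model(xs):
        return sum(wi * xi for wi, xi in zip(w, xs))
    return model

model = as_function(weights)
print(model([2.0, 4.0, 1.0]))  # 0.5*2 - 0.25*4 + 1.0*1 = 1.0
```

Nothing about the bytes changes between the two views; what changes is that an interpreter (here, the forward pass) stands ready to apply them.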
So I think this is similar to a hardware geek insisting that code is just hardware configuration, like setting a dial or flipping a set of switches. To the hypothetical hardware geek, everything is hardware; “software” is a physical thing just as much as a wire is. An Arduino is just a particularly inefficient control circuit.
So, although from a hardware perspective you basically always want to replace an Arduino with a more special-purpose chip, “something magical” happens when we move to software: new sorts of things become possible.
Similarly, looking at AI as data rather than code may be a way to say that AI “isn’t software” within the paradigm of software, but it is not very helpful for understanding the large shift that is taking place. I think it is better to see this as a new layer, in somewhat the same way that software was a new layer on top of hardware. The kinds of thinking you need in order to do something with hardware vs. with software are quite different, but ultimately more similar to each other than either is to how you do something with AI.