I’m not certain I follow your intent with that example, but I don’t think it breaks any category boundaries.
The process using some algorithm to find your face is software. It has data (a frame of video) as input, and data (coordinates locating a face) as output. The facial recognition algorithm itself was maybe produced using training data and a learning algorithm (software).
There’s then some more software which takes that data (the frame of video and the coordinates) and outputs new data (a frame of video with a rectangle drawn around your face).
It is frequently the role of software to transform one type of data into another. Even if data is bounced rapidly through several layers of software to be turned into different intermediary or output data, there’s still a conceptual separation between “instructions to be carried out” versus “numbers that those instructions operate on”.
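To make the separation concrete, here's a minimal toy sketch of the pipeline described above. All function names and numbers are made up for illustration; the point is just that the functions are "instructions to be carried out" (software) while the frames and coordinates are "numbers that those instructions operate on" (data).

```python
# Toy face-finding pipeline: software transforming data into other data.

def find_face(frame):
    """Software: takes a frame (data) and returns face coordinates (data)."""
    # Stand-in for a real detector; always "finds" a face at a fixed spot.
    return (10, 20, 5, 5)  # (x, y, width, height)

def draw_rectangle(frame, box):
    """More software: takes a frame plus coordinates and returns a new
    frame (data) with a rectangle drawn around the face."""
    x, y, w, h = box
    out = [row[:] for row in frame]  # copy, so the input data is untouched
    for dx in range(w):              # top and bottom edges
        out[y][x + dx] = 1
        out[y + h - 1][x + dx] = 1
    for dy in range(h):              # left and right edges
        out[y + dy][x] = 1
        out[y + dy][x + w - 1] = 1
    return out

frame = [[0] * 32 for _ in range(32)]   # data: one frame of "video"
box = find_face(frame)                  # data in, data out
annotated = draw_rectangle(frame, box)  # data in, new data out
```

Each stage is conceptually the same shape: data goes in, different data comes out, and the software in between never becomes data itself.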
I’m not saying that I can force a breaking of category boundaries; I’m asking whether the categories are actually useful for thinking about these systems. I’m saying they aren’t, and we need to stop trying to use categories in this way.
And your reply didn’t address the key point—is the thing that controls the body shown in the transmitted data software, or data? And, in parallel, is the thing that controls the output of the AI system software or data?
Oh, I see (I think) - I took “my face being picked up by the camera” to mean the way the camera can recognise and track/display the location of a face (I thought you were making a point about the degree of responsiveness and mixed processing/data involved in that), rather than the literal face itself.
A camera is a sensor gathering data. Some of that data describes the world, including things in the world, including people with faces. Your actual face is indeed neither software nor data: it’s a physical object. But it does get described by data. “The thing controlling” your body would be your brain/mind, which aren’t directly imaged by the camera to be included as data, but can be inferred from it.
So are you suggesting we ought to understand the AI like an external object that is being described by the data of its weights/algorithms rather than wholly made of that data, or as a mind that we infer from the shadow cast on the cave wall?
I can see that being a useful abstraction and level of description, even if it’s all implemented in lower-level stuff; data and software being the mechanical details of the AI in the same way that neurons squirting chemicals and electrical impulses at each other (and below that, atoms and stuff) are the mechanical details of the human.
Although, I think “humans aren’t atoms” could still be a somewhat ambiguous statement—would want to be sure it gets interpreted as “we aren’t just atoms, there are higher levels of description that are more useful for understanding us” rather than “humans are not made of atoms”. And likewise for the AI at the other end of the analogy.
Yes, I think you’re now saying something akin to what I was trying to say. The AI, as a set of weights and activation functions, is a different artifact from the software being used to multiply the matrices, much less the program used to output the text. (But I’m not sure this is quite the same as a different level of abstraction, the way humans versus atoms are—though if we want to take that route, I think gjm’s comment about humans and chemistry makes this clearer.)
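A minimal sketch of that distinction, with made-up numbers: the matrix-multiply and activation code below is generic software, while the weights are just data—swap in a different set of weights and the same code runs a different "AI".

```python
# Generic software: would run any weights you hand it.
def matmul(m, v):
    """Multiply matrix m by vector v."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def relu(v):
    """An activation function."""
    return [max(0.0, x) for x in v]

# The "AI" itself, on this view: a set of weights, which is just data.
weights = [[1.0, -1.0],
           [0.5, 0.5]]

output = relu(matmul(weights, [2.0, 1.0]))
```

The artifact we care about (the model) lives in `weights`; the code around it is interchangeable plumbing.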