Definitely agreed. AI is software, but not all software is the same and it doesn’t all conform to any particular expectations.
I did also double-take a bit at “When photographs are not good, we blame the photographer, not the software running on the camera”, because sometimes we do quite reasonably blame the software running on the camera. It often doesn’t work as designed, and often the design was bad in the first place. Many people are not aware of just how much our cameras automatically fabricate images these days, and present an illusion of faithfully capturing a scene. There are enough approximate heuristics in use that nobody can predict all the interactions with the inputs and each other that will break that illusion.
A good photographer takes a lot of photos with good equipment in ways that are more likely to give better results than average. If all of the photos are bad, then it’s fair to believe that the photographer is not good. If a photograph of one special moment is not good, then it can easily be outside the reasonable control of the photographer and one possible cause can be software behaving poorly. If it’s known in advance that you only get one shot, a good photographer will have multiple cameras on it.
I’m asking the question of how we should think about the systems, and claiming “software” is very much the wrong conceptual model. Yes, AI can work poorly because of a software issue—API timeouts, for example. But the thing we’re interested in discussing is the AI, not the software component—and as you point out in the case of photography, the user’s skill with the software, and with everything else about taking photographs, matters too, and should be discussed in its own terms rather than in terms of the software being used.