If anyone is curious about my stance on this now that I have reproduced:
My baby is cuter than most babies. Some people who are not related to him have agreed with me on this but there is probably still bias in the sample. He does have traits I have always considered advantageous in babies generally or desirable in mine specifically though.
He is very difficult to photograph well. He gets distracted by the camera and moves at inopportune times. Wildlife photographers probably have solutions to similar wildlife-related issues and maybe pro baby photographers do too. I don’t know how this affects image quality ratios in Google results.
My baby is much more appealing as a process than a snapshot. He is soft and squishy and warm in addition to being nice to stare at, and has learned to smile and laugh in response to things we do, and he is endearingly incompetent at many tasks he attempts. Some animals can do that sort of thing too though.
I still think it’s suspect that the cuteness response fires strongly in response to bunnies etc., but I may have stacked the deck more than I would have if I had known more at the time.
Advice: get a camera that focuses quickly (most point-and-shoots and all smartphones don’t), can shoot in bursts, and has or can take bright lenses.

My dad has a shoot-in-bursts feature on his phone which seems neat but I barely use my phone enough to justify having it, let alone replacing it. We’ve gotten some irregular good photographs of him (one person who sometimes comes over to help is particularly good at this).
Taking pictures of kids is a technically demanding thing. If you want good images consistently, you’ll have to buy an actual photo camera :-/

I would first have to get steadier hands.

No, you wouldn’t. Cameras do anti-shake (image stabilization) very, very well these days. With certain cameras people get sharp images from multi-second (!) hand-held exposures. For kids, the subject movement will be the determining factor; your hands can shake all they want.
Making explicit something implicit in Lumifer’s comment: children move a lot and image stabilization won’t do anything about that[1], so with an image-stabilized camera (and perhaps even without) the only way to avoid motion blur is to reduce the exposure time. This in turn requires you to get more photons to the sensor per unit time, which requires a physically larger camera. Smartphone cameras are incredibly impressive these days given the constraints they work under, but a good “real” camera can take in a whole lot more light than the camera in any phone, which will mean shorter exposures and hence sharper kid pictures.
[1] Though, hmm, I wonder whether it would be possible to make a camera that identifies subjects and how they’re moving—this is already done for autofocus—and then uses the image stabilization machinery to keep the subject as motionless as possible in the image. That would be startling but isn’t obviously impossible. (If the subject moves too much, obviously it’s hopeless.)
[EDITED to add: For the avoidance of doubt, I am 100% confident that Lumifer already knows all that, with the possible exception of the idea in the footnote, and 95% confident that you understood it all from what he said; this is for the sake of that last 5%.]
[EDITED again to add:] Pretty sure the idea in the footnote isn’t really workable. Autofocus tracks subject movement between photos. This would require watching within a single image capture, which implies either taking lots of short-exposure shots instead of a single longer one (implying more readout noise) or else having a separate sensor used only for this (but unless a lot of the light is getting diverted to that separate sensor it’s going to be seeing super-noisy images which can’t be good for its ability to track subjects). Also, this seems quite expensive computationally.
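(For a rough sense of scale: at the same f-number and framing, total light collected scales roughly with sensor area, and that extra light is what a bigger camera can spend on shorter exposures at comparable noise. A back-of-envelope sketch, with approximate sensor sizes that are illustrative rather than taken from this thread:)

```python
import math

# Rough back-of-envelope: at equal f-number and framing, total light
# collected scales with sensor area, so the area ratio is (roughly) the
# factor by which exposure can be shortened at comparable overall noise.
sensors_mm = {                      # approximate sensor dimensions (w, h) in mm
    'smartphone 1/2.3"': (6.2, 4.6),
    "Micro Four Thirds": (17.3, 13.0),
    "APS-C":             (23.6, 15.7),
    "full frame":        (36.0, 24.0),
}

base_w, base_h = sensors_mm['smartphone 1/2.3"']
base_area = base_w * base_h

for name, (w, h) in sensors_mm.items():
    ratio = (w * h) / base_area      # light advantage over the phone sensor
    stops = math.log2(ratio)         # the same advantage expressed in stops
    print(f"{name:18s} ~{ratio:4.1f}x the light, ~{stops:3.1f} stops")
```

By this crude measure an APS-C sensor gathers on the order of 13x (roughly 3.5 to 4 stops) more light than a typical phone sensor; real-world differences also depend on lens speed and sensor efficiency, so treat the numbers as indicative only.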
This in turn requires you to get more photons to the sensor per unit time, which requires a physically larger camera
Yes, but there is one other way besides getting a bigger sensor—get brighter lenses. One f-stop difference gives you twice as many photons.
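(A quick check of that arithmetic, as a sketch with made-up example f-numbers: light gathered scales with the square of the f-number ratio, and one full stop is a factor of two.)

```python
import math

def light_ratio(f_slow: float, f_fast: float) -> float:
    """How much more light the faster (smaller f-number) lens passes."""
    return (f_slow / f_fast) ** 2

# e.g. a typical f/4 kit zoom vs. an f/1.8 prime (illustrative values only)
ratio = light_ratio(4.0, 1.8)
print(f"{ratio:.1f}x the light, ~{math.log2(ratio):.1f} stops")  # ~4.9x, ~2.3 stops
```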
As to your idea, it might be more workable than you think :-)
Autofocus tracks subject movement between photos
You are assuming an SLR and that’s not the only choice nowadays. Mirrorless cameras have their sensor open all the time and read it continuously (plus some have specific autofocus sensels embedded into the main sensor).
Besides, continuous AF already tries to predict the subject movement. It’s not a big stretch to apply it to IS as well.
There is the issue of what to track, but tracking the eyes seems like a reasonable default and eye identification already exists in consumer cameras (it’s used to maintain the focus on the eyes).
The big issue is that IS is very limited in the magnitude of movement it can compensate for and for large shifts you will need to move the whole camera (using something like an autopanning tripod head that FOOMed).
All in all, some kind of “subject movement compensation assist” seems technically possible. But at consumer level, probably not before Alicorn’s kid grows up.
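(For what it’s worth, here is one shape such a “subject movement compensation assist” could take, purely as a hypothetical sketch: every name and number below is invented for illustration, and no current camera exposes anything like this. The idea is just to take the AF tracker’s motion estimate, predict where the subject will be during the exposure, and hand that path to the IS actuator.)

```python
from dataclasses import dataclass

@dataclass
class TrackedSubject:
    x: float    # position on the sensor, in pixels (from the AF tracker)
    y: float
    vx: float   # estimated velocity, pixels per second
    vy: float

def plan_is_path(subject: TrackedSubject, exposure_s: float,
                 steps: int = 50, max_shift_px: float = 100.0):
    """Sensor-shift offsets (in px) to apply over the exposure, assuming the
    subject keeps moving at its last estimated velocity."""
    path = []
    for i in range(1, steps + 1):
        t = exposure_s * i / steps
        dx, dy = subject.vx * t, subject.vy * t
        # Real IS mechanisms have only a small travel range; beyond it the
        # subject simply cannot be followed (the "hopeless" case above).
        if abs(dx) > max_shift_px or abs(dy) > max_shift_px:
            break
        path.append((dx, dy))
    return path

# e.g. a hand drifting at ~300 px/s during a 1/10 s exposure stays within range
print(len(plan_is_path(TrackedSubject(0.0, 0.0, 300.0, 0.0), 0.1)))
```

The clamp at the end is the limitation mentioned above: once the predicted shift exceeds the IS mechanism’s travel, the assist can no longer keep the subject still.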
Oh yes, very much so. But the brighter lenses, again, require non-smartphone cameras. (Not necessarily SLRs, of course.)
You are assuming an SLR
I wasn’t, I promise.
have their sensor open all the time and read it continuously
Open all the time, yes. Continuously, not so much so far as I know. The processing is separate from the sensor, and there’s a readout process that amounts to capturing an image from light falling on the sensor during a given period.[1] Hmm, if readout and reset are separate (which I think they generally are) then I suppose you can capture shorter “subframes” without disturbing the capture of a longer frame within which they occur. That was an error on my part, but it wasn’t the result of assuming an SLR camera. I still worry that getting the information needed would require very short (and therefore noisy) subframes, and that that would interfere with accurate tracking. But I haven’t done the obvious experiments to see what the images would be likely to look like.
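(To put rough numbers on the “very short and therefore noisy” worry, with invented values rather than anything measured: a subframe that sees only a small fraction of the light has far less signal sitting on the same per-readout noise floor, so its signal-to-noise ratio is several times worse.)

```python
import math

# Illustrative only: photon shot noise has variance equal to the signal
# (in electrons), and each readout adds a fixed read noise on top.
main_signal_e = 5000.0    # photoelectrons collected over the full exposure
read_noise_e  = 3.0       # read noise per readout, in electrons (invented)
subframe_frac = 1.0 / 50  # each tracking subframe gets 1/50 of the light

sub_signal = main_signal_e * subframe_frac
main_snr = main_signal_e / math.sqrt(main_signal_e + read_noise_e**2)
sub_snr  = sub_signal / math.sqrt(sub_signal + read_noise_e**2)

print(f"main exposure SNR: {main_snr:5.1f}")   # ~70.6
print(f"one subframe SNR:  {sub_snr:5.1f}")    # ~9.6
```

Roughly a sevenfold drop in this toy example; whether that still leaves enough to track a subject reliably is exactly the open question.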
continuous AF already tries to predict the subject movement.
I’m not sure why you’re telling me this, since I already said exactly the same thing in the comment you were replying to, and the whole point of my proposal was to make use of the subject-motion-tracking already implemented for AF to enable the IS mechanism to compensate for subject motion.
tracking the eyes
Yes, though of course that fails if the subject’s eyes happen not to be in shot, or if the subject is something without eyes, or if they’re too small in the image to track well (if this turns out to be feasible, bird photographers will love it—though possibly birds move too fast). AF can do pretty well at tracking subjects even if they don’t have visible eyes; I assume this system would use essentially the same techniques. (Track whatever high-contrast features happen to be visible in the right places, I guess.)
very limited in the magnitude of movement it can compensate for
Yes (that was my point about it being hopeless if the subject moves too much). But we’re talking here (or at least I am) about movement within a single image-capture, and the point is simply to extend the range of acceptable exposure times. If a sharp image requires that your child not move more than a pixel or two, and if you have an IS system that can move the sensor by 100 pixels[2], and—this is the tricky bit—if this hypothetical system can predict the child’s movement well enough—then you can get a sharp image with an exposure 50x longer than without the system. In practice it would not be nearly that good, of course. (The same argument applies to the use of IS to mitigate hand movement, but even the best IS systems don’t deliver a 6-stop improvement. And they have the advantage of being able to use accelerometers to measure how the camera is moving rather than depending on analysing previous image captures.) If we’re talking about improvements of a stop or two, then the max displacement of the IS mechanism would probably not be the limiting factor.
[1] For “rolling-shutter” sensors, the relevant region in spacetime is “sheared” :-).
[2] I haven’t looked hard for this information, but one thing found by desultory googling is that the sensor-shift IS on the (now some years old) Pentax K-7 SLR can move the sensor by about 1mm. The horizontal size of its sensor is about 24mm and 5000 pixels, so 1mm is about 200px.
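(The arithmetic behind those figures, spelled out with the same round numbers used above:)

```python
import math

# Footnote [2]: the K-7 sensor is roughly 24 mm / 5000 px wide, so a 1 mm
# sensor shift corresponds to roughly 200 px.
px_per_mm = 5000 / 24
print(f"1 mm of sensor shift ~= {px_per_mm:.0f} px")   # ~208 px

# The hypothetical above: a sharpness budget of ~2 px of subject blur and an
# IS mechanism able to follow the subject over ~100 px of travel.
extension = 100 / 2                                    # 50x longer exposure
print(f"exposure extension: {extension:.0f}x (~{math.log2(extension):.1f} stops)")
```

So the 50x (about 5.6 stops) in the comment is the best case; as noted, prediction accuracy rather than IS travel would likely be the real limit.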
Cameras with an electronic viewfinder have to update it at a reasonable refresh rate; if the AF is set to continuous, it’s updated in real time as long as you half-press the shutter button, and the exposure/histogram is also updated in real time. The issue is basically how high a frequency it can manage.
I’m not sure why you’re telling me this
The key word is “predict”. If you are confident of your prediction, you can do an exposure without measuring anything while it’s in process.
simply to extend the range of acceptable exposure times
Well, there clearly would be a lot of trade-offs involved. An obvious one is that if you e.g. pan the sensor to keep the eyes sharp, all the motionless elements in the image would get smudged. That might work fine for a particular picture, but it is a specific look.
even the best IS systems don’t deliver a 6-stop improvement
They do now. The latest Olympus, the E-M1 Mark II, claims to do 5.5 stops just with body IS, and if you add lens IS that it can talk to (not sure there are more lenses that can do that besides the 12-100mm) it goes up to 6.5 stops.
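(For scale, converting those claimed figures into exposure-time multipliers, keeping in mind that each stop of stabilization doubles the hand-holdable exposure and that none of this helps with subject movement:)

```python
# Each stop of stabilization doubles the hand-holdable exposure time.
for stops in (5.5, 6.5):
    print(f"{stops} stops ~= {2 ** stops:.0f}x longer hand-held exposure")
# e.g. 5.5 stops turns 1/100 s into roughly 0.45 s (hand shake only;
# a moving child will still blur).
```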
Wow, okay, I guess that might be worth it. Spouse has a “nice camera” but I don’t know if it does this.
Also, a little bit of photo-taking posture helps a lot.