A look at image stabilization

A long time ago, when someone asked for hypothetical D200 wishlist features, I asked for just one: in-camera vibration reduction.

Whatever you call it, this one recent gift from the video world has revolutionized photography by reducing the need for monopods and tripods.

A recent article in which Gary “saw the light” about image stabilization in consumer cameras made me think about my wish.

What is image stabilization?

The answer depends on whom you ask and on the company implementing the solution, so maybe we should define the solutions out there and each company’s approach.

The oldest form of image stabilization is Canon’s, which brought the technology over from its video cameras and calls the system “Image Stabilizer” (IS). The approach is to have a group of optics that “float” in the lens in order to counteract the vibration occurring in the camera. Because of this, I’ll call this class of solutions “optical image stabilization.”

Other proponents of this include Nikon, which calls it “Vibration Reduction” (VR), and Panasonic, which calls its system “Optical Image Stabilizer” (MEGA O.I.S.).

In pocket cameras, the examples are the Canon SD700 IS, Nikon Coolpix P4, and Panasonic FX01, as well as the Panasonic LX1, which I own:

Canted snaps
Patxi’s Pizza, Palo Alto, California

Nikon D70, Nikkor 18-200mm f/3.5-5.6G VR
1/6 sec @ f/5.6, ISO 1600, 200mm (300mm)

Wondering how I could handhold a 300mm effective shot at 1/6 of a second? This is Nikon VR (and being able to brace against the table).
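
Out of curiosity, here is a back-of-the-envelope Python sketch of how much “help” that shot implies, assuming the old 1/focal-length handholding rule of thumb (the rule and the function name are mine, not anything Nikon publishes):

```python
import math

def stops_of_help(equiv_focal_mm, shutter_seconds):
    """Rough rule of thumb: an unstabilized handheld shot wants a shutter of
    about 1/(35mm-equivalent focal length) seconds.  Anything slower that
    still comes out sharp implies this many stops of help (VR, bracing, luck)."""
    handhold_limit = 1.0 / equiv_focal_mm
    return math.log2(shutter_seconds / handhold_limit)

print(round(stops_of_help(300, 1 / 6), 1))  # the shot above: ~5.6 stops
```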

Ironically (or perhaps not), the inventors of optical image stabilization were the last to apply it to compact/subcompact cameras.

The major problem with this, besides the technology being almost exclusive to #1 and #2 (Canon and Nikon), is that it requires sinking a lot of money into lenses. This isn’t a big deal for compact cameras, but it is if you are a Nikon freak like me… You can see why my “wish” was tongue-in-cheek: why would Nikon or Canon put any form of stabilization into the camera body when they already make so much money putting it in the lenses?

I wait for Pizza with my Nikon camera (from side)
Patxi’s Pizza, Palo Alto, CA

Lumix DMC-LX1
1/3 sec @ f/2.8, ISO 200, 6mm (28mm)

Wondering how Caitlin could handhold a 28mm shot at 1/3 of a second with no loss of sharpness? This is Panasonic MEGA O.I.S.

If the Lumix lens roadmap is to be believed, we can expect much of the same from Panasonic. (If the numbers seem a bit strange, remember that since this is the 4/3 system, you double the focal lengths to get the 35mm equivalents. This means the 14-150mm f/3.5-5.6 OIS due out in 2007 is their version of my 18-200mm f/3.5-5.6G VR Nikkor.)
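
For the arithmetic-inclined, here is that doubling spelled out as a tiny Python sketch; the crop factors are the commonly quoted ones (2.0x for Four Thirds, 1.5x for Nikon DX), and the function is mine:

```python
def equiv_range(wide_mm, tele_mm, crop_factor):
    """35mm-equivalent focal range of a lens on a cropped sensor."""
    return (wide_mm * crop_factor, tele_mm * crop_factor)

print(equiv_range(14, 150, 2.0))  # Panasonic 14-150mm OIS -> (28.0, 300.0)
print(equiv_range(18, 200, 1.5))  # Nikkor 18-200mm VR     -> (27.0, 300.0)
```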

Shake that body!

Well, instead of floating an optical element to keep the image on the CCD stable, why not shift the CCD itself to compensate? This was the solution created by Minolta, then Konica-Minolta, and now Sony. They call it “Anti-Shake” (AS), and it has been a distinguishing feature of the K-M line from their compact cameras to their digital SLRs (formerly K-M Dynax, Maxxum, or Alpha, now called Sony Alpha).

By the way, Sony isn’t the only one to do this. With the introduction of the Pentax K100D, Pentax brings “Shake Reduction” to its dSLR bodies. Not that the idea is new to them: it appeared in their compact cameras first, which pair a sliding lens system for compactness with CCD shifting in the body for vibration reduction.

While optical image stabilization vs. sensor shifting seems a wash for a pocket camera, the difference matters for dSLRs with removable lenses. Why? Well, if you put a $100 50mm(ish) Sony Alpha-mount or Pentax K-mount lens on one of these anti-shake bodies, you have vibration reduction in a prime lens for free!

When I said, “I want VR in a Nikon,” this is what I meant. So you’ll forgive me if sometimes I look enviously at my Konica-Minolta (Sony) and (soon) Pentax compatriots.

And two more solutions…

Casio’s system tries to solve the shake problem in the digital signal processor. This means that the CCD samples the image over time across different photosites in the sensor in order to do in-software what anti-shake does with physical movement.

I’m not a big fan of this approach. It is not widely known, but Casio had this feature as far back as my Casio Exilim EX-Z750 in one of its “Best Shot” modes. The shutter lag caused by that mode was intolerable and the quality of the output was “teh suck.” Once bitten, twice shy.

But don’t discount this approach; according to the latest reviews, cameras such as the Casio EX-Z1000 have fixed the performance and quality issues. A side benefit? Anti-shake right in the video recording. Couple this with Casio’s legendarily “shooter friendly” operation and Pentax sliding-lens optics and you have a great consumer camera in a compact body.

The final approach I am aware of isn’t really image stabilization at all. Fuji has a system called “Picture Stabilization” which is really like an auto-ISO feature on a good dSLR system: it automatically raises the ISO in order to keep the shutter speed high enough for a steady handheld shot. How can they get away with this and not suck? Well, a dirty little secret about Fuji is that they make their own CCDs with a unique design: two sensors per photosite, where the extra sensor is a different size (a different size means it is sensitive to a different amount of light). This gives cameras such as the Fuji F30 the ability to go to ISO 3200. Pretty impressive for a compact.

Stating the obvious

Whenever there is a discussion about image stabilization, it is important to remember that it is not a panacea. What it does is replace the inconvenience of a tripod or monopod. What it cannot do is freeze the subject you are photographing. This becomes a huge factor as you use image stabilization more: you might forget that your shutter speed is too low to keep the subject from blurring, something that happens all the time in event photography. At that point you have to resort to more traditional solutions: higher ISO or a strobe.

Finally, I haven’t looked into things very closely, but it looks like all image stabilization systems only work in two dimensions. The world we operate in is three-dimensional. How is this a factor? Well, if you are taking a macro shot without a tripod, then shake toward or away from the subject might not be stabilized! (I guess at this point it should be called “focus stabilization.”)

Theoretically, this can be solved with an optical image stabilization system (or an anti-shake system where you float, instead of slide, the sensor), but I don’t know if any system solves this or if anyone cares to solve it. If anyone knows the answer to this, please let me know.

A pet theory

One thing I’ve noticed about using optical image stabilization (a lot) is that it is very hit-or-miss. Sometimes the system kicks in, sometimes it thinks you’re panning and doesn’t, and sometimes it gets overly aggressive about your shots. Sometimes you get “2 stops more”; sometimes it’s 4 stops, or none at all.

I have this pet theory that the popularity of image stabilization coincides with the popularity of digital photography in general. Why? Because the cost to develop a shot and throw out a bad shot is low, offsetting the natural hit-or-miss nature of VR.

Sony has a new CCD sensor that can take 60 full-resolution frames a second. So what? Who needs that? Isn’t that just video, you ask? Well, an interesting thing about lossy compression is this principle: sharper shots compress less. I can imagine a near future where you take a dozen shots of the same scene with one shutter press and the selection of “the keeper” happens automatically in-camera. God knows I’m getting tired of organizing all my stacks in Aperture with the loupe tool on.

At that point, the cost to using image stabilization is zero.
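
Here is a toy Python sketch of what that in-camera keeper selection might look like, built on nothing more than the “sharper shots compress less” idea: re-encode each frame of the burst at a fixed JPEG quality and keep the largest file. It uses Pillow, the file names are made up, and a real camera would surely do something smarter:

```python
import io
from PIL import Image

def pick_keeper(paths, quality=85):
    """Pick the frame that compresses least (largest JPEG) as a crude proxy
    for sharpness: blur smooths away detail, and smooth images compress well."""
    def compressed_size(path):
        buf = io.BytesIO()
        Image.open(path).convert("RGB").save(buf, format="JPEG", quality=quality)
        return buf.tell()
    return max(paths, key=compressed_size)

print(pick_keeper(["burst_01.jpg", "burst_02.jpg", "burst_03.jpg"]))
```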

26 thoughts on “A look at image stabilization”

  1. The discussion continues on PhotographyBlog. Particularly interesting is the question of high-ISO solutions like Fuji’s.

    I won’t comment about collusion (I’m not as familiar with the market as they are, but my gut instinct is that risk-aversion and legal issues can account for many of those anecdotes), but there are theoretical limits to high ISO photography, in particular, the photon itself.

    The human eye has near-ideal characteristics for its aperture in terms of low-light sensitivity, in that the photoreceptors in the eye will depolarize with a single photon event. Now that isn’t perfect, since these cells are transparent (hence the reflective tapetum in the eyes of cats), nor does a depolarization mean that the signal is transmitted to the brain (some processing is done by three other layers of neurons to denoise and anti-alias the signal in-eye, believe it or not). But the general point holds: there are theoretical limits, and that limit is the discrete nature of light itself. For the record, your eye takes about a 1/30-second exposure, so if you are doing an eye comparison, you have to consider that too.

    When I read articles such as this, I begin to question them. Sure, we can theoretically beat the eye, but not by much. Ever use night-vision goggles? Notice how noisy those things are? That’s because they’re basically creating a single electron event in correspondence with each single photon event and then using a photomultiplier tube to magnify that single electron onto the back of a display. What do you think the minimum theoretical shot noise in that system is? (The answer: as much as the signal; a quick numerical check follows at the end of this comment.) This sensor is CMOS, which means the circuitry sits right next to the photosensor. “Nanotechnology” or not, how do you think they’ll eliminate the dark-current noise in this system? (They can’t; all computation gives off heat.)

    When I was in college, an EE senior mentioned that, using superconductors, they were building transistors for supercomputers with no loss of energy. “That can’t be true,” I said. “Maxwell’s demon says that there must be a little loss in order to switch them, or else you have a perpetual motion machine.” And yet, he and a number of others continued to argue against the obvious.

    Guess who proved right.
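
    Back to the shot-noise point, here is a quick numerical sanity check, assuming nothing more than Poisson photon statistics (the noise scales as the square root of the photon count):

    ```python
    import math

    # Photon arrivals are Poisson: standard deviation = sqrt(N) for a mean of N.
    # At the single-photon level the noise is as large as the signal itself.
    for n_photons in (1, 100, 10_000):
        noise = math.sqrt(n_photons)
        print(f"N={n_photons:>6}  noise={noise:8.1f}  SNR={n_photons / noise:6.1f}")
    ```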

  2. @carpeicthus: Yes, I was thinking along the lines of the Coolpix 4500 when I read that Sony press release a few months ago. 🙂

  3. I forgot to mention another difference between optical image stabilization and sensor shift: how it looks through the viewfinder. When using an optically stabilized lens, the viewfinder image is also stabilized. When using a sensor-shift system, what you see in the viewfinder will not be the exact image (there has got to be some cutoff) because the sensor moves to stabilize the image at shutter press.

    This only applies to SLR photography. When shooting with a compact camera that uses the LCD as a viewfinder, there is no difference.

  4. Doesn’t optical IS have an advantage over sensor-based, though (other than having the VF work)?

    I thought it was something about having the stabilization as close to the axis as possible.

  5. @Tracer: Oh, that makes a lot of sense and is a good point! You’d have to shift less to create the same effect. But still, the amount of shift is determined by the amount of shake, not the mechanism.

  6. There may be good reasons that Leica, Nikon and Canon are leaving in-camera anti-vibration to others: they make full-frame pro equipment, and the mass of a full-frame sensor plus the magnets needed to move it around would likely slow its performance unless plugged into a car battery. The in-camera anti-shake solution is only implemented on consumer-grade equipment.

    Secondly, is anti-shake really needed below 100mm focal lengths in most shooting situations? Sure, it’s nice to get an extra stop when the sun is setting, but if you need anti-shake help when you’re shooting with an APS-C lens set to 20 mm, where everything past three feet is at infinity, you’ve got to cut out the coffee. Either that or turn on the flash.

    Thirdly, including the anti-wiggle in the lens may prove more responsive, as you’re doing the correction before the light exits the pupil at the back of the lens. Finally, and who can say except the engineers who tested these things: in-lens may be more robust, but who knows?

  7. @Me: Good points.

    The first sounds like a myth.

    The second, while reasonable, is not quite right. As resolutions increase and sensors get smaller (APS-C), camera shake kicks in sooner. Whether anti-shake is “needed” is a subjective question; let’s just say it is more advantageous at longer focal lengths.

    The third is also correct, but only partly. Having image stabilization in the optics does mean that what the eye sees is what the camera records, whereas sensor-shift stabilization means the recorded image may be offset from what you saw in the viewfinder.
