Pretty much every online photography forum contains bitter debates over what, how and when to enlarge images for print. Most modern software made for this purpose uses interpolation - essentially an educated guess - to build the new pixels an enlargement needs.
It works pretty well, but often at the expense of fine detail.
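To make that concrete, here is a toy sketch of what classic interpolation does: every new pixel is simply an average of its neighbours, not new information. This is a minimal NumPy illustration of bilinear-style doubling; real resamplers use bicubic or Lanczos kernels and handle image edges properly rather than wrapping around as this sketch does.

```python
import numpy as np

def upscale_2x_bilinear(img):
    """Double an image's size by averaging neighbouring pixels -
    the 'educated guess' that classic interpolation makes.
    (Edges wrap around here for brevity; real resamplers clamp.)"""
    h, w = img.shape
    out = np.zeros((2 * h, 2 * w))
    out[::2, ::2] = img                                         # keep original pixels
    out[::2, 1::2] = (img + np.roll(img, -1, axis=1)) / 2       # horizontal guesses
    out[1::2, :] = (out[::2, :] + np.roll(out[::2, :], -1, axis=0)) / 2  # vertical guesses
    return out

tiny = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
big = upscale_2x_bilinear(tiny)
print(big.shape)  # (4, 4)
```

Every "new" pixel is a blend of existing ones, which is why fine detail softens: interpolation can only redistribute the information already in the file.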
Topaz, the software company founded by Albert Yang ten years ago, has been consistently innovative in the field of image processing, and these days Topaz Labs' software is used by many photographers.
Recently, Topaz launched a new product, A.I. Gigapixel, and I have to say that in the few hours I have used it, I am very impressed.
A.I. Gigapixel is software designed to enlarge images, and it is especially valuable for photographers who adopted digital very early, because it is simply incredible at enlarging low-megapixel, low-resolution images. It is good at enlarging high-res images too!
I suspect it will make stunning enlargements of my medium-format film photographs - I can't wait to try it!
A.I. Gigapixel uses 'deep-learning super-resolution'. Normally, there is no way to create a high-resolution image from a low-resolution one because, at any significant enlargement, the detail simply is not available in the source image. A.I. Gigapixel goes about it in a different way: it uses Artificial Intelligence (A.I.).
A neural (self-teaching) network is exposed to millions of high-resolution and low-resolution image pairs. Over time, the network learns to synthesize 'plausible detail' in the enlarged image based on what it has previously seen, and so constructs the detail required to enlarge our images by up to 600%.
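The training idea can be sketched in a few lines. This toy replaces the deep network with a single linear map fitted by least squares - nothing like Gigapixel's actual model - but the principle is the same: generate low-res/high-res pairs, then learn the mapping from one to the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 5,000 high-res 4x4 patches paired with low-res
# 2x2 versions made by 2x2 average-pooling (simulating a
# low-resolution capture of the same scene).
hr_patches = rng.random((5000, 4, 4))
lr_patches = hr_patches.reshape(-1, 2, 2, 2, 2).mean(axis=(2, 4))

# "Network": a single linear map from the 4 LR pixels to the 16 HR
# pixels, fitted over all the pairs. A real super-resolution model is
# a deep convolutional network, but the training principle - learn
# the LR -> HR mapping from example pairs - is the same.
X = lr_patches.reshape(-1, 4)
Y = hr_patches.reshape(-1, 16)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Upscale an LR patch the model has never seen: it synthesizes
# 'plausible detail' from what it absorbed during training.
unseen_lr = rng.random((2, 2))
predicted_hr = (unseen_lr.reshape(1, 4) @ W).reshape(4, 4)
print(predicted_hr.shape)  # (4, 4)
```

The point of the sketch: unlike interpolation, the output pixels come from a model of what detail *usually* looks like, learned from examples, not just from averaging the pixels in the file.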
The software batch-processes images using your computer's graphics card (GPU). A run still takes a while even on a high-end machine, but if you are enlarging for a print, that is not really an issue.
I decided to try the software for a different reason - I had a thought... Is this kind of software going to make extreme-focal-length telephotos redundant?
I had a feeling that even if this generation of the software does not make the super-telephoto lens redundant, it could still make marginal images usable and improve the results from shorter (possibly wider-aperture) telephotos. I had to give it a try!
I pulled up a few marginal shots of a lanner falcon photographed on an overcast day in the Kgalagadi. These relatively high-ISO images were problematic: the dull light and the extreme speed of the lanner, coupled with a bright background, made capture difficult.
I had to find a balance between deep depth of field, a fast shutter speed and an ISO that could still bring out detail. At the same time, I had to use a shorter focal length because the birds were simply impossible to follow with longer glass.
Here's what the images look like without any enlargement:
You can see that the falcon is quite small in the frame; cropping in and framing it attractively yields an image that is usable on the web but that will not stand much enlargement for print.
Normally, I would be very conservative when enlarging and printing an image such as this. Natively, at 300 DPI it would produce a 7"x4" print. I would usually be reluctant to enlarge this beyond 14"x8", and I'd find a higher-resolution image to work with instead.
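The arithmetic behind these print sizes is just pixel dimensions divided by printing resolution. A quick sketch - the 2100x1200 px crop is my assumption here, chosen only to match the ~7"x4" figure, not the actual file's dimensions:

```python
# Print size in inches = pixel dimensions / printing resolution (DPI).
# 2100 x 1200 px is a hypothetical crop chosen to match the article's
# ~7"x4" figure; it is not the real file's size.
def print_size_inches(width_px, height_px, dpi=300):
    return width_px / dpi, height_px / dpi

print(print_size_inches(2100, 1200))          # (7.0, 4.0) - a 7"x4" print
print(print_size_inches(2 * 2100, 2 * 1200))  # doubling each side: (14.0, 8.0)
```

Doubling the pixel count on each side doubles the printable dimensions at the same DPI, which is exactly the 2x enlargement tested below.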
The normal process to enlarge this image would involve some kind of interpolation software - in my case, Adobe Photoshop CC.
I decided to try my normal method and compare it to A.I. Gigapixel at the same resolution - I doubled the size of the image in both pieces of software. The results are shown side-by-side below at 100%.
Photoshop is on the left and A.I. Gigapixel on the right.
This enlargement would yield a print of 14"x9" at 300 DPI, and 28"x18" at my self-imposed maximum size. Quite a leap from 7"x4"...
Here's another example, this time using a different enlargement algorithm in Photoshop. Again, Photoshop is on the left and A.I. Gigapixel is on the right. I note that A.I. Gigapixel has added a little contrast, and I may need to adjust my workflow to avoid this.
At 100% the difference is stark. I would like to spend more time comparing the results for different images and settings and this article is just a quick summary of my experiments so far.
But why and how would this affect our lens choices?
If I think about how I go about capturing this kind of shot, I think about buckets. Not literal buckets but buckets of shots.
Using a longer lens might land me a single great shot of a single great moment. For each such moment I get with a 600mm, I may get three with a 500mm and five with a 400mm.
The rest of the time, I may clip a wing or miss the action entirely because of the size, weight and narrow field of view of the longer focal lengths.
There is a compromise, of course! The 400mm puts fewer pixels on the subject, so it yields more shots but at significantly lower levels of detail - less of a problem for the web, but a BIG problem if I intend to print.
Does my logic start to make sense now? If we can recover or create this much detail in post-processing, then it may be possible to use a shorter, wider-aperture lens to capture more shots with adequate detail than we can with a 600mm lens.
Interesting times indeed.