As we’ve previously discussed, the Gigamacro works by taking many (many) photos of an object, with slight offsets. All of those photos need to then be combined to give you a big, beautiful gigapixel image. That process is accomplished in two steps.
First, all of the images taken at different heights need to be combined into a single in-focus image per X-Y position. This is done with focus-stacking software, like Zerene Stacker or Helicon. After collapsing these “stacks,” all of the positions need to be stitched together into a single image.
On its surface, this might seem like a pretty simple task. After all, we’ve got a precisely aligned grid, with fixed camera settings. However, there are a number of factors that complicate this.
First off, nothing about this system is “perfect” in an absolute sense. Each lens has slightly different characteristics from side to side and top to bottom. No matter how hard we try, the flashes won’t be positioned in exactly the same place on each side, and likely won’t fire with exactly the same brightness. The object may move ever so slightly due to vibrations from the unit or the building. And, while very precise, the Gigamacro itself may not move exactly the same distance each time. Keep in mind that, at the scale we’re working at (even with a fairly wide lens), each pixel of the image represents less than a micron. If we were to blindly stitch a grid of images, even a misalignment as small as one micron would be noticeable.
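To put some rough numbers on that (purely illustrative figures, not the Gigamacro’s actual frame size or resolution):

```python
# Hypothetical numbers, just to show the scale involved.
frame_width_mm = 5.0      # physical width of the subject captured in one frame
frame_width_px = 6000     # horizontal resolution of the camera

microns_per_pixel = (frame_width_mm * 1000) / frame_width_px
print(round(microns_per_pixel, 2))  # ~0.83 microns per pixel, so a 1 micron
                                    # slip shifts features by over a full pixel
```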
To solve this, the Gigamacro utilizes commercial panorama stitching software – primarily Autopano Giga. Stitching software works by identifying similarities between images, and then calculating the warping and movement necessary to align those images. For those interested in the technical aspects of this process, we recommend reading Automatic Panoramic Image Stitching using Invariant Features by Matthew Brown and David Lowe. In addition to matching photos precisely, these tools can blend images so that lighting and color differences disappear, using techniques like optimal seam finding and multi-band blending.
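The commercial packages keep their internals closed, but the core feature-matching idea is easy to sketch with OpenCV (the filenames below are placeholders, and real stitchers do far more than this):

```python
import cv2
import numpy as np

# Two overlapping frames (placeholder filenames).
img1 = cv2.imread("tile_001.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("tile_002.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe distinctive keypoints in each image.
sift = cv2.SIFT_create()  # requires an OpenCV build that includes SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors, keeping only unambiguous matches (Lowe's ratio test).
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Estimate the warp (here a full homography) that maps image 2 onto image 1.
src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```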
While this type of software works well in most cases, there are some limitations. All off-the-shelf stitching software currently on the market is intended for traditional panoramas – a photographer stands in one place and rotates, taking many overlapping photos. This means the software assumes a single nodal point – the camera rotates, but never translates in space. The Gigamacro does the opposite – the camera never rotates, but instead translates along X and Y.
Because the software assumes camera rotation, it automatically applies various kinds of distortion to try to make the image look “right.” In this case, though, right is wrong. In addition, the software assumes we’re holding the camera by hand, and thus that the camera might wobble a bit. In reality, our camera isn’t rotating around the Z axis at all.
Typically, we fool the panorama software by telling it that we were using a very, very (impossibly) long zoom lens when taking the photos. This makes it think that each photo is an impossibly small slice of the panorama, and thus the distortion is imperceptible.
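You can see why this works with a quick field-of-view calculation (assuming a 36 mm wide sensor purely for illustration):

```python
import math

def horizontal_fov_degrees(focal_length_mm, sensor_width_mm=36.0):
    """Angular field of view of a rectilinear lens (36 mm sensor assumed)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(horizontal_fov_degrees(50))     # ~39.6 degrees: a normal lens
print(horizontal_fov_degrees(10000))  # ~0.2 degrees: the "impossible" lens, so
                                      # each frame is a nearly flat sliver and
                                      # the spherical projection barely bends it
```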
However, Dr. Griffin from our Department of Geography, Environment & Society presented us with some challenging wood core samples. These samples are very long and very narrow. Even at a relatively high level of zoom, they can fit within a single frame along the Y axis. Essentially, we end up with a single, long row of images.
This arrangement presented a challenge to the commercial stitching software. With a single row of images, any incorrect rotation applied to one image compounds in the next, increasing the error. In addition, the slight distortion the software applies while trying to correct what it thinks is spherical distortion means the images end up slightly curved. We were getting results with wild shapes, none of which corresponded to reality.
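To see how quickly that compounds, here’s a toy simulation (made-up numbers): each seam adds a tiny rotation error, and the row of frames slowly bends into an arc.

```python
import math

error_per_seam_deg = 0.05  # hypothetical rotation error introduced at each seam
step_px = 6000             # hypothetical horizontal step between frames, in pixels
n_images = 100

angle, x, y = 0.0, 0.0, 0.0
for _ in range(n_images - 1):
    angle += math.radians(error_per_seam_deg)  # the error accumulates down the row
    x += step_px * math.cos(angle)
    y += step_px * math.sin(angle)

print(round(y))  # the final frame drifts tens of thousands of pixels off-axis
```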
Through more fiddling, and with help from the Gigamacro team, we were able to establish a workflow that mostly solved the problem. By combining Autopano Giga with PTGui, another stitching tool, we were able to dial out the incorrect rotation values and get decently accurate-looking samples. However, the intention is to use these samples for very precise measurements, and we weren’t convinced that we had removed enough error from the system.
As mentioned earlier, the problem appears, on its face, to be relatively simple. That got us to thinking – could we develop a custom stitching solution for a reduced problem set like this?
The challenging part of this problem is determining the overlap between images. As noted, it’s not exactly the same from image to image, so some form of pattern recognition is necessary. Fortunately, the open source OpenCV project implements many of the common feature detection and matching algorithms. Even more fortunately, many other people have implemented traditional image stitching applications using OpenCV, providing a good reference. The result is LinearStitch, a simple Python image stitcher designed for a single horizontal row of images created with camera translation.
LinearStitch uses the SIFT algorithm to identify similarities between images, then uses those matched points to compute the X and Y translation that aligns the images as closely as possible without adding any distortion. You might notice we’re translating on both X and Y. We haven’t yet fully identified why we get a slight (2–3 micron) Y axis drift between some images; we’re working to track down the cause now.
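Conceptually, the per-seam matching step looks something like this (a simplified sketch of the approach, not LinearStitch’s actual code):

```python
import cv2
import numpy as np

def estimate_translation(img_left, img_right, ratio=0.75):
    """Estimate the (dx, dy) shift aligning img_right onto img_left,
    using SIFT matches and a robust median: no rotation, no warping."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    # Each match votes for a translation; the median throws out bad votes.
    offsets = np.array([np.subtract(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt)
                        for m in good])
    dx, dy = np.median(offsets, axis=0)
    return dx, dy
```

With those offsets in hand, compositing is just a matter of pasting each frame at its accumulated (dx, dy), which is why no curvature or other distortion creeps in.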
At this point, LinearStitch isn’t packaged up as a point-and-click install, but if you’re a bit familiar with Python and installing dependencies, you should be able to get it running. It uses Python 3 and OpenCV 3. Many thanks to David Olsen for all his assistance on this project.