Medium Format Lenses – Part II

Several months ago, I wrote a blog about using medium format lenses on a DSLR.  I wanted to continue this series with a little more information.  In my experience, one of the most misunderstood characteristics of using non-standard lenses with a different film/sensor size is crop factor.  Before going further, let’s get a little background on what crop factor is and why it’s relevant to this blog.

When you use a lens that is intended for a different sensor size or film format, the lens can overfill or underfill the sensor/film. When you use a full frame (35mm) lens on a DSLR with a crop sensor (APS-C, Micro Four Thirds, etc.), the lens projects an image that is larger than the sensor.  This overfills the sensor.  In other words, the smaller sensor sits in the center of a larger image at the film plane.  When this happens, the sensor sees only a portion of the image on the film plane.  This gives the appearance that the image is zoomed when compared to the same image on a full frame sensor.  The image below represents a full frame DSLR image (35mm format).  The inset rectangle represents the crop of an APS-C sized sensor.

It’s important to note that there is no magnification or change in focal length.  It is merely a consequence of the image size relative to the sensor/film size.  This is called crop factor.  The most common case is a full frame lens on an APS-C sensor camera, which gives a crop factor of 1.6.  This is simply the ratio of the larger image size to the smaller one.  Since different sensors have different aspect ratios (height to width), I prefer to use the diagonal for the crop factor calculation, as it accounts for both height and width.  A full frame sensor is nominally 36mm x 24mm, which is 43.3mm diagonally.  An APS-C sensor is 22.4mm x 15mm, or 27.0mm diagonally.  So 43.3mm / 27.0mm = 1.6X.

The exact same situation occurs when we use a medium format lens on a DSLR.  I’m currently using Pentax 67 lenses on my full spectrum Canon 5D Mark II.  There is a big difference in film/sensor size, so there is a significant crop factor.  The nominal frame of 6×7 film is 6 x 7cm, but the actual exposed area is about 56.0 x 72.0mm, depending on the camera format.  Using medium format lenses on a full frame DSLR gives a crop factor of over 2X.  Using them on an APS-C sensor gives a crop factor of over 3.4X.  In terms of what was discussed earlier, a 6×7 lens on a full frame DSLR overfills the sensor area by more than 2 times, and it overfills an APS-C sensor by 3.4 times.
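The diagonal crop-factor arithmetic above is easy to sketch in a few lines of Python (the sensor dimensions are the nominal figures quoted in this post; real sensors vary slightly by model):

```python
import math

def crop_factor(w_mm, h_mm, ref_w=36.0, ref_h=24.0):
    """Diagonal crop factor of a sensor relative to a reference format
    (full frame 36 x 24 mm by default): ratio of the two diagonals."""
    return math.hypot(ref_w, ref_h) / math.hypot(w_mm, h_mm)

# APS-C (~22.4 x 15 mm) relative to full frame: ~1.6X
print(round(crop_factor(22.4, 15.0), 1))

# Full frame relative to 6x7 medium format (~56 x 72 mm): ~2.1X
print(round(crop_factor(36.0, 24.0, ref_w=56.0, ref_h=72.0), 1))

# APS-C relative to 6x7 medium format: ~3.4X
print(round(crop_factor(22.4, 15.0, ref_w=56.0, ref_h=72.0), 1))
```

The same function covers both cases in the text: a full frame lens on APS-C, and a medium format lens on a full frame or APS-C DSLR.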

A 55mm lens is a 55mm lens.  It doesn’t matter whether the lens is used on a full frame camera or a medium format camera.  The focal length does not change; only the field of view changes.  But if you take a photo with a medium format camera and then use the same lens on a full frame DSLR, the DSLR photo will appear to be zoomed by 2X.  Remember, though, it’s just seeing less of the full image. To validate this, I did an experiment with several Canon L lenses (intended for full frame sensors).  These were compared with images shot through my medium format lenses, overlaid in Photoshop.  With these comparisons, it’s easy to see that the medium format lenses have the same field of view as the Canon lenses.

It turns out that crop factor yields one of the best characteristics of using a medium format lens on a DSLR.  Most lenses are sharpest in the center; it’s toward the edge of the frame that aberrations occur.  Using medium format lenses on a DSLR means the sensor sees the image through the best part of the lens: the center.

I love shooting IR with the medium format lenses.  They are hefty and manual focus, but are superb for IR.  They fit the way I shoot IR and provide superb results.  I hope this information helps you understand how crop factor affects the field of view, and that there is no magic happening when you use medium format lenses on a DSLR.  I also hope it encourages you to do some experimentation of your own.  Happy shooting!

 

Filed Under: Tutorials Tagged With: 5d Mark II, Canon, Eric Chesak, IR Infrared, medium format

Bracketed Exposures for IR photography

What are bracketed exposures? If you’re familiar with this term, you know how useful they can be. There are multiple uses for bracketed exposures, but they are especially helpful in IR photography. Shooting bracketed exposures is where the camera is set up to shoot the same scene, but at different exposures.

Nearly all stock cameras are meant to shoot in color. So when we get into the optics and start removing filters or adding other filters, the camera doesn’t work the same. The one area that really takes a beating is the metering. After a modification, the metering will still be fairly close, but shooting in IR or full spectrum will definitely change the way the camera’s metering system sees the world. I find that my full spectrum modified Canon 5D Mk II with a 740nm filter will usually meter ½ to 1 stop (usually denoted in EV) brighter than a normal scene. Most of my other modified cameras have been the same.

This shows a series of 3 bracketed exposures at -1, 0 and +1 EV.

When I shoot IR photos, I shoot bracketed exposures as a rule. I’ve had too many IR photos where I thought the metering was accurate, only to find that there are highlights in the scene that are blown out (the camera’s histogram is clipped on the right-hand side). Shooting bracketed exposures nearly always helps me recover these highlights, or even allows me to process a different shot from the + or – end of the bracket.

How do you begin doing this? Well, most cameras these days will allow the use of bracketed exposures. This is where the camera will shoot 3 or more exposures for each image. Depending on how you set it up, the camera will typically shoot a normal exposure, one underexposed and another that is overexposed. Some cameras will shoot additional over/under frames and also allow you to skew how the different exposures are spaced within the overall bracket.  I like to set my camera to shoot the bracketed exposures in high speed mode, so I can get the 3 images in rapid succession with a single shutter button press.

This is the menu option for setting bracketed exposures on a Canon 7D.  This one is set for -1, 0 and +1 EV.

My cameras (as do many) have several programmable settings where I can set f/stop, ISO, exposure mode, bracketed exposures, etc. So I have 2 custom settings that shoot only bracketed exposures. On my camera, C1 is set up for 1 stop over and under. The camera will record 1 normally metered frame, one frame that is one stop under and one frame that is one stop over. C2 is the same operation, except for 2 stops over/under. This makes it quick and easy for me to adapt the camera to different situations where 1 or 2 stops might be needed.
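The relationship between EV steps and shutter speed is simple doubling arithmetic: each +1 EV in a bracket doubles the exposure time, each -1 EV halves it. A small sketch (the function name and base exposure are just illustrative):

```python
def bracket_shutter_times(base_seconds, step_ev=1.0, frames=3):
    """Shutter speeds for a bracket centered on the metered exposure.
    Each EV step doubles (or halves) the exposure time."""
    half = frames // 2
    return [base_seconds * 2 ** (step_ev * i) for i in range(-half, half + 1)]

# A 1/125 s metered exposure bracketed at -1, 0, +1 EV:
print(bracket_shutter_times(1 / 125))   # 1/250 s, 1/125 s, 1/60 s (approx.)

# The same scene bracketed at -2, 0, +2 EV, as on my C2 setting:
print(bracket_shutter_times(1 / 125, step_ev=2.0))
```

In practice the camera does this for you; the point is just that a ±2 EV bracket spans a 16:1 range of exposure times.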

Many DSLRs have the ability to store custom settings.  This one is a Canon 5D MkII.

So why else would I shoot bracketed exposures? One great feature is HDR. If you’re shooting a scene that has both bright and dark elements, or the scene spans more dynamic range than a single shot can record, HDR or some other technique of exposure masking or blending is the way to handle it. It’s also very helpful to have multiple exposures when shooting on the shadow side of the Sun, or toward the Sun.  Many times you won’t see the need for HDR until after you return and are processing your images, and it’s too late to shoot an HDR sequence at that point. So by shooting bracketed exposures, you retain the ability to do HDR or exposure blending after the fact.
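To make the blending idea concrete, here is a deliberately naive sketch: weight each frame’s pixel by how close it sits to mid-gray, so blown or crushed pixels contribute little. This is not how HDR software actually works (real tools merge in linear space using the camera response curve); it’s just a toy illustration of exposure weighting, with made-up pixel values:

```python
def blend_exposures(exposures, mid=0.5):
    """Naive per-pixel blend of bracketed frames. `exposures` is a list
    of equal-length pixel rows with values in 0..1. Pixels near mid-gray
    get high weight; blown (1.0) or black (0.0) pixels get almost none."""
    blended = []
    for pixels in zip(*exposures):
        weights = [max(1e-6, 1 - abs(p - mid) / mid) for p in pixels]
        blended.append(sum(w * p for w, p in zip(weights, pixels)) / sum(weights))
    return blended

# Three bracketed "rows": underexposed, metered, overexposed.
# The second pixel is blown out (1.0) in the overexposed frame.
under, metered, over = [0.1, 0.4], [0.2, 0.8], [0.4, 1.0]
print(blend_exposures([under, metered, over]))
```

The blown 1.0 value in the overexposed frame is effectively ignored, which is exactly the highlight-recovery behavior described above.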

This is an HDR of the 3 images shown above.

Isn’t shooting bracketed exposures going to wear out my camera? Won’t it take more memory? Yep, on both counts. Your shutter is now clicking 3 or more times for each scene, and all of these shots have to be recorded on the memory card. Of the 7 modern DSLRs I’ve owned, I’ve only replaced the shutter on one camera (my 30D), and that was at about 3700 clicks, for sure an anomaly.  Most prosumer DSLRs are good for 100k–150k shutter clicks. I’ve never shot 100k shots on any of my cameras. But I’m not a professional photographer, and I venture to guess that most other casual shooters are the same. As for the memory consumption, memory cards are cheap.

Another example of a scene that benefited from having more than a single exposure

There is a little good news. If you focus and shoot your IR as I described in my last blog, focusing through live view, the mirror stays locked up, so the wear associated with the mirror flipping up and down is removed from this operation.  It also helps to use a tripod when shooting bracketed exposures, especially if you’re going to be using them for HDR. You can still align the images in post-processing, but it’s easier if the images begin with good alignment. I prefer to shoot all my IR with a tripod.

Scenes that are shot toward the Sun typically have a high dynamic range that benefit from having bracketed exposures

If you’re comfortable with shooting regular exposures with your IR photography, by all means proceed. I find that shooting bracketed exposures helps save many images that might have otherwise been unusable. Happy shooting.

Filed Under: Tutorials Tagged With: 5d Mark II, 5DII, Bracketed Exposures, Eric Chesak, full spectrum, HDR, Infrared, IR, Photography

Focusing a Full Spectrum Camera

If you’ve read any of my other blogs, you might know that I started IR photography as a spinoff of my astrophotography. Both of these types of photography have some similarities. First, most cameras need to be modified to shoot IR photos. For the exact same reason, you’ll need to modify your camera to shoot nebula-type astrophotography. This is needed because the internal UV/IR cut filter blocks both the IR light needed for IR photography and the H-alpha light needed for shooting nebulae (see my astrophotography series for more details).

When I first got started with astrophotography, I modified a Canon 300D (Digital Rebel) with a full spectrum modification. I figured it would be the most flexible. Six years later, I still feel that way. I like the full spectrum modification because I can shoot astro, or any flavor of IR, and adding an original white balance filter allows me to use the camera for regular color photography.

The biggest drawback of a full spectrum modified camera is the need for external filters. These block the light that would normally pass through the viewfinder. LifePixel calibrates their IR modified cameras for autofocus. But when shooting IR with a full spectrum mod, you lose the use of the viewfinder.

When shooting the 300D, I would compose, focus and prepare the shot with the filter removed. I’d then screw on the filter, set the lens to the higher f/numbers and shoot. It was sort of a crapshoot whether or not I’d get what I wanted. But it did work, and I shot many photos like this. One of my all-time favorites was shot with the 300D using this technique.

I was enjoying shooting IR and wanted a better way to compose and focus my images. So my second modified camera was a Canon 40D, also modified for full spectrum. It was one of the first DSLR’s that had a live-view option. I found that this was the key to effectively using a full spectrum camera. Since the camera is modified, it sees right through the externally mounted IR filter. So live-view works quite normally. I used this camera for several years before upgrading to a slightly higher resolution Canon 50D. This camera also had a better live-view LCD, which made focusing much easier. Then I finally bought and modified a full frame Canon 5D Mk II. All my cameras were modified with a full spectrum modification.

When you shoot IR with live view, you can see the scene just as the camera sees it. After all, it’s the main sensor producing the live-view image. I found that shooting with a green custom white balance gives the images in the live-view window a more appealing color, making it much easier to compose and focus. Having a custom white balance also makes the post-processing easier.

This is typical of what you’ll see on the camera’s LCD if you shoot without a custom white balance (CWB).

This is the same shot with a Green CWB frame and the camera set to use this frame for CWB.

The biggest problem for me was being able to see the LCD screen, while shooting in the bright daylight hours. I tried shading the camera with a black cloth draped over the camera. But this was pretty tedious and uncomfortable.  So I bought a Hoodman loupe and never looked back. This allows you to see the LCD very clearly. On many cameras you can also zoom live view, which will further improve your focusing with the loupe.

Keep in mind that using the LCD for composing and focusing will consume more power than viewfinder methods. So be sure to carry an extra battery or two. Alternatively, if you use a battery grip you’ll have longer sessions before a battery change is needed.  This comes at the expense of portability.

The camera & loupe can be a handful to manage if you’re doing hand-held shots.  So I resigned myself long ago to shooting with a tripod. I made a custom tripod which is a little more compact and works perfectly for my IR set-up.  But nearly any tripod will work, as long as it is stable.

Focusing an IR modified camera can be a challenge. So I thought it might be worth reviewing this topic again. With a little kit and a little practice, focusing becomes an afterthought, allowing you to concentrate on the other aspects of getting a great image. You don’t have to have a full spectrum modified camera to use this technique. But you should use this technique if you have a full spectrum modified camera. Practice, have fun and happy shooting.

Filed Under: Tutorials Tagged With: 40D, 50D, 5D, Astrophotography, Eric Chesak, full spectrum, H-alpha, hoodman, Infrared, loupe

If Your Eyes Could See… Part 2

In Part 1 of this series I presented a few color astro photos that represent what you’d see if your eyes were super sensitive. In Part 2, I’m presenting similar images, only these are presented in one color: the color of H-alpha. Hydrogen alpha is likely the most important emission line for imaging the night sky. In my astrophotography blog series, I discuss the importance of H-alpha and how to image these nebulae with a modified DSLR.

The Veil Nebula Complex

 

The Great Orion Nebula

The images in part 2 were all photographed in the H-alpha wavelength (656.28 nm). The exposures are long. The equipment is expensive. The tracking is critical. But the results are some of the most stunning images that I’ve ever photographed, all of which are invisible to the naked eye.

The California Nebula

All of these H-alpha images required a series of 30 minute long exposures. These are then stacked and processed to achieve the final result (again, see my astrophotography series). Like their color counterparts, the subjects of these images are so dim that they are invisible to the naked eye. This makes locating the subjects somewhat tricky.  The use of a computerized mount reduces the time needed to get the telescope pointed at the target. Then it’s just a matter of fine tuning and framing. The focus is set, the guide camera is calibrated, the filter wheel is rotated to the proper filter and the exposures begin. Thirty minutes later, I check the resulting image to see if I hit the target as intended. If so, the imaging continues until the object is too low in the sky to continue.

The Heart Nebula

 

The Jellyfish Nebula

Sometimes I’ll look up an uncommon object, point the telescope in the general area and shoot a test exposure. Many times this technique isn’t too fruitful, but once in a while a gem is recorded. This is the case with the image below. I scoured the web looking for similar images, to no avail. So this particular area, rarely photographed, is one of my favorite subjects.

B30 and Friends

Probably one of my all-time favorites is my mosaic of the Orion area.  This is an 8 frame, 60 megapixel mosaic that required many nights to shoot and many more nights to assemble and process.  Anyone who has processed very large images in Photoshop will sympathize with the amount of work required of the computer and its operator.  Each frame was individually processed.  When they were all complete, each one was registered in a special piece of software called RegiStar.  Then all 8 were imported into Photoshop, assembled, blended and processed. More than 40 hours of post-processing went into this image alone.

The Orion Complex Mosaic

Imaging deep sky targets is not for everyone. It can get complicated quickly, with steep learning curves on both the imaging and post-processing sides. Imaging with a DSLR can be a superb entry into this field. If your interests lie in photographing H-alpha, like the images here, the DSLR will need to be modified, and an H-alpha filter purchased. An astro-modification or a full spectrum modification can be performed to allow the proper H-alpha wavelengths to pass. My preference is the latter for the maximum throughput and flexibility.  It allows my DSLR to be used for astrophotography, IR photography or any other application I can dream up.

The Horsehead Nebula

I hope you’ve enjoyed this short series highlighting some of my favorite images. A modified DSLR is a great way to get started doing astrophotography.  If you’re interested in giving this a try, take a look at an H-alpha modified DSLR or a full spectrum version.

Filed Under: Inspiration Tagged With: Astrophotography, Eric Chesak, full spectrum, H-alpha, hydrogen alpha, monochrome

If Your Eyes Could See… – Part 1

For those of us who shoot IR photos, we already have a glimpse into what the world looks like illuminated in the invisible light of infrared. It has fascinated me that photos taken in this light can have such interest and depth. Similarly, I have seen things in the heavens that only those with the appropriate telescope and imaging equipment have seen. I say “seen”, but in reality our eyes are not sensitive enough to actually see these magnificent, hidden astroscapes.

In this series, I’ll be showing a few of my deep sky astrophotos.  These were all shot with my widefield imaging equipment. First covered will be the nebulae shot in “color”. The camera (a cooled, full frame CCD) is monochrome. So the color is assembled by shooting through a series of filters and combining the color images in Photoshop. There are a couple of RGB images that contain only red, green and blue light, and others shot through narrowband filters. You can also review my astrophotography series for more detail on some of this, including shooting with a DSLR.

M45 – The Pleiades

M45 is a beautiful open cluster that’s a little difficult to photograph.  It’s a reflection nebula, which means the visible dust is reflecting the light of nearby stars.  It needs to be imaged with RGB filters instead of narrowband filters, so it is much more affected by light pollution.  Even so, this image was shot from my backyard in a fairly heavily light polluted area.  There is much more dust and nebulosity to be seen here when imaged from darker skies.

Tulip Nebula

If your eyes were much more sensitive, the night sky would look very different. Most of these images represent a field of view of about 4 x 8 full Moons. So the features are large and would be prominent in the night sky. Imagine looking out your window and seeing the Tulip Nebula rising from the East.

A telescope’s main function is to gather light. This is one of the purposes for larger and larger telescopes. Resolution is also improved, but let’s just look at the light gathering ability. Compare the diameter of a telescope’s aperture with the pupil in your eye. This large aperture gathers many times more photons than your eye alone. The larger the diameter, the better the light gathering and the easier it is to see faint objects.
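The light-gathering comparison above is just an area ratio. A quick sanity check, assuming a dark-adapted pupil of about 7 mm (a common textbook figure, not from this post):

```python
def light_gain(aperture_mm, pupil_mm=7.0):
    """How many times more light an aperture gathers than the dark-adapted
    eye: collecting area scales with the square of the diameter."""
    return (aperture_mm / pupil_mm) ** 2

# A modest 100 mm (4-inch) telescope vs. the naked eye:
print(round(light_gain(100)))   # a few hundred times more photons
```

Doubling the aperture diameter quadruples the photon count, which is why faint nebulae that are invisible to the eye become reachable with even small telescopes.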

M42 – The Great Orion Nebula

With the exception of the Orion Nebula (shown above), most of the objects in the photos shown here are not visible to the naked eye. The additional light gathering ability of the telescope helps to increase the visibility.  Long exposures improve the image depth and visibility even more. This basically stacks more and more photons onto the film or CCD until the image is visible.  All of the images shown here contain at least several hours of integration time.  As an example, the California Nebula was photographed with 6 filters (RGB and 3 narrowband filters) over a period of 7 nights.  This resulted in a total integration time of 18 hours.  This may seem excessive, but image stacking significantly reduces the image noise.  Even images from a modified DSLR produce fantastic results.
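The noise benefit of stacking follows a simple rule: the signal adds linearly across frames while uncorrelated noise adds in quadrature, so the signal-to-noise ratio improves roughly as the square root of the number of frames. A minimal sketch of that rule (the frame counts are taken from the 18-hour example above):

```python
import math

def stack_snr_gain(n_frames):
    """SNR improvement from averaging n_frames exposures with
    uncorrelated noise: signal grows as N, noise as sqrt(N)."""
    return math.sqrt(n_frames)

# 18 hours of 30-minute subframes = 36 frames:
print(stack_snr_gain(36))   # 6x the SNR of a single 30-minute frame
```

This is why integration times that "seem excessive" pay off: going from 4 frames to 36 frames is a 3x SNR improvement, not a 9x one, so deep images demand many hours.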

RGB Barnard 30 & Sh2-264

The image above was shot only with RGB filters and exposures of 5 and 10 minutes.  The total integration time was 2.5 hours.  I wanted to point out the difference between this image and the one directly below, which also includes data from 3 additional narrowband filters: Hydrogen Alpha (H-alpha), Oxygen III and Sulphur II.  Each narrowband exposure was 30 minutes long.  Many were recorded over several nights, bringing the total exposure integration time to nearly 20 hours.  As you can see, with the longer exposures much more detail is visible.

HaRGB Barnard 30 + Sh2-264

Each of these images also requires a significant amount of processing time.  The individual monochrome image stacks needed to be processed.  Then the data from each filter needed to be color mapped, aligned and overlaid.  Some final processing is done and the image is complete.  At least, that’s the way it’s supposed to work.  I always found that I never seemed to actually finish any image.  I’d continually tweak and adjust until I was happy, each time thinking it was done.

IC2177 – The Seagull Nebula

My imaging telescope is considered widefield (530mm f/5).  It provides lower magnification in favor of wider views of the night sky.  Although only slightly magnified, these fields would still appear fairly large if you could see with super sensitivity.

SH2-129 – The Flying Bat Nebula

In the next part of this short series, we’re going to take a look at similar celestial views.  However, these images were recorded using only a single filter. I’ll share some of my all-time favorites in my favorite formats.  Stay tuned.

 

Aside:  Did you know that Life Pixel does camera modifications for astrophotography?  As I described in an earlier astrophotography blog, most stock cameras need to be modified to be able to see the all-important hydrogen-alpha emission.  This emission is deep red and is blocked by most stock camera UV/IR cut filters.  Replacing this filter with a modified version that passes the H-alpha emission is very important for the highest sensitivity and best results.  Alternatively, the camera can be modified for full spectrum use, with external filters added for astrophotography.  You can find details in the links below:

Full Spectrum Modifications

Hydrogen-Alpha Modification

Filed Under: Inspiration Tagged With: Astrophotography, Barnard Dark, Bat, Eric Chesak, full spectrum, H-alpha, Ha, HaRGB, IC2177, M42, M45, Modification, Narrowband, Nebula, NGC1499, Orion, Pleiades, RGB, Seagull, Sh2-129, SH2-264, Tulip

Diffraction and IR Photography

A few years after I began my photography adventure, I took a photography class that came free with my first SLR camera. I thought I knew a lot about photography.  After taking the class I thought I knew all there was to know about photography.  The funny thing is that all these years later I’m still learning, almost on a daily basis. One takeaway from that class was all the detail about depth of field and aperture. When I went to any action event, I’d shoot at high f/numbers so everything was in focus (this was before the days of autofocus).  Some years later, while working on my undergraduate degree, I became interested in optics. I was hired to work in an optics research lab (see my holography post).  It was then that I started learning about the wave nature of light and how diffraction occurs.  I mostly worked with lasers, so most of my diffraction experience was with monochromatic light (one color). Below is a shot of textbook linear laser diffraction through a small opening.  This is the effect on light as it passes through small openings.  The same thing occurs in photography, with all the colors diffracting by different amounts simultaneously.

My interest in photography began to mesh with what I had learned about optics. I had always wondered about white light diffraction, especially in camera optics. It turns out that diffraction can be a fairly significant issue with camera lenses. You can shoot everything at f/22 and have great depth of field and focus. But there’s a trade-off.  As the aperture size decreases, diffraction increases and becomes more visible. There is diffraction at all f/numbers, though it’s more pronounced at smaller apertures.

I like to understand where all my lenses are sharpest and where they are soft. So when I added another medium format lens to my fleet, I decided to do a little more testing. Interestingly, my 165mm f/4 LS medium format lens has a minimum aperture of f/32. At this aperture, diffraction makes the image quite soft.  It is so pronounced that it’s even visible in the live-view display.

I shoot IR almost exclusively with medium format lenses. Check out my blog topic on medium format lenses. They are huge,  but I really like using them. Here’s a comparison to a 50mm f/1.4 Minolta manual focus lens that I bought with my 35mm SLR. These are the lenses I included in my testing for this blog (except for the Minolta lens).

Left to right in the rear: Pentax 67 55mm f/4, Pentax 67 75mm f/4.5, Pentax 67 165mm f/4.  Front row: Minolta MD 50mm f/1.4

I found some level of diffraction in all three of the lenses I tested.  On this particular test subject (my neighbor’s palm tree), diffraction is much less evident in the 55mm and 75mm lenses, but very clear in the 165mm lens.  This is partially due to its minimum aperture of f/32. The animations below are all 740nm IR photos, shot with a custom white balance.  No other processing was done.

Pentax 67 55mm at  f/4, f/11 and f/22

Pentax 67 75mm at f/4.5, f/11 and f/22

Pentax 67 165mm at f/4, f/11 and f/32

Diffraction is inversely proportional to aperture diameter, so diffraction is less visible as the aperture diameter increases. Unfortunately, diffraction is also proportional to the wavelength of the light being shot.  This means more diffraction at longer IR wavelengths. Those of us shooting IR will likely have to use a slightly larger aperture than when shooting the same equipment in color. If you shoot IR around 740nm, diffraction will be about 35% worse than at the center of the visible spectrum (about 550nm), since 740/550 ≈ 1.35. That seems significant, but it doesn’t seem to affect my images much. However, you should keep it in mind if you’re planning to shoot IR at small aperture diameters (large f/numbers).
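A standard way to quantify this is the Airy disk diameter, d ≈ 2.44·λ·N, which is proportional to both wavelength and f/number. A small sketch comparing visible light to 740nm IR (the 2.44 factor is the textbook first-minimum approximation, not something from my tests):

```python
def airy_disk_um(wavelength_nm, f_number):
    """Approximate Airy disk diameter (to the first minimum) in microns:
    d = 2.44 * lambda * N, with lambda converted from nm to um."""
    return 2.44 * wavelength_nm * 1e-3 * f_number

# Green light vs. 740 nm IR at f/11:
print(round(airy_disk_um(550, 11), 1))   # ~14.8 um blur spot
print(round(airy_disk_um(740, 11), 1))   # ~19.9 um blur spot

# The IR penalty is just the wavelength ratio, regardless of f/number:
print(round(airy_disk_um(740, 11) / airy_disk_um(550, 11), 2))   # ~1.35
```

Comparing these spot sizes to a sensor’s pixel pitch (typically 4 to 8 um on modern DSLRs) also shows why the onset of visible diffraction comes earlier on crop sensors with their smaller pixels.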

You may have noticed something else that is evident when shooting at small apertures.  Any dust or streaks that happen to be on the lens, filter or sensor become much more evident at smaller apertures.  Take a look at the speck to the upper right of the bird in the 165mm image above.  The dust speck is invisible in the f/4 image, but very clear in the f/32 shot. Yet another reason to open up those apertures.

First test shot from my new 75mm Pentax 67 medium format lens (shot at f/11)

I find that most of my lenses are sharpest around f/11 when shooting at 740nm. So that’s where I mostly shoot. It matters little to me if the exposures are longer; I always shoot IR with a tripod so I can take full advantage of the lens sharpness. If you’re interested in some light diffraction theory, there’s an interesting article I found discussing diffraction and photography.  One last detail to consider: the onset of diffraction occurs earlier on crop sensor cameras than on full size sensors.  So if you’re shooting with a smaller sensor, you may need to open your aperture more than would be necessary on a larger sensor.  Lots to keep in mind.

So what does this all mean? Well, if you typically shoot at large f/numbers, with a crop sensor camera and/or shoot IR photos, you can probably increase your image sharpness by opening the aperture slightly. Do your own tests on the lenses that you use most frequently. It’s important to know where your lenses perform best whether you shoot color or IR. Most importantly get out and shoot.  It’s the only way to learn.

 

 

Filed Under: Tutorials Tagged With: aperture, Diffraction, Eric Chesak, f/number, Infrared, laser, medium format

A Different Kind of “Photography”

When I was a college student, I was working as a welder on a construction site. I was an Engineering student at that time. But science was always something that had my interest. Many folks, interested in science or not, have a fascination with lasers. I was no different. At that time semiconductor lasers (diode lasers) were rare and mostly infrared. But gas lasers were fairly common. I spent several paychecks on a small helium-neon laser from a company called Spectra Physics. It was something that I had always wanted and now I had one. It was cool and I did all sorts of experiments with it.

After I spent a lot of time shining it at things, I decided to try to put it to some real use. I wanted to try my hand at making holograms. Now, to call holography a sort of photography is technically wrong. There are no lenses forming the image as in regular cameras, and the recorded images are not the result of an image being focused on a film plane. Holography is an interference phenomenon: two mutually coherent beams of monochromatic light are recombined, and the resulting pattern of light and dark lines is recorded on film. Let me discuss this in a little more detail.

Interference is also the principle that gives soap bubbles their color, and it is what makes holography possible. Because of the wave nature of light, it has the ability to interfere. In areas where the peaks of the waves coincide, the effect is a bright spot. Where a trough meets a peak, there is a dark spot. See my blog on infrared haze reduction for a brief description of waves.

Holography works in this manner, but on a much more complicated scale. A laser beam is split into two beams. One beam is spread and used to illuminate the film plane (the reference beam). The other beam is spread and illuminates the subject (the object beam). The light scatters off the subject and reaches the film plane.  Rather than construct a graphic, I scanned part of a page of my old holography lab manual.  This is an example of a working set-up that I used frequently.

When the light scattered off the object interacts with the reference light at the film plane, the mutually coherent light interferes, creating light and dark interference patterns. The light is coherent because it's of the same wavelength and phase, and mutually coherent because the two beams are split from the same source. These complex interference patterns are what the film records, and they make up the hologram.
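For the mathematically inclined, the brightness at any point on the film follows the classic two-beam interference formula. Here's a minimal sketch in Python; the intensity values are illustrative, not measurements from any real set-up:

```python
import math

def interference_intensity(i1, i2, delta_phi):
    """Intensity of two mutually coherent beams recombined with a phase
    difference delta_phi (radians): I = I1 + I2 + 2*sqrt(I1*I2)*cos(delta_phi)."""
    return i1 + i2 + 2 * math.sqrt(i1 * i2) * math.cos(delta_phi)

# Two equal beams: peaks aligned gives a bright spot,
# a peak meeting a trough gives a dark spot.
bright = interference_intensity(1.0, 1.0, 0.0)       # constructive: 4x one beam
dark = interference_intensity(1.0, 1.0, math.pi)     # destructive: zero
```

The pattern of `bright` and `dark` regions across the film plane is exactly what gets recorded as the hologram.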

At that time, most holography was recorded on very high resolution silver halide film or glass plates. How the hologram was set up and processed determined how it could be replayed. Some holograms can be replayed with white light (like the ones on your credit cards), while others must be re-illuminated with the reference beam (reflection and transmission holograms, respectively).

This is a reflection hologram I recorded on a glass plate.  It’s being illuminated by sunlight.

The following are examples of transmission holograms being replayed with laser light. The photos can’t capture the essence of how stunningly realistic these look in person.

I uploaded a short video of the hologram above that shows the front view and also the area behind it.  It gives a good indication of the strange experience of seeing the hologram and expecting to see the subject behind the film plate.

A transmission hologram has a truly breathtaking degree of detail and realism. They are very difficult to photograph, as the laser speckle tends to give the images a grainy appearance. What is most amazing is that the camera must be focused on the reconstructed image of the subject behind the film plane, not on the film plane where the hologram is recorded. Even though the subject is not physically present, the hologram reconstructs the scene so precisely that the camera's optics behave as if the subject were still there.

This accuracy of reconstruction led to an entire industry of holographic stress analysis and non-destructive testing. The details are a subject for another time, but here are a couple of fun examples displaying the sensitivity of the process. The first is a soda can under stress from a loose-fitting rubber band. These bands correspond to a dimensional change of about 12 millionths of an inch.
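Where does that 12-millionths figure come from? In the simplest case (illumination and viewing along the surface normal), each fringe corresponds to roughly half a wavelength of surface movement. A quick back-of-the-envelope sketch with a helium-neon laser:

```python
# Rough estimate of the displacement per interference fringe, assuming the
# simplest geometry where each fringe is about half a wavelength of movement.
WAVELENGTH_NM = 632.8       # helium-neon laser line
NM_PER_INCH = 25.4e6        # 25.4 mm per inch, in nanometers

displacement_nm = WAVELENGTH_NM / 2          # ~316.4 nm per fringe
displacement_inch = displacement_nm / NM_PER_INCH
print(f"{displacement_inch * 1e6:.1f} millionths of an inch per fringe")
# ~12.5 millionths of an inch, in line with the figure quoted above
```

Real interferometric measurements depend on the exact illumination and viewing angles, so treat this as an order-of-magnitude sanity check rather than a formula for all set-ups.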

The next is an example of the expansion of the glass envelope of a light bulb.   The heat from the filament caused the glass to expand, which is shown in the series of topographic-type rings.

One of the trickier aspects of holography is making sure that the set-up is ultra-stable, interferometrically stable. As a photographer you might think that a tripod is stable; holography requires stability of an entirely different level. Remember the interference I mentioned? If that pattern is disturbed, the hologram is destroyed. Any movement of the arrangement, the optics, the mirrors or the subject itself (on the order of 12 millionths of an inch) will cause the interference pattern to lose contrast, ruining the visibility of the hologram.

I built a small concrete table with threaded fasteners embedded in it. This 300 lb table was placed on a stack of cinder blocks, on top of which partially deflated inner tubes were laid. This set-up absorbed any seismic vibrations, and the large mass helped damp any other vibrations. The optical holders for the lenses and mirrors were bolted to the top of the concrete for additional stability. Even air currents had to be minimized to prevent disruption of the interference pattern. There are even more details involving the laser and the split laser beams which also have to be kept in check. When I saw my first homemade hologram, I was stunned that it all worked. I was so interested in this technology that I attended a symposium on holography at New Mexico State University. I met a lot of great folks and learned more than I ever thought I needed to know.

One day while walking down the hall in the local physics department I ran into one of the sponsors of the symposium. We chatted for a while and I offered to show him my holography set-up. Interestingly, some weeks earlier a theft had occurred in the department's optics lab; all the lab's holography equipment was stolen. On the day of my show & tell (in my parents' garage) the professor who ran the optics lab also showed up to see my set-up. I'm sure he expected to find all the stolen equipment from the ransacked physics lab. But I had machined all the mounts and built the table myself. Unbeknownst to me, they were working on several holography-related projects, and I was immediately offered a job.

It's been a few years since I've done any holography, but I'm planning to give it another go sometime. Laser technology has advanced significantly and the process might be easier to tackle the next time around. If I never get back around to it, I'll always have these holograms to view and share.

Filed Under: Inspiration Tagged With: Eric Chesak, Film plate, Hologram, Holography, laser

Manual Panorama Assembly

The weather is perfect, the lighting is just right and you've just finished shooting a 3 frame panorama of an interesting scene. You're anxious to get home and assemble the images. So you grab a cup of coffee, load up your images and proceed with the panorama assembly. Then reality hits: you didn't shoot the panorama in manual mode, and the auto-assembled image looks unusable. What can be done to save this panorama? There are probably many programs out there that do a better job of assembling panoramas than Photoshop. But I use Photoshop and have saved many panoramas using the method I'm about to describe.

I made a similar mistake on a panorama while in Norway. Could I re-shoot it? Maybe. But many compositions are once in a lifetime shots that can never be duplicated. I processed my images and was disappointed with the result. Below is how Photoshop's auto panorama routine handled my files of unequal exposure. I'll show how to get a better result than this using a manual technique. Through my astrophotography, I learned how to do manual panorama assemblies, which can sometimes salvage shots that can't be assembled properly by Photoshop.

Let's get started. Fire up Photoshop and load your pano images into separate layers. You can do this manually by opening each image and doing a Select All and then a Copy/Paste, but I like to use the script Load Files Into Stack.

When you’re done you should end up with something looking like this, with your pano images in separate layers (lower RHS).

The images then need to be aligned for the panorama. This too can be done manually, but using Auto-Align Layers is usually the easiest and fastest option, and it generally does a very good job. Highlight all the layers in the pano and click Edit, then Auto-Align Layers. I usually just choose the Auto option and let the computer do its work. On my antique mobile workstation, with my 5DII images and Photoshop CS5, this takes a while. When it's done, you'll probably end up with something that looks worse than the auto-assembled version. But be patient.

Having the image in layers like this gives us considerable flexibility to manually blend them into a usable image. You might first make some curves or levels adjustments to try to match the brightness of the various layers. It won't be perfect, but the closer the better. You can also rearrange the layers, changing which layer is on top. This helps to find the best overlap to aid in the manual blending process.

The magic occurs when the various layers are masked to manually blend the image. The secret is to take the layer that covers the majority of the scene and manipulate its mask so that it blends with the various elements of the image, letting the layers below show through. Add a layer mask to this layer and then invert the mask (so it's black). Choose a medium sized paint brush and paint the mask in white to reveal the areas of the image below that you want to see. I start by masking the layer that has the features that most need hiding or blending. It takes a little practice to see what needs to be hidden, what needs to be revealed and which layer is best on top. But try several arrangements and choose the best result. Below is the result of my layer swap.
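Under the hood, a layer mask is just a per-pixel blend between the layer and whatever sits below it. Here's a conceptual sketch in plain Python (the pixel values are made up for illustration, and real images have a channel per color):

```python
def blend_with_mask(top, bottom, mask):
    """Conceptual version of a Photoshop layer mask: where the mask is
    white (1.0) the top layer shows; where it is black (0.0) the layer
    below shows through.  Arguments are lists of pixel values in 0..1."""
    return [t * m + b * (1.0 - m) for t, b, m in zip(top, bottom, mask)]

# Two-pixel example: the first pixel keeps the top layer,
# the second reveals the layer below.
print(blend_with_mask([0.8, 0.8], [0.2, 0.2], [1.0, 0.0]))  # [0.8, 0.2]
```

This is also why a soft-edged brush blends so smoothly: intermediate gray mask values produce a weighted mix of the two layers rather than a hard cut.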

Here’s the same image that’s partially masked using a small paint brush with soft edges.

Continue to paint the various features to hide and reveal the areas of the image that provide the best blending. I use a smaller brush with soft edges, following a jagged path in areas of finer detail. I also like to paint small features so that they lie entirely in one frame or the other. On this image, the crane hook is a perfect example: it straddled the seam between frames. Whenever possible these important features should lie in a single frame, so try to blend the image accordingly. After a little work and experimentation, the image should begin to come together. Here's what part of my mask looks like.

Depending on your image, you'll need to duplicate this process on several layers to encompass the entire scene. Sometimes I'll also do a little blurring of the mask to help blend the masking even more. It's not always needed but can sometimes be helpful. If you mask too far and hit the edge of the image below, you can step backwards or paint over the area in black to hide it again. If you're not familiar with masking in Photoshop, I'd encourage you to do a little research. There are many masking techniques that can help with manually blending a panorama.

When you’re happy with the image, you can flatten the layers and proceed with the rest of your processing needs. The image I used in this tutorial is a custom white balanced IR image. I generally convert these to B&W, add a little contrast, touch-up and complete the image. Here’s the final result of this panorama.

If your panoramas have a larger exposure difference between frames, you'll need to do more work on the front end of the process. When I forget to shoot my panos in manual mode, the resulting exposure difference is usually pretty small and this process works well. This is certainly not a catch-all process, but I can typically generate better results with my manual method than Photoshop does with automated assembly.

I've also used this manual blending technique on a very large 8 frame panorama of the Orion Nebula (over 63MP). The automated results were nowhere close to what I wanted, so I had to assemble and blend the image manually. The next time you have some panorama images that you thought might not be usable, try a manual panorama assembly and see if you can recover something usable. Happy shooting (and processing).


Filed Under: Tutorials Tagged With: blending, Eric Chesak, Infrared, Layers, Mask, Panorama, photoshop, unequal exposure

Short Sticks

Do you use a tripod? I do for almost every shot. The way I shoot IR necessitates the use of a tripod. Could I hand-hold my IR shots? Probably. But I love to shoot panoramas and in poor weather. I'm also a bit of a purist when it comes to getting the most out of the camera and lens, so I always use a tripod. This isn't just another tripod tutorial; you can find enough of those on the web already. What I wanted to do here is share my struggles with finding the right tripod and how I solved the problem.

For those of you who follow my blog posts, you're probably aware that I started my IR venture as a spin-off of my astrophotography. The stability requirements for long exposure astro photos are much higher than those for conventional photography. I realize it's not an apples-to-apples comparison, as equipment for astrophotography is typically much heavier. But I guess my need for absolute stability rubbed off. When I started looking for a tripod for my IR photography, it should go without saying that my first requirement was a high degree of stability.

I shoot IR with a full frame DSLR and heavy lenses. I love the medium format lenses. Did I mention that they are heavy? Add any filters, lens converters, a battery grip and an L-bracket and you're into a fairly robust system. Even so, many of the lightest tripods will hold the weight of a camera and a lens in ideal conditions. I find, however, that the load ratings of some units are a little misleading. Sure, these ultralight sticks may hold the load. But I hate having to fiddle with the tripod and wait for everything to flex back to a stable position after making a ball head adjustment. So I knew I wanted a tripod that could carry a decent load without flexing.

I also like to shoot down low.  The desert has some interesting foreground elements.  So I also wanted my tripod to be able to work well at low levels.  Almost any tripod these days can splay the legs and get low.  But this is where the strength of lighter tripods can be compromised.  Splaying the legs can reduce the load capability and place additional demands on the materials.  Adding to the list, I also wanted something that was fairly compact, for traveling.  So with my laundry list in hand, I set off looking for the tripod that would fit my requirements.  I never did find exactly what I wanted.

So after some thought, I decided I'd experiment by adapting a stock tripod to fit my needs. I'm fairly handy when it comes to machining, metalworking and the like, so I felt confident that I'd be able to make the necessary modifications. I dug around the web until I found a possible candidate, an aluminum-legged Benro A3580F. It met a couple of the initial requirements: it is stable, has a good load rating (about 6x my camera load) and was also fairly inexpensive.

I did a little reconnaissance to see if I was going to be able to work some magic on this tripod.   It turned out to be a fairly simple design and I decided to proceed.  I began by removing and disassembling the legs.

Once I had the legs completely apart, I proceeded to cut 5 inches off each segment of each leg. Cutting each leg segment by the same amount kept the symmetry of the design. I then cleaned up the cuts and reassembled everything. Voila, a shortened tripod. There were some other details not mentioned here, but in general it was a fairly simple modification.

Here’s the finished product.  It’s substantially shorter than stock.  But it still maintains the same (or probably higher) load rating, especially when the legs are splayed.

Here's a comparison with one of my larger tripods. I still use the larger tripod for portraits and when I need additional reach. But for nearly all my IR photography, I take the little custom Benro.

Here's an action shot with the shortie and my 5DII with a Pentax medium format 55mm lens. The set-up is really rigid and requires a lot less fiddling when setting up a shot. There is almost none of the flex that appears with lighter tripods. It's hard to describe the satisfaction, unless you've experienced such rigidity.

This thing is like a little tank. I call it my "tankpod". It's short, really stout and can travel almost anywhere. When I need more height, I can extend the leg segments and get up to about 45 inches (to the base where the ball head mounts). I usually use it with the segments retracted or with only the first segment extended. When folded up, it travels well and fits in a carry-on bag. It's not a Gitzo or RRS, so the build quality is not the same, but it's still quite adequate for me.

The last time I presented my shortened tripod to a group of photographers I got a lot of eyebrow raises and eye rolls. It might not be the thing for you. But this modification turned out a tripod that suits my needs perfectly, and it now goes with me everywhere I shoot. So if you're handy and need something specific in a tripod, think about what modifications might make it what you need. Happy shooting.

Filed Under: Gear Tagged With: Eric Chesak, Infrared, medium format, Rigid, Short, shortened tripod, tripod

Medium Format Lenses on a DSLR

It's no surprise that photography equipment is expensive. Think of the engineering and workmanship that goes into lenses and DSLR bodies. As with everything, there are levels of expense. Let's examine lenses specifically. Professional-class lenses tend to have fewer issues, yet trade-offs always exist with either professional or consumer glass. So we spend our time and money looking for the best equipment for our hobby or profession. Sometimes the limitation is money and sometimes it's just the lack of that ideal piece that we need. For those willing to sacrifice some modern features, there are options to get top-quality glass at a fraction of the cost of similar modern equipment.

Here's a Gerbera Daisy, shot with a medium format 55mm lens

As a photographer, I'm always looking for an edge, especially with my infrared images. As an engineer, I'm digging into the technical details of my photos and equipment, always looking for ways to improve my set-up. Many IR photographers have their cameras converted to dedicated IR use (with a fixed internal IR filter over the sensor). These converted IR cameras can be calibrated to autofocus in IR, so they shoot just like a stock camera and the viewfinder can still be used for framing. However, for folks like me who use an external filter on a full-spectrum camera, this is not an option. Longer-wavelength filters block visible light, leaving the optical viewfinder dark. So I always use the camera in live-view mode and use the LCD and a loupe to focus my lenses manually. Since the same sensor that records the photo also produces the live-view image, framing and focus are easily managed. I always manually focus my IR images using this LCD technique. It gives me confidence that my images are, at least, focused.

In addition to photography, I have spent many years doing serious astrophotography. Several years back I read about astrophotographers coupling medium format lenses to cooled CCD cameras, and I wondered why these lenses were so popular with them. Beyond the obvious point of getting a wider view of the sky, a quick examination of the physical layout made it clear. Medium format lenses are used primarily because of their longer flange-to-focal-plane distance. On many astro CCD set-ups, filters, guiders and other equipment lie between the telescope and the camera. As an example, look at my wide-field (530mm) astrophotography set-up below. See all the equipment between the camera (the box on the LHS) and the telescope? If you remove the telescope and replace it with a lens, that lens needs a fair distance between where it mounts and where it focuses.


Below is an example of how a dedicated medium format astro set-up is constructed. A custom-machined lens mount and a motorized focusing bracket hold the medium format lens for coupling to the CCD. This assembly is then attached to an equatorial mount, in place of or in addition to a telescope.


(Image courtesy of Craig & Tammy Temple)

All the astrophotography work had me wondering how medium format lenses would work on a DSLR. They work very well, and some significant advantages exist. The largest advantage is cost. These lenses are seen as obsolete, so many of the older manual focus medium format lenses are available quite reasonably. With a little research and patience, some excellent deals can be had. I use the Pentax 67 format, but many others should work equally well.

Another advantage has to do with the film size of medium format cameras.  Pentax 67 is a later version of the Pentax 6×7 format. This format came from the film size of 6 x 7cm. That’s a whopping 60 x 70mm. A standard DSLR full frame image format is 24 x 36mm. Medium format lenses will overfill a full frame sensor by a substantial amount. Why is that a big deal? Well, think about your lenses. Most of the bad things that happen to images occur on the fringes of the frame (chromatic aberration, coma, vignetting, etc). With a medium format lens, you’re shooting through the sweet spot.  I was pleasantly surprised at the image quality of the vintage medium format lenses that I tried. I have some decent professional DSLR glass.  But any of the medium format lenses that I’ve purchased give equal or better results. Here’s an example, a 2 frame panorama shot with a medium format 55mm prime lens.


This is a heavy snowfall in the desert of Far West Texas. In the distance are the Franklin Mountains that run through El Paso, Texas
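That "sweet spot" advantage is easy to put numbers on. A lens must project an image circle at least as large as the diagonal of the frame it was designed for, so comparing diagonals gives a rough sense of the overfill. This sketch uses the nominal frame sizes quoted above; actual image circles vary by lens design:

```python
import math

def diagonal_mm(width, height):
    """Diagonal of a film/sensor format; a lens must cover at least
    an image circle of this diameter to fill the frame."""
    return math.hypot(width, height)

full_frame = diagonal_mm(36, 24)   # ~43.3 mm for a 24x36mm sensor
pentax_67 = diagonal_mm(70, 60)    # ~92.2 mm for a nominal 6x7cm frame
print(f"coverage ratio: {pentax_67 / full_frame:.1f}x")  # ~2.1x
```

So a full frame sensor sits comfortably inside the central portion of a 6x7 lens's image circle, well away from the edges where aberrations and vignetting are worst.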

Using a medium format lens has another advantage. If you understand how tilt-shift lenses work, you may see where I'm going. An OEM tilt/shift lens overfills the sensor, which allows the lens to be shifted while the image still falls on the sensor. With a normal lens, the image will shift off the sensor or become heavily vignetted. For the price of one shifting adapter, you can shift any of your medium format lenses. If you read my blog last month, you'll know that I prefer to shoot panoramas with a shift lens. For a fraction of what I paid for the spectacular Canon EF 24mm TS-E Tilt/Shift lens (also manual focus, by the way), I purchased an adapter and a small fleet of medium format prime lenses.


This is a 3 frame panorama of northern Norway, shot with my Canon 5DII, a 55mm Pentax 67 lens and a shift adapter (see below).


This is a Canon 5D Mk II coupled to a Pentax 67 55mm lens using a shift adapter.

A test shot using my 150mm Pentax 6×7 lens (my oldest medium format lens)

It’s not all roses, though. Most medium format lenses are huge. I mean huge, and they are heavy. They are generally much larger than their 35mm format counterparts.  However, the size is what provides that impressive image circle.

A monster of a 55mm lens and probably one of the sharpest lenses I've ever used, in any format.

A disadvantage (for some folks) is that the older medium format lenses are all manual focus. So you have to be comfortable shooting manual focus on an IR camera (and understand manual camera operation if you plan to do panos). If you have an IR shooting style like mine, using a manual focus lens is inconsequential; I manually focus my AF lenses for IR anyway. You'll also need to purchase an adapter (either fixed or shift) to interface with a DSLR, and finding the proper adapter for your camera and preferred lens brand may be difficult. You also have to do your research to get the lenses that have the best image quality. Just like modern AF lenses, some were lemons and some were stars. I only pick medium format lenses that have the best reviews for image quality and sharpness. Finally, since many of these lenses are older, you have to look out for dust, grease and fungus on the optics. I usually try to purchase the latest model of a particular focal length. So do your research, ask questions and shop carefully.

If you can get through all the details and decide to try shooting with medium format lenses, you’ll definitely have some seriously nice glass. You also get them for a fraction of what a similar modern lens might cost.  Hopefully, my experience will shine a light on the pros and cons of using medium format lenses on a DSLR.  If you’re up for a little challenge,  give a medium format lens a try.  You won’t be disappointed.

Filed Under: Gear Tagged With: Eric Chesak, full frame, medium format, Panorama

Infrared Haze Reduction

Did you ever wonder why IR landscape photos look so crispy sharp? It may not be obvious. But photographing in the near-infrared part of the spectrum has some definite benefits over photographing visible light, especially for landscape photography.


Before we get into the photography portion, let’s take a look at some of the science involved.  You might have noticed that infrared light has some ability to penetrate the haze in the air. Why is that?  Haze is caused by light scattering off particles in the air. By shooting our photos in IR (longer wavelengths) we can take advantage of some science to reduce the haze that is apparent in our photos.

To help understand the scattering mechanisms, it's important to understand what I mean by wavelength. Sure, it's related to the color. But why? Light is an electromagnetic wave. All waves can be measured by frequency (like broadcast radio waves) or by wavelength (like light), and frequency is inversely related to wavelength. The color of light depends on the wavelength, the physical length of the light wave. If you could see the waves, you could measure this distance to obtain the wavelength. But the wavelength of visible light is very small. It's measured in nanometers (billionths of a meter), so you'd need a pretty small ruler. Around the visible spectrum, the longer wavelengths are associated with orange, red and infrared light, and the shorter wavelengths with blue, purple and ultraviolet light.
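The inverse relationship between wavelength and frequency is just frequency = speed of light / wavelength. As a quick sanity check, here's that arithmetic for a 740nm IR filter cutoff (the filter value is used only as an example):

```python
# Frequency of light from its wavelength: f = c / wavelength.
C = 299_792_458            # speed of light in a vacuum, m/s
wavelength_m = 740e-9      # 740 nm, a common IR filter cutoff

frequency_hz = C / wavelength_m
print(f"{frequency_hz / 1e12:.0f} THz")  # ~405 THz
```

Doubling the wavelength halves the frequency, which is all "inversely related" means here.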


Visible light is only a small part of the entire electromagnetic (EM) spectrum. You can see where X-rays and gamma rays or microwaves and radio waves lie in the EM spectrum.

Now that the science is out of the way, let's dive into the photography. What is it that makes haze apparent in photographs? It's scattering. But what's causing the scattering and why? When light hits particles, it's scattered. Blow a little smoke in the air and shine a flashlight on it: what you're seeing is light being scattered by the smoke particles. But how light is scattered depends highly on the size of the particles doing the scattering.

  • Non-selective scattering is a mechanism that occurs with larger particles (much larger than the wavelength of the light being scattered). This occurs mainly with larger water droplets, ice crystals and similarly sized atmospheric particles. This scattering occurs equally for all wavelengths. So shooting in IR doesn’t provide any benefit over traditional color photography.
  • Mie scattering occurs with atmospheric particles that are approximately the same size as the wavelength being scattered. These particles are typically spherical in nature and are characterized by dust, pollen and water droplets. Although there is some wavelength dependence, typically all colors are scattered roughly equally. As an example, clouds appear white because the water droplets scatter all colors equally.
  • Rayleigh scattering is where the magic happens for IR photographers. This scattering occurs mainly on the molecular level, when the particles are much smaller than the wavelength of light being scattered. In the atmosphere this is primarily caused by oxygen and nitrogen molecules. These molecules absorb the light and re-emit it in a random direction, thus scattering the light. However, the amount of Rayleigh scattering that occurs is inversely proportional to the 4th-power of the wavelength. Knowing this, it is easy to see that infrared light (~800nm) is scattered 1/16 as much as blue light (~400nm).
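That inverse-4th-power relationship is simple enough to check with a couple of lines of code. The wavelengths below are the same round numbers used above:

```python
def rayleigh_ratio(lambda_a_nm, lambda_b_nm):
    """Relative amount of Rayleigh scattering at wavelength a compared
    to wavelength b.  Scattering goes as 1 / wavelength^4."""
    return (lambda_b_nm / lambda_a_nm) ** 4

# Infrared (~800 nm) vs blue (~400 nm):
print(rayleigh_ratio(800, 400))  # 0.0625, i.e. 1/16 as much scattering
```

Doubling the wavelength cuts Rayleigh scattering by a factor of 16, which is exactly why near-IR cuts through blue atmospheric haze so effectively.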

At ground level, all three scattering mechanisms can contribute to haze. As a result, the haze-penetrating benefits of IR photography are not as strong there, though the effects are still quite evident. Take a look at the photos below. The top photo was shot with my cell phone and the bottom with a full spectrum camera and an IR filter.

A scene photographed with a color camera, showcasing the haze on the distant mountains.

The same scene photographed at 740nm. Notice the reduction in haze and improved detail.

I’ve seen the benefit of shooting IR landscapes for many years.  However, it is quite shocking to see the difference when shooting from an airplane.  There is less dust at higher altitudes.  This means that the majority of the scattering is done by Rayleigh scattering.  As we already learned, this scattering is highly wavelength dependent.  So the difference between visible and IR photos is much more dramatic.


The photograph above is one I shot while flying over central Hungary. It was the first time I'd shot any IR from an airplane. The ground was heavily obscured by haze, and after I processed the IR image, I was surprised by the clarity and the appearance of the mountains on the horizon. It's a perfect example of the haze-cutting power of IR photography. This effect is what makes infrared aerial photography such a powerful tool for scientists and others requiring clear images of the ground. Here are a couple of my color vs. IR comparison photos.

Visible light photo west of Austin, Texas

Same photo in 740nm IR

Visible light photo of West Texas, including Guadalupe Peak (upper RH side) and the Salt Flats.

Same photo in 740nm IR. White Sands National Monument is visible in the upper LH portion of the image, more than 100 miles away.

Hopefully, you pulled something useful out of this blog.  But at the very least, I hope you see how IR photography can be used to reduce the haze in photographs. The advantage can be striking and significantly improve the clarity of IR images. I’ve done landscape photography for many years.  However, shooting in IR has allowed me to see landscapes in a totally different light (pun intended).

Filed Under: Tutorials Tagged With: aerial photography, black & white, Eric Chesak, haze reduction, Infrared, landscape photography, Rayleigh scattering, wavelength

Shooting Infrared Panoramas

Panoramas are the ideal tool for capturing scenes with expansive views or for increasing the field of view of a lens. They are also a lot of fun to shoot. However, with the excitement of shooting a panorama comes the frustrating reality of assembling the images into a single frame. Without proper shooting technique, this assembly process can be hit or miss.


Anyone that has tried to assemble panoramas in Photoshop (or a similar image processing program) is aware that some aspects of the individual frames of the panorama don’t always match. This is caused by parallax errors of near and distant objects. With these images there is usually some compromise to the assembly. So some of the image parts will match and other parts may not. These parallax errors are most problematic where the scene has close foreground objects as well as distant objects in the background. So what can be done about this?

Well, let's first look at the problem: parallax. You can easily see the effects of parallax by setting up a couple of objects on a counter top, in line with the camera. Place one object closer than the other, aligned so that the closer object hides the one further from the lens. Now pan the camera and notice the effect. You see this because the camera is not being rotated around the optical node of the lens.
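To get a feel for how big the error can be, here's a rough small-angle estimate of the misalignment between a near and a far object when the camera pans about a point offset from the lens's entrance pupil. All the numbers are illustrative, not measurements of any particular lens:

```python
import math

# Rough small-angle sketch of parallax error when panning about a point
# offset from the entrance pupil.  Illustrative values only.
e_mm = 50.0                  # rotation axis ~50 mm behind the entrance pupil
pan_rad = math.radians(30)   # pan angle between panorama frames
near_mm = 1_000.0            # foreground object 1 m away
far_mm = 100_000.0           # background object 100 m away

pupil_travel = e_mm * pan_rad                       # how far the pupil translates
misalign_rad = pupil_travel * (1 / near_mm - 1 / far_mm)
print(f"{math.degrees(misalign_rad):.1f} degrees of relative shift")
# roughly 1.5 degrees -- easily enough to break stitching
```

The key takeaway: the error is driven almost entirely by the near object (the 1/near term), which is why scenes with close foreground elements are the ones that fall apart in auto-assembly.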

So how do we fix this problem? A nodal slide will allow you to rotate the camera around a predetermined point. The nodal slide lets you offset the camera so that the axis of rotation passes through the optical node (also known as the no-parallax point or entrance pupil) of the camera lens. Not many manufacturers publish this information, but it can be determined experimentally. The nodal point differs from lens to lens and also at different zoom settings on the same lens. To properly use a nodal slide you'll need a tripod and a head with a panning base (or a separate pan head). Below is an example of a nodal slide set up with my 5DII. Although I machined this one, they are available in many places at very reasonable prices. Note the blue tape with the locations of the nodal point for various lenses. My preference is to use a set-up with an L-bracket attached to my camera. However, nodal slide set-ups can be built in many different ways, even on both axes (pan and tilt) for monster panoramas.

nodal-766
Another superb option for panoramas is to use the shift feature of a tilt-shift lens. In my opinion, this produces the best panoramas with the least trouble during assembly in Photoshop.  The shift feature slides the image across the film plane without moving the camera itself, so parallax is imperceptible. I usually start by composing the scene, keeping in mind how far the shift will extend the field of view.  I then shift the lens to one side and begin shooting and shifting, taking care not to move the camera or tripod.  One major downside is that a tilt-shift lens can be a pricey solution for panoramas. Another drawback is that it’s typically only possible to shoot up to 3 frames on a full frame camera, and maybe 4 on an APS-C camera, whereas a nodal slide offers the potential for full 360 degree panoramas, if you ever have that need.

shift-pano-766
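As a rough sketch of why shift panoramas top out at a few frames: shifting slides the image circle across the film plane, so the total coverage is the sensor width plus twice the maximum shift. The numbers below are illustrative assumptions, using a full frame sensor and the ±12mm shift typical of lenses like the 24mm TS-E.

```python
import math

def shift_pano_fov_deg(focal_mm, sensor_mm=36.0, max_shift_mm=12.0):
    """Horizontal field of view covered by shifting the lens fully left
    and right: sensor width plus twice the maximum shift."""
    half_width_mm = sensor_mm / 2.0 + max_shift_mm
    return math.degrees(2.0 * math.atan(half_width_mm / focal_mm))

# Unshifted 24mm on full frame vs the fully shifted panorama
print(round(shift_pano_fov_deg(24, max_shift_mm=0), 1))  # ~73.7 degrees
print(round(shift_pano_fov_deg(24), 1))                  # ~102.7 degrees
```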

The shot below is a 3 frame panorama shot on a full spectrum modified 50D with a 740nm filter and my 24mm TS-E lens.  The front of the truck is close to the camera while the buildings in the background are much further away, so simply panning the camera would have created large parallax errors and made assembling this image nearly impossible.  But using the shift feature of the lens, the images stitched together without any trouble or compromises.

pano3-766
Now, all this being said, there are some situations where panoramas can be shot without any equipment, even handheld.  But these are mainly distant scenic shots where parallax causes very few problems. Regardless of what method you use, it’s best to overlap the images; I usually shoot with at least 1/4 frame overlap. It’s also generally best to have the camera in portrait orientation.  This will provide the best set of images for final processing.
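If you want to estimate how many shots a sweep will take, a simple way to think about it is that each new frame advances by (1 − overlap) of a frame. A small sketch (the 53 degree figure is just an assumed per-frame field of view):

```python
import math

def frames_needed(pano_deg, frame_fov_deg, overlap=0.25):
    """Shots required to cover pano_deg when each new frame advances by
    (1 - overlap) of the per-frame field of view."""
    step = frame_fov_deg * (1.0 - overlap)
    return math.ceil((pano_deg - frame_fov_deg) / step) + 1

# A 180 degree sweep with an assumed 53 degree per-frame FOV and 1/4 overlap
print(frames_needed(180, 53))  # 5 frames
```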

A tilted horizon can be fairly distracting in a large landscape panorama, so for these I always try to ensure that the nodal slide and panning or tripod head are level. Most tripods and nodal slides have a bubble level, but for times when I don’t have a level and can’t see the horizon, I use a $5 hot shoe cube level on the camera.  This helps make sure the horizon is not tilted in the final assembled panorama; correcting a tilted horizon on a large panorama requires cropping away a large portion of the image.

Probably the most important point when shooting panoramas is to shoot in manual mode. In an auto mode, the camera will typically detect a difference in exposure from shot to shot and adjust accordingly, but manual mode ensures the same exposure for all the shots. The final images will need considerably less work to assemble if there are no exposure variations.

I hope this brief overview has removed some of the mystery and has inspired you to get out and shoot panorama images.  I really enjoy shooting panoramas and hope you’ll give it a try. Practice makes perfect.  So get out and shoot!

Filed Under: Tutorials Tagged With: 5d Mk II, 5DII, Eric Chesak, Infrared, nodal slide, Panorama, tilt shift

Astrophotography Image Stacking – Astro Stacking

Hopefully you’ve been out shooting and applying what you’ve learned about astrophotography. For most people there’s a fairly big learning curve. I was always pretty good with the computer, electronics, and the mechanical hardware, but learning to process the images was a huge challenge. Hopefully I can share what I’ve learned to help speed up your learning process.

CR-399-+-Garradd-flat-766

There’s a lot to learn when it comes to taking the images from the camera to making a final image for display. You’ll find that 99% of the deep sky images that you shoot will require some form of post-processing. But before we even discuss doing any processing, let’s discuss how to best shoot the scene.

In the previous blogs, I’ve hinted at a technique that will let you get the most out of your astro images. Shooting very faint moving targets can be pretty challenging. It takes fairly decent equipment to capture the really faint stuff, but beyond this, it’s important to properly photograph the subjects. There is one valuable technique that will help tremendously with processing and make the most of your data. This technique is stacking.

Let’s take a look at stacking in very basic terms. Shooting faint targets makes for generally noisy images. This is true for astrophotography as well as regular photography: the photos look grainy and lack silky smooth transitions. In astrophotos, noise disturbs the transition from the target object to the dark regions. But if you shoot many photos of the same subject and stack them together, the result is far better than any single frame. The noise and graininess are averaged out, and the image appears much smoother and more complete. When I was going for the best quality images, I would generally shoot between 10 and 20 hours of open shutter time. But again, that was for my very best deep sky images on professional level equipment, which meant shooting over many nights and stacking all the data in the final image. I was shooting exposures that were ½ hour long, so I needed fewer frames. The end result was a lot of data that, when assembled, made for very good data sets.

If you’re just starting out, it’s not necessary to shoot this much, but generally the more you shoot the better; the difference can be seen immediately in the final image. There is a point of diminishing returns, but most astrophotographers will never come close to that limit. If you can start by shooting a couple of hours you’ll end up with fairly decent data, but even stacking 10 images will be better than one single frame. The better the data, the easier it is to process into the final image.
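The "more is better" effect is easy to demonstrate: averaging N frames of random noise cuts the noise by roughly the square root of N. A toy numpy simulation (made-up signal and noise levels, not real camera data):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0  # made-up "true" sky brightness

# 20 noisy exposures of the same 64x64 patch
frames = signal + rng.normal(0, 10, size=(20, 64, 64))

single_noise = frames[0].std()             # noise in one frame (~10)
stacked_noise = frames.mean(axis=0).std()  # noise after averaging 20 frames

# The ratio lands near sqrt(20) ~ 4.5: stacking 20 frames cuts the
# random noise by more than a factor of four
print(round(single_noise / stacked_noise, 1))
```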

How do we begin?  Once you have your mount aligned (see my previous blogs), the target framed, and the lens or telescope focused, you can start shooting. Shoot the same subject over and over. I generally use a computer or an intervalometer to take the work out of this, which lets me walk away and let the camera shoot until it’s done. Just be aware that you may need several batteries or an AC adapter for your camera, especially in the cold. For your first outing, try to shoot at least an hour of open shutter time; if you’re shooting 5 minute exposures, that means 12 of them. It’s generally best to use an exposure as long as possible, but not so long that the image becomes saturated with light fog or you begin to get star trails. I generally tried to expose until I reached about 25-75% on the camera’s histogram, though this depends on the target and on where I’m shooting (and how much light pollution is present). Just keep in mind that 1 hour is not a magical number. Shoot more if you have the time and patience; this will make the post-processing after the stack easier and the final image even smoother.

Once you have the stack, what’s next? You need to process all these images into a single image. This is possible in Photoshop and there are some really great videos and information on the topic. So I’ll leave this learning process to those interested in doing the stacking in this manner.

The real benefit is doing the stacking in a program that is meant for processing astrophotos. There are many programs that are available to do this, some are even available for free. I used a program called MaximDL which is a high-end piece of professional astrophotography processing software. In addition to doing some processing, it also handles camera control, filter wheel control, focusing, guiding and many other aspects of shooting deep sky images. In a complex setup, it’s very beneficial to have control of everything in a single piece of software. However for those just starting out, look at getting Deep Sky Stacker (DSS). It is an excellent stacking program and is available at no cost. This allows you to practice shooting and processing images without investing a lot of additional money in software.

Be sure to take a look at the excellent instructions on the DSS website and online. It is fairly powerful and capable of producing nice images. It also allows the addition of calibration frames (discussed below), another very powerful feature for noise control. I generally found that I liked doing the stacking in DSS and then doing the remainder of the processing in Photoshop or a similar image processing program, but that’s totally my preference. Each photographer should investigate the best workflow and combination of programs for producing the final image.



One really great feature of DSS is the comet stacking routine. Processing comets is even more complicated because the comet is typically in a different location in each frame. Some move slowly enough that you don’t have to worry about it, but others can move a significant amount between frames, which typically takes some crafty processing to get a decent image. DSS takes a lot of the work out of it. This image was processed in DSS and Photoshop.

CR-399-+-Garradd-flat-766

Coathanger Asterism (CR399) and Comet Garradd

When beginning the stacking process, the images first need to be quality sorted and then aligned (or registered). The quality sorting can be done automatically in DSS, but I generally liked poking through the images myself and picking out the ones that were blurred from movement or contained clouds or planes. The registration or alignment step shifts and rotates the images to bring all the frames into perfect alignment, and then stacks them together using one of several stacking methods. I generally prefer one of the median stacking methods.
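Here’s a toy numpy illustration of why I like the median methods: a bright trail that appears in only one frame leaks into a mean stack but is rejected outright by a median stack. The numbers are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
# 11 registered frames of the same (toy) star field patch
frames = 50 + rng.normal(0, 5, size=(11, 32, 32))

# A bright plane or satellite trail crosses exactly one frame
frames[3, 10, :] = 4000

mean_stack = frames.mean(axis=0)
median_stack = np.median(frames, axis=0)

print(mean_stack[10, 16] > 300)             # True: the trail leaks into the mean
print(abs(median_stack[10, 16] - 50) < 10)  # True: the median rejects it
```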

Many of my astrophotos, including the comet photo above, were shot with professional level equipment that cost about half what my first house cost. To be fair, I wanted to show what can be done with a DSLR and lens (or small telescope), so I re-processed some of my earliest images in DSS, knowing what I know now. These were shot with an astro-modified, 6.3MP Canon 300D (Digital Rebel), one of the earliest DSLRs. It was noisy and did not generally produce very clean astro images. But even with this old camera, the data was very usable and produced some fairly decent images.  We’ll take a look at a few of these below:

_MG_0758
My first modified DSLR for astrophotography

 

Stacking Examples

Here are some examples of images right out of the camera alongside processed versions. The first shows the Heart & Soul nebulae (IC1805, IC1871) as well as the Double Cluster (NGC 869 and NGC 884). The top is straight out of the camera; the next is after stacking and processing.

Heart+soul-single-766
Unprocessed, right out of the camera

Heart+soul stack-complete-766
Stacked and post processed

The difference in these is drastic.  In fairness, the single frame image was fogged by heavy light pollution.  But this is a problem that will plague the majority of astrophotographers.  The only way to combat this is to shoot from dark sites away from the city lights.

This next example is not as drastic. The top is out of the camera, the bottom is stacked and processed. Also included are crops of a single frame and stacked and processed images.

Rosette-CRW_1778x766

Rosette-crop-Single-Frame-Cropx766
Single frame and crop of the Rosette Nebula (NGC 2237)

Notice the missing details in the crop of this image.

Rosette-Processedx766

Rosette-crop-processed-Cropx766
Stacked/post-processed image and crop of the Rosette Nebula

The stacked image is much cleaner, and much of the missing data has been filled in.  Also note the better detail visible in the crop of the Rosette.  This is the real benefit of the stacking method.  One thing to keep in mind with processing astrophotos is that it’s an incremental process. No single step is going to make a magical image from junk; each step adds a tiny improvement, and with enough tiny steps you’ll end up with a very pleasing image. If you’re stacking many photos, most stacking software will take quite a while if your computer isn’t up to the task (like mine). So be patient and just let it run until it’s completed the registration and stacking processes.

Here’s another example of a single frame vs a stack.  This one is of the Horsehead Nebula (B33) in Orion.

B33-Single-framex766

B33-Single-frame-Cropx766
Single frame and crop

B-33-DSS-Stackx766

B-33-DSS-Stack-cropx766
Stacked/post-processed image and crop

It’s fairly easy to see the benefit of stacking when shooting astrophotos.  One more advanced technique that will help reduce the noise in your stacks is called dithering. Basically, this is moving the camera a couple of pixels in a random direction after every frame. When using a median stacking method, anything that lands in a different location in each frame is eliminated. So using the stars as the alignment reference, the galaxies, nebulae and other subjects remain in the same place, while hot pixels, satellites, planes, noise and other random effects end up in a different location with respect to the stars and are rejected in the stack. Many guiding and tracking programs will dither automatically, but even with a manual shutter release it helps tremendously to move the mount slightly between exposures. It seems like a hassle, but dithering adds a fairly significant improvement. None of the images above (except the comet image) used dithering.
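A small numpy sketch of the dithering idea (toy frames, with a fake star and a fake hot pixel): because the hot pixel is fixed on the sensor, aligning the frames on the stars scatters it to a different position in each frame, and the median throws it away.

```python
import numpy as np

rng = np.random.default_rng(2)
n, size = 9, 32
shifts = [(i % 3 - 1, i // 3 - 1) for i in range(n)]  # dither offsets, -1..1 px

frames = []
for dy, dx in shifts:
    f = 20 + rng.normal(0, 3, size=(size, size))  # sky background
    f[10 + dy, 10 + dx] += 200                    # a star: moves with the dither
    f[5, 5] += 500                                # hot pixel: fixed on the sensor
    frames.append(f)

# Register on the star field by undoing each frame's dither offset;
# this scatters the hot pixel to a different spot in each aligned frame
aligned = np.stack([np.roll(f, (-dy, -dx), axis=(0, 1))
                    for f, (dy, dx) in zip(frames, shifts)])
stack = np.median(aligned, axis=0)

print(stack[10, 10] > 150)  # True: the star survives the median
print(stack[5, 5] < 50)     # True: the hot pixel is rejected
```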

Another helpful addition is calibration frames, which serve to remove additional noise and other artifacts from the images. Dark frames help remove hot pixels, bias frames reduce read noise, and flat frames clear up dust spots and other specks caused by looking through the lens or telescope. There is a superb description of this in the FAQ section here. Newer cameras tend to provide better noise and hot pixel control, so calibration might not always be needed. But at the very least, flat frames should be used to remove artifacts caused by a dirty lens or sensor; they will also reduce any vignetting in the images. Remember: incremental improvements.
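For the curious, the standard calibration arithmetic is: subtract the dark frame from the light frame, then divide by the bias-subtracted, normalized flat. A toy sketch with synthetic frames (a made-up 20% vignette, not real sensor data) shows the falloff disappearing:

```python
import numpy as np

def calibrate(light, dark, flat, bias):
    """Subtract the dark frame (which includes bias), then divide by the
    normalized, bias-subtracted flat to remove vignetting and dust shadows."""
    flat_corr = flat.astype(float) - bias
    return (light.astype(float) - dark) / (flat_corr / flat_corr.mean())

# Synthetic frames: a 20% edge falloff vignette over a uniform sky
vignette = np.fromfunction(lambda y, x: 1 - 0.2 * ((x - 16) / 16) ** 2,
                           (32, 32))
bias = np.full((32, 32), 10.0)
dark = bias + 2.0                  # bias plus a little thermal signal
flat = 1000 * vignette + bias      # a bright, evenly lit target through the lens
light = 100 * vignette + dark      # the actual exposure, vignetted

cal = calibrate(light, dark, flat, bias)
print(round(float(cal.std()), 3))  # 0.0 -- the vignetting is gone
```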

In the final installment of this Astrophotography series, we’ll discuss some of the details of going from a rough stacked image to the final image. This is where a lot of the magic happens so I hope you’ll stay tuned. In the meantime get out and shoot. See you soon.

Filed Under: Tutorials Tagged With: Astro modified, Astrophotography, Canon, Cluster, DSLR, Eric Chesak, full spectrum, Horse Head, Lifepixel, Nebula, Rosette
