
Sample Workflow

I have been using PixInsight since 2010.  On this page I present a typical workflow, using some of my past work as examples.

With a tool like this it will always be possible to learn more. Anyone wanting more detailed information should consult the following sources
In addition, I attended a tutorial in Sept 2014 taught by Vicent Peris in Katonah, NY.  Some of the ideas herein are directly from that workshop. Since the material was copyrighted, I have labeled the places (specifically the color calibration of narrowband images) where the only source was this workshop. Other valuable lessons have since appeared in other sources as well. Hopefully I will not have crossed any boundaries. If you have an opportunity to attend one of these workshops, it will be well worth it.

As I have learned more about PixInsight it has become clear that mask generation is the key to success.  There is a detailed discussion at the end of this page.



Generic Workflow


What is presented below should be treated as a generic set of steps to achieve a final image.  For a more detailed example from a recent (Nov 2015) project, see the linked write-up.

Raw data from the sky has noise, atmospheric distortion, light pollution, and shadows from dust bunnies.  The camera collects 65,536 levels of luminosity (mine does not collect color directly), but computer displays are only capable of displaying 256 levels.  The goal of processing is to remove as many of the factors obscuring the image as possible and then correctly select how the 65K original levels are mapped to the 256 display levels. The trick is reaching into the data to pull out the right 256 display levels for the interesting items in your image.
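For a sense of scale: if some faint nebulosity spans camera values of roughly 500 to 2,500 (illustrative numbers only), a straight division by 256 maps it to display levels of about 2 to 10, which is essentially black on screen.  A good stretch instead spreads those few thousand camera levels across most of the 256 display levels.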




Image Calibration

An image is collected as a number of individual exposures (sub-exposures) with a particular filter.  These are calibrated by removing the effects of reading the CCD (bias), noise due to thermal effects (dark current), and defects in the optical path such as dust bunnies and vignetting (flats).  In early 2012 PI introduced a set of scripts that automate the calibration process.  It is now very easy to take your dark and bias masters and use them to calibrate your light images using the flats.
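In rough terms, each sub-exposure is corrected as calibrated = (light - dark) / flat: the additive signal (bias plus dark current) is subtracted first, and the normalized master flat then divides out vignetting and dust shadows.  The PI scripts take care of the details such as dark scaling and flat normalization.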

While the PI script will also do registration and integration, I do not use this feature.  My images require many nights of data before they are complete.  I prefer to calibrate nightly or every couple of nights so I can get an idea of the quality of the data (and because I run several observing projects at the same time).

Since my camera now has several dead columns, I also include cosmetic correction in the calibration process.

When I have enough sub-exposures of sufficient quality, I then build the linear image. The calibrated sub-exposures are registered (since the telescope is moved slightly between frames to improve noise rejection) and then combined.  This process includes a number of steps which further reduce the noise in the data.
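As a rough rule of thumb, averaging N sub-exposures improves the signal-to-noise ratio by about the square root of N, so 16 subs give roughly a 4x improvement over a single frame.  The rejection steps trade a little of that gain for the removal of satellites, cosmic ray hits, and other outliers.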

Here are two examples.  Click on the images for a larger version; this will allow you to better see what the correction is doing.
[Images: raw sub-exposure and calibrated sub-exposure]





Raw Image

This is what a typical image looks like after calibration.  At this point it is referred to as a linear image. A linear image is just what the camera captured.  Since the intensity level in the image data is simply divided by 256 to obtain the display intensity, only the brightest stars are likely visible.  No worries, your data is there, hiding in the low values (along with the noise).

Screen Transfer Function

This is the same image again, but we have used a process called "stretching" to reassign the dimmer values to higher display values.  It does this in a way that is nonlinear; in other words, dimmer values are changed more than brighter values.

It is important to note that STF only stretches the displayed image.  The underlying data is still linear.
[Image: DBE demo]

Removing Local Background

Broadband filters not only record the sky, they also record all of the junk light.  This needs to be removed by a gradient tool; in PixInsight this is DynamicBackgroundExtraction.  The image at the left switches from a raw image, to the detected background, and then to the corrected image. In this image the gradient was not too bad, but you can see how much was there.  For filters that are very susceptible to light pollution, such as a clear filter, I find this needs to be applied to each sub-exposure.
 

At Katonah we learned that it is important to place each of the sample points manually.  I suggest using a size of something like 10-15 pixels and then manually checking to make sure that each sampled area is really background and that it contains as few stars as possible.

Even my ultra narrowband filters can be subject to gradients.  The O III filter is especially susceptible.




Combining several filters for False (or mostly real) Color

Most of the tutorials for PixInsight are for conventional LRGB images. That is, images with a Luminance (or Clear) filter that captures the visual intensity across the entire visible band, and R, G, and B filters that capture the color contributions to the Luminance.

I don't shoot LRGB due to the local light pollution.  Either the NB4Stars or conventional narrowband image is built using Channel Combination of the three images. Each filter is assigned a different color channel.  It is also possible to assign a filter to a non-primary color (e.g. aqua) by applying its value to both green and blue.  Using PixelMath one can compose the image any way you want.
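As a minimal PixelMath sketch (the identifiers Ha and OIII are placeholders for whatever your integrated masters are called), this maps H-alpha to red and O III to an aqua hue by writing it to both green and blue, using separate channel expressions rather than a single RGB/K expression:

    R/K: Ha
    G:   OIII
    B:   OIII

Create a new RGB image as the output rather than overwriting one of the masters.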

Neutralize the Background


Once the images are combined, it is likely that the background levels of the three channels are not the same.  Use the BackgroundNeutralization tool to fix this.  The image at this point is still going to look pretty ugly; the next two steps will improve that.

Mask Generation

Many PixInsight processes rely on masks.  Masks can be built from either the linear L copy saved above or by extracting the L from the working image.  I discuss mask generation in more detail below.



[Image: Color Calibration demo]

Color Calibration

Color Calibration of LRGB images is covered in detail by multiple sources.  Even though I never have an L, the technique that Vicent taught at Katonah still applies.

Create previews for the areas with the brightest nebulosity (or in the case of a galaxy the entire disk).  If there are multiple areas use the PreviewAggregator script to combine these into a single combined preview. 

The ColorCalibration tool then uses this as the white reference.  The background reference is the same preview used for the background in the previous step.  Turn off structure detection.

This method has several advantages over what I was doing before.  Calibrating with a G2V star does not make sense since I typically image over a large part of the sky.  Sampling at a single point will not give me the average performance of the camera over the range of elevations I actually used.

The projects we used as examples at Katonah and my own recent M16 project showed that this was both simple and effective.  For my narrowband work it prevents Hydrogen from turning the image green (as can be seen in the before image on the left).  It also corrects for my camera being significantly less sensitive to my sV filter than to my other filters.
[Image: Fix star colors demo]

Fix Star Colors

Narrowband images produce stars that have weird halos.  This is in part due to the star sizes appearing to be different through different filters.

At the Katonah workshop we were taught to fix this by applying a star mask (see below) and then using Curves to adjust the saturation so the stars were completely unsaturated.  In my work so far I have found this worked very well on some images.

The alternative is to extract the L channel from the working image, mask the color image, and then use PixelMath to apply the L to the unmasked areas.  This sets the unmasked areas to the intensity of L, but with no saturation (since the RGB values are then the same).
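A minimal PixelMath sketch of that overwrite, assuming the extracted luminance is an image named A_L and the star mask is named star_mask (placeholder names): run with no mask applied to the target, it blends the grayscale L into the star areas and leaves the rest of the color image untouched:

    $T*(1 - star_mask) + A_L*star_mask

Equivalently, apply star_mask to the color image and run the single expression A_L.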

In the example note particularly the bright star at the top of the frame.


Linear Noise Removal

Newer versions of the noise tools, particularly TGVDenoise, can be run on linear images.  If you do this, be careful: the defaults are intended for non-linear images and thus apply far too much correction to linear data.

Calculating the true image location

Pixi includes a script, "ImageSolver", that will analyze the image and write the plate solution into the FITS header.  This is important if you want to add annotations later.  In my experience this works better with linear images.

Extract the Linear L Channel


Extract the L channel (created internally after RGB Combination).  This may be useful in Mask Creation.

Non Linear Stretch

Now it is time to dig into the data and find our image.  The Histogram Transformation tool uses a histogram display to reassign the old intensity values to new values.

By this point in the process the image is 32-bit floating point, so there is plenty of resolution to work with.

HT will reassign lower intensity values to higher values, but it does so non-linearly.  Thus after HT the original low intensity values occupy more of the numeric range than the original high intensity values did.

A trick you can use at this point is to enable STF and then drag the New Instance triangle of the STF window onto the bottom bar of an HT window.  That will set the HT to the same stretch used by STF.  Be sure to hit PF12 to cancel STF on the image before applying the HT.
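For the curious, the nonlinear reassignment that both STF and HT perform is essentially a midtones transfer function.  A minimal PixelMath sketch of that curve, where m is the midtones balance (0.25 is only an illustrative value; anything below 0.5 brightens the shadows):

    Symbols:    m = 0.25
    Expression: ((m - 1)*$T)/((2*m - 1)*$T - m)

Dragging the STF instance onto HT simply transfers the shadows, midtones, and highlights settings of this mapping so that it is applied to the actual data.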

It is also possible to perform a Masked Stretch. This results in better star images, but cannot be used directly because it does not do enough for the nebulosity.  This can be fixed, but describing that process is out of scope for this summary. Consult Harry for an example.

The image is now non-linear.

HDR Transform

HDRMT is another tool that is used to remap intensities.  HDRMT decomposes the image into layers and finds image structures as a function of their characteristic scales.  Basically it forces the intensity levels to spread out and thus gives more contrast.  Unlike Curves, it does this based on an analysis of the image.

HDRMT will tend to dampen the bright areas of the image, but will improve the contrast between the dimmer details and bright stuff.  You can recover the brightness in the next step.
[Image: Color Saturation demo]

Adjust Curves

We learned at Katonah (later reinforced by Rogelio's walkthrough in IP4AP) that it is important at this point to readjust the curves and saturation after HDRMT (and the other steps below that flatten the image).  This will restore the contrast and make the colors brighter.  The CurvesTransformation function can separately manipulate the L and the saturation.

These steps will require masks, which are the subject of the next section.





Sharpening the Image

The MMT function is the principal tool for sharpening (although there are also similar tools that can be used). It decomposes the image into multiscale layers and allows you to sharpen and remove noise at the same time.  One sharpens by increasing the bias of some sizes of structures.  The data in this example was very clean due to the number of exposures used, but the sharpening and re-emphasis of important structure is dramatic.

This is an example of one of the MMT operations required

[Image: MMT example]

It is important that you use a mask when using MMT.  At Katonah we learned to protect the stars from sharpening to prevent stars with unnaturally sharp edges.

Noise Reduction

While MMT handles noise in the brighter areas, I have not been satisfied with how it handles the background.  The older ACDNR still rocks when it comes to making the background as low in noise as possible, but I have started using TGVDenoise for more recent projects.  The latter's controls are sensitive, but when you get them right the results are amazing.

I usually apply different noise reduction to the background than I do to the brighter objects.  Create different masks to accomplish this.




[Image: M17 in OSN]

Final Adjustments

Some last-minute adjustments are usually needed.  Some combination of these might be used:
  • More HT - to remap the image to remove the pedestal (a constant black level), which sharpens the contrast between light and dark.
  • LocalHistogramEqualization - emphasizes areas of sharply changing brightness.  A more powerful tool than Dark Structure Enhance.
  • Curves - can be used to further improve the contrast and color saturation.
  • Adding H-alpha data, although I typically add this during the linear portion of the processing.
  • Annotating the image with overlays for the NGC and other catalog objects visible.


Mask Generation

Pixi supplies two built-in tools, StarMask and RangeMask, that will handle many cases.

However, we learned at Katonah (and later in IP4AP) that these tools are not the best way to create star masks.  The improved method uses one of the wavelet routines (we used MultiscaleLinearTransform at Katonah) to extract the stars from the image.  IP4AP suggests aggressively applying HDRMT to a copy of the image first; this reduces some of the nebular features.  MLT can then select which wavelet layers to include.  Stars of a particular size can be made larger by using a bias greater than 1.  Residual nebular areas can be reduced with noise reduction.

At this point you have a preliminary mask.  Creating the final mask will require a series of steps that depend on what you want.  You can use tools such as HT, MorphologicalTransformation (MT), and Convolution, all of which appear in the examples below.

Brightness masks can be based on the L and then adjusted to give less masking in the part of the image you want to manipulate.  For example, if you are only interested in sharpening the brightest features, then apply a strong S curve that will saturate the bright areas (removing masking) but strongly reduce other areas (increasing the masking). Note that in the discussion below I actually use the preview masks generated by ACDNR, as I find these are better.

You may also want to combine masks using PixelMath (e.g. to combine an A_L-based mask with a star mask).  This is particularly useful for excluding the stars from the bright areas before sharpening.
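For example, a minimal PixelMath sketch that keeps the bright areas of a luminance-based mask while punching the stars out of it (bright_mask and star_mask are placeholder names for whatever masks you have built):

    max(bright_mask - star_mask, 0)

The subtraction removes the stars from the brightness mask, and the max() with 0 keeps the result from going negative regardless of the output options.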

Again, there is no magic formula.  You will have to experiment until you get what you want. IP4AP Part III gives a number of excellent examples.

Example of Mask Generation

Here are the actual masks I used in the SH171 project.  I found these particularly good, so it is a suitable example to capture the process.  Note that this process is ad hoc and in one case uses a failed development path as the starting point for the next one.

First the Luminance

I am basing most of my masks on a Luminance image extracted right after background neutralization.  At that point I have not done anything that is going to affect star size, and all of the channels are still true to their actual recorded data.  So before Color Calibration I extract a Luminance (A_L).  I then make a copy and perform a simple HT on it (A_nonlin_L).  These are the basis of what I use below.

Here are the masks I used for the final image

sm2

[Images: the sm2 star mask; sm2 inverted, showing the unmasked areas; the smallest component mask, showing the unmasked areas]
Getting a good star mask will be essential in the next operations.  This is especially true in narrowband, where I will be applying multipliers of as much as 5 or 10 to the Red and Blue channels.  A good star mask just covers the star as it appears in the linear image.  If it is smaller, the edges of the stars will not be protected, resulting in purple rings.  If it is too large, the surrounding area will be affected, resulting in grey rings.

I have found that it is best to build the star mask in stages.  The mask sm2 was the result of a Max of the following component masks (see the PixelMath sketch after the list).  Some of the component masks were also used directly:

  • smallest - MLT of A_nonlin_L with a bias of 1 and noise reduction on the first level
  • small - MLT of A_L and then HT
  • med_large - a StarMask on A_L with (0.01, 0.14, 1.0) clipping; LSG=1, SSG=1, GC=2, smooth=10
  • med2 - MLT of A_nonlin_L using 3.0 for layer 2
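A minimal PixelMath sketch of that final combination, assuming the four component masks above are open as images with those names:

    max(max(smallest, small), max(med_large, med2))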


bright5



[Images: bright5 mask; unmasked areas]

This mask is used on the nonlinear image and provides protection for the background and more protection for the stars than sm2.  I find ACDNR builds a better brightness mask than RangeMask does.  I started with a base of an ACDNR-based preview mask on an earlier version of the image processed through DSE.  I then inverted it so the previously masked areas are now exposed.  From this we need to delete the stars, for which the following are used:

  • sm3 - combines sm2 with large2.  large2 starts with a StarMask (LSG=2, SSG=0, smooth=18) of A_nonlin_L; it then gets a gentle MT to expand the stars, followed by a Convolution to round the edges.
  • large3 - starts with large2.  A further MT increases the sizes further.
  • med4 - starts with med2 and then applies MT and Convolution until the mask is the correct size.

I used PixelMath to set the areas permitted by the mask to 0. Since the star masks are not binary, some of the masked areas may not end up exactly 0, giving the rounded edges.
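A sketch of that star-removal step, using sm3 from the list above as the example star mask: either apply sm3 to the brightness mask and run PixelMath with the expression 0, or, with no mask applied, run

    $T*(1 - sm3)

Because the star mask is soft rather than binary, the star edges are only partially darkened, which is what produces the rounded transitions.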

shadows



[Images: shadows mask; unmasked areas]


This is an ACDNR-based preview mask of a partially processed image, as above.  The clipping is set up to aggressively protect the midtones and bright areas.  Stars are already included and do not need to be separately masked.

C_cmM

All of these examples are displayed Zoomed by 2



[Images: color mask; star halos before removal; after removal]

The final mask used was generated by the new ColorMask script.  This script creates a mask based on a range of colors.  Since narrowband images blow out the red and blue channels, imperfect masking of the stars will result in purple halos.  This tool will isolate purple (magenta) colors and create a mask for them.  This needs to be done carefully to avoid including nebula.  Fortunately, with the Hubble palette almost every place in the image contains an abundance of green, so magenta is usually bad.  That was the case in SH171.  Once the halos were masked, I reduced the saturation, deciding to leave the rings a little magenta rather than grey.
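For intuition only, a rough PixelMath approximation of what such a magenta selection amounts to (an illustrative sketch, not the ColorMask script's actual implementation): select pixels where both red and blue exceed green, and write the result to a new grayscale image:

    max(min($T[0], $T[2]) - $T[1], 0)

Here $T[0], $T[1], and $T[2] are the red, green, and blue channels of the target image.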

Copyrights for Photos


Creative Commons License

(c) 2014,2015 Robert J Hawley.  Some Rights Reserved. Except as noted, all work on this site
by Robert J. Hawley is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. This permits the non commercial use of the material on this site, either in whole or in part, in other works provided that I am credited for the work.

11/15/15