Blob detection and measurement
It is often useful to detect objects within images so that they can be measured. We use the term "blob" to mean a contiguous group of pixels that has been detected in an image; the term avoids confusion with the more generic "objects" of Java. In astrophotographs the blobs would typically be stars.
Detecting blobs proceeds in two stages in GRIP (an end-to-end sketch follows this list):
- First categorise every pixel in the image as either being of interest or not. The general term for this is "image segmentation". It results in a binary mask, a single bit for each pixel, which we hold as a byte array. The fact that a byte array has more than 1 bit per pixel actually helps the processing of the next step, which temporarily puts numbers greater than 1 into the mask at certain pixels.
- Then scan the mask, tracing the boundaries of contiguous groups of pixels, optionally ignoring isolated single pixels, which could be due to noise in some kinds of images. This produces a list of boundary points for each blob, and further processing also provides a list of horizontal line segments comprising the whole area of the blob.
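As a preview, the two stages can be chained like this. This is only a sketch: it assumes image is a net.grelf.grip.ImageInt you have already loaded, and it uses the star-segmentation and blob-detection methods described in detail below, with the default tuning values quoted later in this page.

// Stage 1: segmentation; here GRIP's star segmenter produces the mask.
byte [][] mask = net.grelf.grip.StarSegmenter.segment (image, 38, 16);

// Stage 2: trace contiguous groups of 1s in the mask into Blob objects.
net.grelf.grip.BlobMask blobMask = new net.grelf.grip.BlobMask ();
blobMask.setMask (mask);
java.util.List <net.grelf.grip.Blob> blobs =
    blobMask.detectBlobs (false); // false: ignore single-pixel blobs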
Image segmentation
Thresholding
In some simple kinds of image it is possible to select blobs on the basis of their brightness and/or colour. This might be possible in some astrophotographs if the background were uniformly dark (after flat-field division, perhaps) and there were no nebulosity in the frame, so GRIP does provide this capability. The technique is called "thresholding" and it requires setting a range of grey levels in each channel of the image, between which pixels are deemed to be of interest, all others being background.
Class net.grelf.grip.Threshold holds a public array of net.grelf.grip.RangeInt objects, one per channel, each of which comprises a pair of public values, low and high. The values may either be set directly as numbers or through interaction with the user. For this interaction two classes are provided: net.grelf.grip.ImThreshDialogue for monochrome (1-channel) images and a similar ImThreshRGBDialogue for 3-channel images (assumed to be red, green, blue because that is a common case). The dialogues present a slider for each channel and a preview of a (user-draggable) part of the image.
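Setting the values directly in code might look like the following sketch. Be aware that the name of the public RangeInt array (here assumed to be range) and the form of the Threshold constructor are our assumptions; only the low and high fields are documented above.

// Hypothetical direct set-up of a 3-channel Threshold. The field name
// "range" and a constructor taking a channel count are assumptions.
net.grelf.grip.Threshold thresh = new net.grelf.grip.Threshold (3);

for (int band = 0; band < 3; band++)
{
    thresh.range [band].low = 100;  // low and high are the documented
    thresh.range [band].high = 255; // public values of RangeInt
}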
For the interactive route, a snippet of code to set the thresholds of an RGB image might look as follows. This is in fact a method in net.grelf.grip.ImProcess.
/** Display dialogue for user to set thresholds (RGB or monochrome),
 * then threshold the image and return the Threshold set. */
public static net.grelf.grip.Threshold threshold (
        net.grelf.grip.ImFrame imf)
{
    net.grelf.grip.Im im = imf.getImPane ().getIm ();
    int nBands = im.getNBands ();

    switch (nBands)
    {
        case 1:
            net.grelf.grip.Threshold thresh =
                net.grelf.grip.ImThreshDialogue.askThreshold (imf);
            if (null != thresh) // null if user cancelled
            {
                threshold (imf, thresh); // Another ImProcess method
            }
            return thresh;

        case 3:
            net.grelf.grip.Threshold threshRGB =
                net.grelf.grip.ImThreshRGBDialogue.askThreshold (imf);
            if (null != threshRGB)
            {
                threshold (imf, threshRGB);
            }
            return threshRGB;

        default:
            net.grelf.Util.message ("Sorry",
                "Only 1-channel (monochrome) or\n" +
                "3-channel (RGB) images can be\n" +
                "thresholded interactively.");
    }

    return null;
} // threshold
If the user does not cancel the dialogue, the method threshold (imf, thresh) then sets the mask which is part of the GlassPane object contained in the ImFrame.
You can work at a lower level, without even displaying an ImFrame, because ImProcess also has the method
public static byte [][] threshold (java.awt.image.BufferedImage bim,
                                   net.grelf.grip.Threshold thresh)
which returns the mask array to you directly. That is for images of 16 bits or less. Corresponding methods exist for deeper images because the interface net.grelf.grip.Image also declares
byte [][] threshold (Threshold thresh);
Note this general feature of the GRIP API: shallow images, that can be held as java.awt.image.BufferedImage, have static processing methods in ImProcess but there are always corresponding instance methods in implementations of net.grelf.grip.Image for deeper images.
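For example, the two levels can be wrapped as follows. This is our own sketch, not part of GRIP; it assumes you hold either a BufferedImage (shallow) or an object implementing net.grelf.grip.Image (deep), and passes whichever is non-null to the appropriate level of the API.

/** Sketch only: threshold a shallow or a deep image with the same
 * Threshold object, whichever of the two arguments is non-null. */
public static byte [][] maskOf (java.awt.image.BufferedImage bim,
                                net.grelf.grip.Image deepImage,
                                net.grelf.grip.Threshold thresh)
{
    if (null != bim)
    {
        // Shallow (up to 16 bits): static method in ImProcess.
        return net.grelf.grip.ImProcess.threshold (bim, thresh);
    }

    // Deeper images: instance method declared by net.grelf.grip.Image.
    return deepImage.threshold (thresh);
}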
Star segmentation
Thresholding only works for very simple kinds of image. To segment interesting objects from realistic images it is usually necessary to devise a suitable algorithm. In GRIP there is one such special case, for segmenting stars against pretty much any kind of background: light pollution, arbitrary nebulae, optical vignetting and dust spots. The only assumption is that the stars are brighter than the background (so it is not a negative image). The algorithm was devised through our own research, so we will not go into details of how it works. It does not involve Threshold objects at all but it is tunable by two parameters which the user can set in the configuration menu of GRIP. We will denote them difference and radius. In practice we have not found a need to vary them from difference = 38 and radius = 16, but that may be because our images have all come from a 21-megapixel DSLR camera (albeit with a very wide range of focal lengths, from 15mm to 2400mm).
For segmenting stars from an image using our algorithm in your own Java code, there are several static methods available in class net.grelf.grip.StarSegmenter. Eg,
public static void segment (ImFrame imf, int difference, int radius)
which sets the mask in the ImFrame as in the thresholding example, or, at a lower level,
public static byte [][] segment (
    ImageInt image, int difference, int radius)
which is for 32-bit images.
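A minimal usage sketch, assuming imf is a displayed ImFrame and image is a net.grelf.grip.ImageInt already in memory; 38 and 16 are the default tuning values quoted above:

// Higher level: segment the stars and set the mask in the ImFrame.
net.grelf.grip.StarSegmenter.segment (imf, 38, 16);

// Lower level: get the mask directly from a 32-bit image.
byte [][] mask = net.grelf.grip.StarSegmenter.segment (image, 38, 16);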
This grossly enlarged image shows how the StarSegmenter has detected one of the many stars in this photo of a star cluster (M11). The green dots indicate the detected boundary of the blob.
Detecting blobs in the mask
Whatever method of segmentation was used we should have a binary mask (byte [x][y]) covering the image, in which a value of 1 means the pixel at (x, y) is of interest and a value of 0 means the pixel is merely background.
Blob detection involves scanning the mask horizontally from (0, 0) until a 1 value is found, then tracing around contiguous 1 values until arriving back at that first detected pixel, creating an object of type net.grelf.grip.Blob. The blob is then erased from the mask (its pixels set to 0) and the scanning continues until another 1 value is found. The end result is a list of blobs.
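Schematically the scan looks like this. This is our own sketch, not GRIP's actual code: width and height are assumed to be the mask dimensions, and traceBoundary and erase are hypothetical helpers standing in for the internal tracing and erasing steps.

java.util.List <net.grelf.grip.Blob> blobs =
    new java.util.ArrayList <net.grelf.grip.Blob> ();

for (int y = 0; y < height; y++) // scan rows from (0, 0)
{
    for (int x = 0; x < width; x++)
    {
        if (1 == mask [x][y])
        {
            net.grelf.grip.Blob blob = traceBoundary (mask, x, y);
            erase (mask, blob); // set the blob's pixels back to 0
            blobs.add (blob);
        }
    }
}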
The high level method to use, if you have an ImFrame, is an instance method of the ImFrame:
public net.grelf.grip.BlobMeasList detectBlobs (
    boolean includeSinglePixelBlobs)
That measures the blobs as well as detecting them, which we will discuss in the next section.
At a lower level proceed like this to get a simple list of (not yet measured) blobs:
byte [][] mask = // ... from segmentation, as above
boolean includeSinglePixelBlobs = false; // or true to keep 1-pixel blobs
net.grelf.grip.BlobMask blobMask = new net.grelf.grip.BlobMask ();
blobMask.setMask (mask);

java.util.List <net.grelf.grip.Blob> blobs =
    blobMask.detectBlobs (includeSinglePixelBlobs); // NB: alters the mask
In both the high and low level methods the mask is cleared to zero by the detection algorithm, so you would have to segment again before repeating it. But that is unlikely to be needed: you now have a list of all the blobs in the image, and each blob contains a full description of its shape.
Measuring the blobs
After blob detection we have a List <Blob> obtained from an object of type net.grelf.grip.Im (containing any depth of image, as seen on a previous page). A straightforward list of measurements of all the blobs can be obtained by
java.util.List <net.grelf.grip.BlobMeas> measures =
    new java.util.ArrayList <net.grelf.grip.BlobMeas> ();

for (net.grelf.grip.Blob blob : blobs)
{
    net.grelf.grip.BlobMeas meas = blob.measure (im);
    // Parameter im is of type net.grelf.grip.Im
    measures.add (meas);
}
The method blob.measure () scans all pixels in the blob by following the list of horizontal rows that describes the shape. The net.grelf.grip.BlobMeas is simply a record for holding measurements, together with a reference back to the Blob object that has been measured. In a single pass through the list of rows the following measurements are recorded (a schematic of the accumulation follows the list).
- Minimum and maximum x and y values (defining the bounding rectangle).
- Area, simply as a pixel count.
- Integrated brightness in each channel (simply summing grey levels at all pixels in the original image).
- Overall brightness, as the square root of the sum of the squared channel brightnesses (the Euclidean length in the RGB, or n-channel, colour cube).
- Centroid, weighted by that root-sum brightness of each pixel.
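Schematically, the single pass accumulates something like this for each pixel of the blob. Again this is our own sketch of the arithmetic rather than GRIP's actual code; level [b] stands for the pixel's grey level in channel b and the accumulator variables are assumed to start at zero.

// Per-pixel accumulation (schematic); run for every pixel in the blob.
double sumSquares = 0.0;

for (int b = 0; b < nBands; b++)
{
    channelSum [b] += level [b];          // integrated channel brightness
    sumSquares += level [b] * level [b];
}

double pixelBrightness = Math.sqrt (sumSquares); // Euclidean length
area++;                                          // area as a pixel count
weightedX += x * pixelBrightness;                // centroid numerators,
weightedY += y * pixelBrightness;                // brightness-weighted
totalBrightness += pixelBrightness;

// After the pass the centroid is
// (weightedX / totalBrightness, weightedY / totalBrightness).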
The measurements are all in units of pixels and grey levels. It is for other calibration software (eg, net.grelf.grip.Calibration) to scale to the appropriate units.
If the higher level ImFrame.detectBlobs () had been used, as in the previous section, the result would be a net.grelf.grip.BlobMeasList. That is a java.util.LinkedList <net.grelf.grip.BlobMeas> in which the BlobMeas objects are inserted in descending order of overall brightness. It is used in the batch process for combining images by matching patterns of star positions and brightnesses. You now know how those positions and brightnesses are obtained from each image.
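Because a BlobMeasList is a LinkedList kept in descending order of overall brightness, the brightest star is simply its first element. A small sketch, assuming imf is an ImFrame as before:

net.grelf.grip.BlobMeasList measList = imf.detectBlobs (false);

if (! measList.isEmpty ())
{
    // Brightest blob first, because of the ordered insertion.
    net.grelf.grip.BlobMeas brightest = measList.getFirst ();
}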
The brightness measurement at this stage is not an accurate photometric one. That can be obtained by subsequent processing: see Blob.measureAccurately () in GRIP's API. It is important to appreciate that the outline of a star obtained by detectBlobs () is not good enough for photometry, only for establishing the position of the star. It is then necessary to construct rings around the star for doing photometry, and that is indeed done in Blob.measureAccurately (). Here is an example as displayed by GRIP:
The cross hairs pass through the measured centroid of the blob/star. The more solid green dots are the boundary of the blob/star as originally segmented, detected and measured. The smaller dots outline two rings. The inner one is for measuring the integrated brightness of the star. The outer one measures the background level to be subtracted to get a true photometric result.