The convolutions menu
This menu provides filters which work by looking at a neighbourhood around every pixel in the image. The neighbourhood of a pixel is a square array of n x n pixels. The size of the neighbourhood, n, is often specified in terms of a "half width" h, such that n = 2 x h + 1.
Three groups of filters are available:

Linear convolutions, in which a square (n x n) matrix of values called a "kernel" is applied to each neighbourhood. If the pixel being worked on is imagined to lie at the centre of the kernel, each matrix value in the kernel is multiplied by the pixel value in the image which has the same relationship to the centre. The products are then summed. The sum is divided by the sum of all the kernel values (or by 1 if they sum to zero), to scale it back into the range of the image. The scaled result is then written to the same pixel position in a temporary (work) image, so that it does not affect the neighbourhood sampling for the next pixel. When the whole image has been scanned, the work image is copied back into the original image and the work image is discarded. The process is linear because the calculation involves only multiplications and additions. Linear convolutions are convenient because different kernels can easily be set up as text (.csv, comma-separated value) files, and GRIP can load a kernel from a CSV file. On the other hand, nonlinear convolutions can be more effective for pulling out particular structures in an image.
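The scan-multiply-sum-scale process described above can be sketched as follows. This is an illustrative sketch, not GRIP's actual code: it works on a single band held as an `int[][]` (GRIP processes each colour band independently) and the class and method names are invented for the example. Note how results go into a work image so that later neighbourhood samples are unaffected, and how pixels within the half width of the edge are left black.

```java
// Sketch of a linear convolution: kernel values are multiplied with the
// neighbourhood, summed, scaled by the kernel sum (or 1 if that is zero),
// and written into a work image. Edge pixels stay black (zero).
final class LinearConvolution {
    static int[][] apply(int[][] image, double[][] kernel) {
        int h = kernel.length / 2;               // half width: n = 2 x h + 1
        double kernelSum = 0;
        for (double[] row : kernel)
            for (double v : row) kernelSum += v;
        double scale = (kernelSum == 0) ? 1 : kernelSum;
        int[][] work = new int[image.length][image[0].length]; // starts black
        for (int y = h; y < image.length - h; y++) {
            for (int x = h; x < image[0].length - h; x++) {
                double sum = 0;
                for (int ky = -h; ky <= h; ky++)
                    for (int kx = -h; kx <= h; kx++)
                        sum += kernel[ky + h][kx + h] * image[y + ky][x + kx];
                work[y][x] = (int) Math.round(sum / scale);
            }
        }
        return work;  // in GRIP this would be copied back over the original
    }
}
```

Applying the 3 x 3 Gaussian kernel shown later on this page to a uniform patch leaves the interior value unchanged, because the division by the kernel sum restores the original range.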
The Gaussian kernel (explained further below) can be used as a linear convolution for Gaussian blurring.
Nonlinear convolutions, in which an n x n neighbourhood of every pixel in the image is also processed, but now the calculation involves more than just products and sums. This is usually a slower process than a linear convolution of the same neighbourhood size (n). A kernel is not relevant for a nonlinear convolution. Instead a hard-coded algorithm is applied at every pixel and in each band. For examples see the median, mean and variance filters below.
Deconvolution provides a way of attempting to recover an original image after it has been blurred in some way, due either to imperfect optics, to movement, or to poor focussing of the system. It is important to know the point transfer function of the system which produced the image. In other words, we need to know how the system imaged a single point. No optical system is perfect, so the image of a point source will generally be smeared in some way at the detector. Often a good guess is that the point has been spread symmetrically in a bell shape: mathematically, the profile is a Gaussian curve. In the case of astrophotography the stars are truly point sources (certainly as far as any amateur telescope or camera is concerned), so we can see directly in our photos the result of the point transfer function of our camera or telescope on each star. The image of a bright star can therefore be used, rather than the Gaussian shape, as a better guess at the function we require for the processing. Deconvolution uses the function shape as a kernel, applied to every pixel in the image, to estimate the error in the imaging of a point at that pixel. The estimated error is then subtracted from the original image to get a better estimate of the true scene. In practice a fraction of the error is subtracted (in GRIP this fraction is called the deconvolution strength) and the process is repeated for several passes. This tends to make the process converge to a fixed goal rather than diverging and producing spurious artefacts. If the kernel, fraction and number of passes are chosen carefully (usually by trial and error) a much improved image can result. It has to be admitted though that the "improved" image is just one of many which could have produced the result captured by the camera, so the technique must be used with care.
Having said that, this is the basic process by which vehicle registration numbers are made legible in photographs which have been smeared by the vehicles' motion.
In all of the menu options we talk about the half width (h) of the kernel or pixel neighbourhood. The kernel needs to have an odd width and height so that the pixel being processed can be at its centre, so the width and height of the n x n neighbourhood are given by n = 2 x h + 1. Note that pixels within the half width of the edge of the image cannot be properly processed, so they are set to black. The usable image therefore gradually shrinks as you apply convolutions. Note also that there is only one kernel set in GRIP at any time, so setting it on one image frame makes it available for all open images. The same applies to the deconvolution settings (in Java terms, they are all static).
When processing coloured images each of the channels (colour bands) is processed completely independently of the others.
Nonlinear convolutions
The median filter first asks for a "half width". If an integer value h is entered the filter will look at the n x n neighbourhood of every pixel, where n = 2 x h + 1. In every such neighbourhood and for each channel the values are sorted and the middle value of the sorted list replaces the original pixel/channel value. This is a way of removing extreme values, typically noise, from the image. However, if there are very sharply focussed stars in the image they may be eliminated too. In that case the median filter provides a way of getting a flat field for correcting the background level of the original image.
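The sort-and-take-the-middle step can be sketched like this, again for a single band as an `int[][]` with invented names (not GRIP's own classes), and with edge pixels left black as described above:

```java
// Sketch of a median filter: sort the n x n neighbourhood values and
// replace the centre pixel with the middle value of the sorted list.
final class MedianFilter {
    static int[][] apply(int[][] image, int halfWidth) {
        int n = 2 * halfWidth + 1;
        int[][] work = new int[image.length][image[0].length];
        for (int y = halfWidth; y < image.length - halfWidth; y++) {
            for (int x = halfWidth; x < image[0].length - halfWidth; x++) {
                int[] values = new int[n * n];
                int i = 0;
                for (int ky = -halfWidth; ky <= halfWidth; ky++)
                    for (int kx = -halfWidth; kx <= halfWidth; kx++)
                        values[i++] = image[y + ky][x + kx];
                java.util.Arrays.sort(values);
                work[y][x] = values[values.length / 2]; // middle of sorted list
            }
        }
        return work;
    }
}
```

A single hot (noisy) pixel surrounded by background is completely removed, because it ends up at the extreme of the sorted list rather than in the middle.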
The mean filter first asks for a "half width". If an integer value h is entered the filter will look at the n x n neighbourhood of every pixel, where n = 2 x h + 1. In every such neighbourhood and for each channel the average value replaces the original pixel/channel value. This is a simpler way of removing extreme values. It is less effective than the median filter but it is significantly faster because it does not involve sorting.
The variance filter first asks for a "half width". If an integer value h is entered the filter will look at the n x n neighbourhood of every pixel, where n = 2 x h + 1. In every such neighbourhood and for each channel the variance of the values replaces the original pixel/channel value. This is a way of finding areas of change in the image. The bigger or more sudden the changes, the brighter the pixels in the resulting image. This can be useful for finding the locations of objects despite them being on a gradually varying background. The resulting image may be easier to threshold than a "background corrected" one in some circumstances. It is necessary to experiment to find out whether this filter is useful on a given type of image. It is not fast because it has to calculate the variance of n x n pixels in each band around every pixel.
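A sketch of the variance calculation, using the usual one-pass sum and sum-of-squares formulation (the names are illustrative, not GRIP's, and only a single band is shown). A flat region gives zero; a region containing a step gives a bright value:

```java
// Sketch of a variance filter: each pixel is replaced by the variance of
// its n x n neighbourhood, computed as E[v^2] - (E[v])^2.
final class VarianceFilter {
    static int[][] apply(int[][] image, int halfWidth) {
        int n = 2 * halfWidth + 1;
        int[][] work = new int[image.length][image[0].length];
        for (int y = halfWidth; y < image.length - halfWidth; y++) {
            for (int x = halfWidth; x < image[0].length - halfWidth; x++) {
                double sum = 0, sumSq = 0;
                for (int ky = -halfWidth; ky <= halfWidth; ky++)
                    for (int kx = -halfWidth; kx <= halfWidth; kx++) {
                        int v = image[y + ky][x + kx];
                        sum += v;
                        sumSq += (double) v * v;
                    }
                double mean = sum / (n * n);
                double variance = sumSq / (n * n) - mean * mean;
                work[y][x] = (int) Math.round(variance);
            }
        }
        return work;
    }
}
```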
This filter finds the minimum and maximum levels in each neighbourhood and sets the central pixel to whichever of those is nearest to its current value. This has the effect of making edges steeper. It can therefore help to clarify ill-defined images such as those of planets.
The rank filter replaces every pixel by its ranked position (ie, its position in the sorted list) among the brightnesses of its neighbourhood. This will exaggerate subtle changes of brightness. It is best used by then merging the result proportionally with the original image: see the proportional addition and multiplication options on the image combination menu. The easiest way to work is to first clone the image, apply the rank filter to the clone and then merge the result with the original image.
Linear convolutions

Open a CSV file which contains one n x n kernel. GRIP will use the same kernel for all channels. The CSV file should be laid out as n rows of n values. Example:
1,2,1,
2,4,2,
1,2,1
Another option samples the kernel from the image, symmetrically around a pixel clicked with the mouse. The kernel will be different for each channel. This is suitable for images of stars, as discussed under the deconvolution section above.
Designed to pick out small bright objects (stars): click the mouse on one, then GRIP finds its centre, width and height automatically and samples it as the kernel. This is probably the best option for setting the kernel to deconvolve star photos.
Computes a Gaussian-profile image to fit the n x n size.
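A Gaussian-profile kernel of half width h can be computed as in the sketch below. The names are illustrative, and the choice of the width parameter sigma = h/2 is an assumption for the example; the page does not state which width GRIP uses. The centre value is 1 and values fall off symmetrically in the bell shape described in the deconvolution section:

```java
// Sketch of building an n x n Gaussian kernel, n = 2 x h + 1.
// sigma = h / 2 is an assumed width parameter, not GRIP's documented value.
final class GaussianKernel {
    static double[][] make(int halfWidth) {
        int n = 2 * halfWidth + 1;
        double sigma = Math.max(halfWidth / 2.0, 0.5);
        double[][] kernel = new double[n][n];
        for (int y = -halfWidth; y <= halfWidth; y++)
            for (int x = -halfWidth; x <= halfWidth; x++)
                kernel[y + halfWidth][x + halfWidth] =
                        Math.exp(-(x * x + y * y) / (2 * sigma * sigma));
        return kernel;
    }
}
```

Such a kernel can be applied as a linear convolution for Gaussian blurring, as noted at the top of this page.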
Displays a magnified view of the kernel as an image in its own frame. This is not suitable for kernels loaded from CSV files. The frame has its own menu, so you can process the kernel and even save it as an image. However, any processing done here does not affect the current kernel.
Applies the current kernel linearly to all pixels in the image.
Deconvolution
Asks for the fraction (from 0 to 1) of the estimated error at each pixel which is to be subtracted from the original image to get a better version. This is sometimes known in the literature as the relaxation parameter. We suggest starting with a value of about 0.2 to 0.3.
Determines how many passes of deconvolution will be done by the next option. This is not the same as selecting the next option the same number of times. We suggest starting with very few passes, to see how long they take, but around 10 passes may be effective for astrophotographs.
Applies the current kernel to deconvolve the image, using the current deconvolution strength and number of passes. Van Cittert's method is implemented.
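The Van Cittert iteration described in the introduction above can be sketched in one dimension (a single row of one band) to show the strength and passes parameters at work. This is an illustrative sketch with invented names, not GRIP's code: the kernel is assumed to be normalised to sum 1, edge pixels are simply left unchanged rather than blackened, and no clamping to the image range is done.

```java
// Sketch of Van Cittert deconvolution in 1-D: starting from the blurred
// data g, each pass re-blurs the current estimate with the kernel, treats
// the difference from g as the estimated error, and subtracts
// strength * error from the estimate.
final class VanCittert {
    static double[] blur(double[] f, double[] kernel) {
        int h = kernel.length / 2;
        double[] out = f.clone();          // edges kept as-is in this sketch
        for (int i = h; i < f.length - h; i++) {
            double s = 0;
            for (int k = -h; k <= h; k++) s += kernel[k + h] * f[i + k];
            out[i] = s;
        }
        return out;
    }

    static double[] deconvolve(double[] g, double[] kernel,
                               double strength, int passes) {
        double[] estimate = g.clone();
        for (int pass = 0; pass < passes; pass++) {
            double[] reblurred = blur(estimate, kernel);
            for (int i = 0; i < g.length; i++) {
                double error = reblurred[i] - g[i]; // estimated imaging error
                estimate[i] -= strength * error;    // subtract a fraction of it
            }
        }
        return estimate;
    }
}
```

Deconvolving a blurred point source with a moderate strength over several passes sharpens it back towards a spike, which is exactly the star-sharpening use described above.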