Astroalign documentation¶
ASTROALIGN is a simple package that will try to register (align) two stellar astronomical images, especially when there is no WCS information available.
It does so by finding similar 3-point asterisms (triangles) in both images and estimating the affine transformation between them.
General registration routines try to match feature points, using corner detection routines to establish point correspondence. These generally fail for stellar astronomical images, since stars have very little stable structure and are, in general, indistinguishable from each other. Asterism matching is more robust, and closer to the human way of matching stellar images.
Astroalign can match images of very different field of view, point-spread functions, seeing and atmospheric conditions.
Warning
It may not work, or may require special care, on images of extended objects with few point-like sources or in very crowded fields.
Guide:¶
Installation¶
The easiest way to install is using pip:
pip install astroalign
This will install the latest stable version from PyPI.
If you want to use the latest development version from GitHub, unpack or clone the repository on your local machine, change to the directory where setup.py is located, and install using setuptools:
python setup.py install
or pip:
pip install -e .
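Either way, a quick way to check that the installation worked is to import the package (the __version__ attribute is assumed to be available):

>>> import astroalign as aa
>>> print(aa.__version__)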
Tutorial¶
A simple usage example¶
Suppose we have two images of about the same portion of the sky, and we would like to transform one of them to fit on top of the other one. Suppose we do not have WCS information, but we are confident that we could do it by eye, by matching some obvious asterisms on the two images.
In this particular use case, astroalign can be of great help to automate the process.
After we load our images into numpy arrays, we simply choose one to be the source image and the other to be the target.
The usage for this simplest, most common case is as follows:
>>> import astroalign as aa
>>> registered_image = aa.register(source, target)
registered_image is now a transformed (numpy array) image of source that will match pixel to pixel with target.

If source is a masked array, registered_image will have a mask transformed like source, with pixels outside the boundary masked with True (read more in Working with masks).
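As a concrete sketch, the whole process could look like the following; it assumes the two frames are stored in FITS files (the file names are placeholders) and uses astropy.io.fits, which is not a dependency of astroalign, just to load the data:

>>> import astroalign as aa
>>> from astropy.io import fits
>>> # Load the two frames as native float arrays (file names are placeholders).
>>> source = fits.getdata("frame1.fits").astype(float)
>>> target = fits.getdata("frame2.fits").astype(float)
>>> # Transform source onto the pixel grid of target.
>>> registered_image = aa.register(source, target)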
Finding the transformation¶
In some cases it may be necessary to inspect the transformation parameters before applying the transformation, or we may be interested only in a star-to-star correspondence between the images. For those cases, we can use find_transform.

find_transform will return a scikit-image SimilarityTransform object that encapsulates the transformation matrix and the transformation parameters. It will also return a tuple with two lists: the star positions of source and the corresponding ordered star positions on the target image:
>>> transf, (source_list, target_list) = aa.find_transform(source, target)
source and target here can each be either a numpy array of the image pixels, or any iterable of (x, y) pairs corresponding to star positions.
The transformation parameters can be found in transf.rotation, transf.translation, transf.scale, and the transformation matrix in transf.params.
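Continuing from the call above, a minimal sketch of inspecting these values (scikit-image reports the rotation in radians):

>>> import numpy as np
>>> print("Rotation: {:.2f} degrees".format(np.rad2deg(transf.rotation)))
>>> print("Translation: (x, y) = ({:.2f}, {:.2f})".format(*transf.translation))
>>> print("Scale factor: {:.4f}".format(transf.scale))
>>> print("Matched star pairs: {}".format(len(source_list)))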
If the transformation is satisfactory, we can apply it to the image with apply_transform.
Continuing our example:
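>>> # MIN_ROT is a user-chosen rotation threshold (scikit-image reports rotation in radians).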
>>> if transf.rotation > MIN_ROT:
... registered_image = aa.apply_transform(transf, source, target)
As a convenience, estimate_transform and matrix_transform from scikit-image are imported in astroalign as well.
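For instance, matrix_transform can be used to project the matched source star positions onto the target pixel frame. This is only a sketch, assuming the (x, y) ordering returned by find_transform:

>>> # Apply the estimated transformation matrix to the matched source star positions.
>>> projected = aa.matrix_transform(source_list, transf.params)
>>> for (xs, ys), (xt, yt) in zip(projected, target_list):
...     print("({:.1f}, {:.1f}) is matched to ({:.1f}, {:.1f})".format(xs, ys, xt, yt))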
See Module Methods for more information.
Module Methods¶
Working with masks¶
Sometimes, CCD defects can confuse the alignment algorithm because of misplaced star centroids, or fake point-like sources on the image. In those cases, you may want to mask those artifacts so they are not counted as control points.
The way to do so is to wrap your image in a numpy masked array:
>>> myarray = np.ma.array(myarray, mask=badpixelmask)
and mask bad pixels with True, following the numpy masked array convention.
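As a sketch, the bad pixel mask could come from a detector defect map or from flagging saturated pixels; the saturation threshold and the bad column below are purely illustrative values:

>>> import numpy as np
>>> # Flag pixels above an illustrative saturation level.
>>> badpixelmask = myarray > 50000
>>> # Also mask a hypothetical bad column.
>>> badpixelmask[:, 100] = True
>>> myarray = np.ma.array(myarray, mask=badpixelmask)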
You can now call astroalign methods in the usual way:
>>> import astroalign as aa
>>> registered_image = aa.register(myarray, target)
The type of the returned registered_image will be the same as the type of the input image, regardless of the type of target.
That is, if the source image is a masked array, the output will also be a masked array, with the mask transformed in the same way as the source image and with pixels outside the boundary masked with True.
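As a sketch of one way to use the returned mask, pixels falling outside the transformed source footprint can be counted or filled with a sentinel value before further processing (np.ma.filled and the zero fill value are just one possible choice):

>>> import numpy as np
>>> # Pixels outside the transformed source boundary are masked with True.
>>> print("Pixels outside the footprint:", registered_image.mask.sum())
>>> # Fill the masked pixels with zeros for downstream processing.
>>> filled = np.ma.filled(registered_image, fill_value=0.0)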