The GRiTS project (https://rickerg.com/projects/project/) considered how genes interact with each other in space and time. Evaluation begins by determining the border structure of the image. This was done manually, by outlining the border with a drawing tool. The resulting border data points were captured and fed into a tool such as 3dMax to create a multidimensional shell. This gave the image tool a reference for aligning gene data points in the volume.
I have been interested in processes that would make this more automated. The GRiTS viewer tools were developed in C++ and Qt. ImageJ was also used to pre-process the images to remove some of the extraneous information within them.
OpenCV is a C++ library designed for various image processing and machine vision algorithms. Here is a sample image that I am using.
The first thing to note is that some of the images are in color and some are gray scale. The colors are used in some cases to indicate a specific piece of information. In this case I am looking for contours, and for that a gray scale image works better.
The function cvtColor() will convert to gray scale. In the version of OpenCV I am using, the call takes the enum CV_BGR2GRAY as the option. In later versions I believe this has changed to COLOR_BGR2GRAY. The function takes a src and dest image along with the appropriate code.
There are a number of blur options available. I have started out with the basic normalized box filter blur, with a 3×3 kernel to start with.
This process removes unwanted values. But “unwanted” depends on the image and what you are trying to eliminate. In this case I am looking for pixels that make up the boundary. I don’t want to be too harsh in removing values, since that creates large gaps that then have to be filled. For this test I have set the threshold value to 50 and the max value to 250. Of course these values will change depending on the image. I suspect that this will require applying some statistics and machine learning to create “best guess” starting values. After all, the goal is to make this as automatic as possible.
threshold(src, dest, threshold_value, max_BINARY_value, THRESH_TOZERO);
After blur and threshold I applied an edge detection process. For this I used the Canny algorithm. The lowThresh and highThresh values define the threshold levels for the hysteresis process. The edgeThresh is the aperture size for the Sobel operator.
int edgeThresh = 3;
double lowThresh = 20;
double highThresh = 40;
Canny(src, dest, lowThresh, highThresh, edgeThresh);
The edge detection process creates a lot of segments. Contouring will try to connect some of the segments into longer pieces.
CV_RETR_EXTERNAL: find only outer contours
CV_CHAIN_APPROX_SIMPLE: compresses segments
Point(0, 0): offset for contour shift.
Contours are stored in the contours variable, a vector<vector<Point>>; the hierarchy output is a vector<Vec4i>.
findContours(edgeDest, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
I took the same processes used with OpenCV and implemented them with ImageJ. Using ImageJ is different than using OpenCV. It is really designed so that the developer creates plugins that the ImageJ tool can use. I expected to use ImageJ as a library, part of a bigger app.
The code below was used to create the above image; the raw image is on the left.
ImagePlus rawimp = IJ.openImage("Dcc29.jpg");
ImageProcessor rawip = rawimp.getProcessor();
rawip = rawip.resize(rawip.getWidth()/2, rawip.getHeight()/2);
ImagePlus imp = IJ.openImage("Dcc29.jpg");
ImageConverter improc = new ImageConverter(imp);
improc.convertToGray8();
ImageProcessor ip = imp.getProcessor();
ip = ip.resize(ip.getWidth()/2, ip.getHeight()/2);
Scale over time
Another issue is scale. The complete set of images represents development over a period of time. At the beginning the images are small. By the end of the series they are considerably larger. Mapping positions on the cell images as they grow is still a challenge. Landmarks change over time, coming and going, so they can’t be counted on. The images are obtained at intervals which are relatively close to each other. This means that points on one image will be close to points on other images in similar positions at similar time periods.
Consider an image in the middle (image #20) at day 15. Points on this image should be close to points on a middle image at day 18.
By interpolating between images it may be possible to track point movement over a period of time.