Asymmetric similarities in brain MR images

In this post I describe my latest paper, “Asymmetric similarity-weighted ensembles for image segmentation”, which I will present at the International Symposium on Biomedical Imaging in April.

We are teaching computers how to recognize things in brain MR (magnetic resonance) scans. In this paper, we want the computer to recognize different types of tissue, such as gray matter or white matter, in the brain. This is necessary to study how age and disease affect the brain.

To automatically quantify the amount of different tissues in a scan, we need a segmentation algorithm, which assigns each pixel in the scan to a category, such as “gray matter” or “white matter”. A segmentation is an image of the same size as the original, but reduced to a few colors, with each color indicating a different category. Here is an example:

Original image (left) and its segmentation (right) into four categories: background (black), gray matter, white matter and cerebrospinal fluid.
Image source: van Opbroek et al., Transfer Learning Improves Supervised Image Segmentation Across Imaging Protocols, IEEE Transactions on Medical Imaging, 2015

If an algorithm is given a new image, how does it know which pixel belongs to which category? It learns this from examples: other images that have been segmented by experts. An important condition is that at least some of these examples are similar to the new image. This is not always the case if, for example, the images have been acquired with different scanners. For accurate segmentation, we need to find the most similar examples and emphasize them to the algorithm.
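To give a feel for what “emphasizing similar examples” could mean, here is a minimal sketch of a similarity-weighted vote. This is my own simplified illustration, not the method from the paper (the paper uses similarity-weighted ensembles of classifiers); all function and variable names here are hypothetical.

```python
import numpy as np

def similarity_weighted_vote(new_image, example_images, example_predictors, similarity):
    """Weight each example's prediction by how similar its source image
    is to the new image, then combine the predictions with those weights."""
    # Similarity is measured *from* the new image *to* each example.
    weights = np.array([similarity(new_image, ex) for ex in example_images], dtype=float)
    weights /= weights.sum()  # assumes at least one example has nonzero similarity
    predictions = np.stack([predict(new_image) for predict in example_predictors])
    # Weighted average of the per-pixel predictions (e.g. tissue probabilities).
    return (weights[:, None] * predictions).sum(axis=0)
```

Examples from a similar scanner get large weights and dominate the combined prediction; dissimilar examples are effectively ignored.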

My paper focuses on how to determine the similarity of images. This is a bit similar to comparing groups of numbers: if the new image is described by intensities {0, 1, 1}, how similar is it to example {0, 1, 2}? This depends on how you define similarity. Consider a similarity where each number from the first group has to be matched to any number from the second group, and the similarity is simply the number of exact matches. The similarity of {0, 1, 1} to {0, 1, 2} is therefore 3: 0 matches to 0, and both 1’s match to the 1 in the second group. Notice that this similarity is asymmetric, because with this definition, the similarity of {0, 1, 2} to {0, 1, 1} is equal to 2. We could also choose to symmetrize the similarity by averaging the two directions.
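The toy similarity from this example is easy to write down in code. Here is a short Python sketch (the function names are my own):

```python
def match_similarity(a, b):
    """Similarity of group `a` to group `b`: the number of elements of `a`
    that find an exact match anywhere in `b`. Duplicates in `a` may all
    match the same element of `b`, which is what makes this asymmetric."""
    b_values = set(b)
    return sum(1 for x in a if x in b_values)

def symmetrized_similarity(a, b):
    """Symmetrize by averaging the two directions."""
    return (match_similarity(a, b) + match_similarity(b, a)) / 2

print(match_similarity([0, 1, 1], [0, 1, 2]))        # 3
print(match_similarity([0, 1, 2], [0, 1, 1]))        # 2
print(symmetrized_similarity([0, 1, 1], [0, 1, 2]))  # 2.5
```

Swapping the arguments changes the answer from 3 to 2, which is exactly the asymmetry the paper exploits.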

So, when comparing a new image to the example images, the direction in which similarity is measured may change which examples are most similar, and affect the accuracy of the segmentation. In experiments with 56 images from 4 different scanners, I showed that comparing the new image to the examples is better than doing it the other way round, and better than averaging the directions. This is an important finding because asymmetric similarities are often averaged by default, which may not be the best thing to do. This finding is not limited to brain tissue segmentation, and I am currently doing experiments on other data where asymmetric similarities play a role.

That’s it! This was my first attempt at writing about a publication in a blog post, so I would be happy to hear your comments. Is it easy to follow? Is there enough detail? Do I need to show the results, or is it enough to describe the conclusions as I did here?
