Volumetric Texture Segmentation

Volumetric Texture Segmentation by Discriminant Feature Selection and Multiresolution Classification.


Constantino Carlos Reyes-Aldasoro
Abhir Bhalerao

Test data, Matlab code, data sets and user manuals


Reyes-Aldasoro, C.C. and A. Bhalerao, Volumetric Texture Segmentation by Discriminant Feature Selection and Multiresolution Classification, IEEE Trans. on Medical Imaging (2007) Vol. 26, No. 1, pp. 1-14.
Reyes-Aldasoro, C.C. and A. Bhalerao, The Bhattacharyya space for feature selection and its application to texture segmentation, Pattern Recognition (2006) Vol. 39, Issue 5, pp. 812-826.




Abstract

Texture analysis in 2D has been well studied, but many 3D applications in Medical Imaging, Stratigraphy or Crystallography would benefit from 3D analysis instead of the traditional, slice-by-slice approach. In this paper a Multiresolution Volumetric Texture Segmentation (M-VTS) algorithm is presented. The method extracts textural measurements from the Fourier domain of the data via subband filtering using an Orientation Pyramid [1]. A novel Bhattacharyya space, based on the Bhattacharyya distance, is proposed for selecting the most discriminant measurements and producing a compact feature space. Each dimension of the feature space is used to form the lowest level of a Quad Tree. At the highest level of the tree, new positional features are added to improve the contiguity of the classification. The classified space is then projected to lower levels of the tree, where a boundary refinement procedure is performed with a 3D equivalent of butterfly filters. The performance of M-VTS is tested in 2D by classifying a set of standard texture images. M-VTS yields lower misclassification rates than reported elsewhere [2], [3], [4]. The algorithm was tested in 3D with artificial isotropic data and three Magnetic Resonance Imaging sets of human knees with encouraging results. The regions segmented from the knees correspond to anatomical structures that could be used as a starting point for other measurements. By way of example, we demonstrate successful cartilage extraction.

Keywords: Volumetric texture, Filtering, Multiresolution, Texture Segmentation, Feature Selection


*I*M*P*O*R*T*A*N*T

Many of these routines were developed some time ago, and many versions of Matlab have been released since then. Some m-files were not available in Matlab at that time (such as mode), so I created my own. Be careful, as some routines may not be compatible with the most recent versions of Matlab.
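A quick way to check for such name clashes before running anything (only a suggestion, not part of the original code) is to ask Matlab which versions of a function are visible on the path:

% list every file called "mode" that Matlab can see; if the copy bundled here
% shadows the built-in one and causes problems, remove its folder from the
% path with rmpath (the folder name below is hypothetical)
which('mode', '-all')
% rmpath('mvts_compatibility')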



Data and programs: everything is in Matlab format.

For a short tutorial on how to use the programs, see the Tutorial section below.
For a short tutorial on how to use Matlab from a technical perspective, try the 5 exercises I used while teaching the CS403 course at Warwick: lab1, lab2, lab3, lab4, lab5. For an introductory view of Matlab, go to the Tracking Neutrophils page and follow the guide to Matlab.

Data

File: orient3dcube
Description: Two oriented patterns of 64 x 32 x 64 elements each, with different frequency and orientation. The file contains two matrices:
data - the 64x64x64 original data, from which features can be extracted, and
dataFeats - two filtered features.
Image: Oriented patterns 3D

File: gauss3dcube
Description: Two bivariate Gaussian distributions with similar means and variances, stored as a 32x32x32x2 matrix (two features over a 32 x 32 x 32 volume); each distribution occupies a region of 32 x 16 x 32. These are considered as features for the classification process.
A mask for this data can be created by:
mask = ones(32,32,32);
mask(:,1:16,:) = 2;
Image: Gaussian distributed texture 3D

File: figure11f
Description: One of the texture mosaic figures; the whole set, with training data if needed, is available at Trygve Randen's webpage:
http://www.ux.uis.no/~tranden/
Image: figure 11 f

File: mask
Description: 16-class mask for the figure above.
Image: mask

File: Phantom, Phantom Mask
Description: A 64 x 64 x 64 phantom of artificial textures and its corresponding mask.
Image: mask
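A minimal sketch for loading and inspecting these files (not part of the toolbox; note that the filtered features are listed as dataFeats above but appear as dataML in the tutorial below, so check the actual variable names with whos):

% assumes the .mat files listed above are on the Matlab path
load orient3dcube            % oriented patterns: data (64x64x64) plus the filtered features
load gauss3dcube             % gauss3dcube: 32x32x32x2, i.e. two Gaussian features
whos                         % compare variable names and sizes against the table above

% view a middle slice of the oriented patterns with standard Matlab commands
figure; imagesc(squeeze(data(:,:,32))); axis image; colormap gray;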


Main Programs

mVts - Multiresolution Volumetric Texture Segmentation main program. It classifies with a hierarchical methodology: it climbs a Quad Tree up to a desired level (levsP), classifies at that level with extra positional features, and then propagates the classification downwards, with the boundaries refined by pyramidal butterfly filters (a condensed usage sketch follows this table).
sopy - Transforms data into the Fourier domain and then filters it with a Second Orientation Pyramid (SOP) tessellation of truncated Gaussians at different frequency-orientation positions.
sopy3d - 3D version of sopy.
kmeans_b - k-means classifier.
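The condensed usage sketch promised above; it simply strings together the calls from the tutorial further down, so the argument order is taken from those examples rather than from the m-files themselves:

load gauss3dcube;                                          % 32x32x32x2 feature volume
mask = ones(32,32,32); mask(:,1:16,:) = 2;                 % 2-class ground-truth mask
[gaussClass, err] = mVts(gauss3dcube, 2, mask, 3);         % 2 classes, 3 levels of the Quad Tree
[gaussKmeans, errKm] = kmeans_b(2, mask, 1, gauss3dcube);  % k-means baseline for comparison
surfdat(gaussClass{1});                                    % display the bottom-level classification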

Other Programs
(subroutines)

arrangeData - Rearranges data from several formats into 2D [RCL x numFeats], where RCL = number of rows x number of columns x number of levels and numFeats = number of features (see the sketch after this list)
bhattaM - Bhattacharyya measurement calculation
expandu - Quad Tree expansion
gaussF - Produces an N-dimensional Gaussian function (N = 1, 2, 3)
ndgauss_r.m - A Gaussian function for the filters
cTessel.m - Provides central tessellation
qTessel.m - Provides quadrant tessellation
cDeTessel.m - Provides central de-tessellation
qDeTessel.m - Provides quadrant de-tessellation
im2colRed - Rearranges image blocks into columns, using only some of the pixels
LBG.m - Linde-Buzo-Gray vector quantisation algorithm
minDist - Determines the minimum distance and assigns to a class
normData - Normalises data by dividing by the standard deviation
normProb - Normal probability calculation
padData - Pads data with the same values on the edges
pixvsn - Compares the value of a pixel against its neighbours and changes it if necessary
reduceu - Quad Tree reduction
surfdat - Plots in several dimensions
surfSOP - Displays 2 levels of the SOP results
vol2col - Rearranges volume blocks into columns in a sliding fashion
mode - Mode (provided for compatibility with older Matlab versions)
setImod - Subroutine for reduceu and expandu
prod3d - 3D products
createButter - Creates the butterfly filters
unSupMask - For unsupervised cases
hist2d - Histogram of 2D signals
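The sketch referred to in the arrangeData entry: it only illustrates the [RCL x numFeats] layout and the standard-deviation normalisation described above, it is not the actual implementation of arrangeData or normData:

feats = randn(32, 32, 32, 2);                 % dummy [rows x cols x levels x numFeats] feature volume
[r, c, l, nF] = size(feats);
feats2d = reshape(feats, r*c*l, nF);          % [RCL x numFeats], the layout arrangeData produces
stdF = std(feats2d, 0, 1);                    % standard deviation of each feature
feats2d = feats2d ./ repmat(stdF, r*c*l, 1);  % divide each feature by its std, as normData does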




A Graphic Example of Subband filtering


[Image: graphic example of subband filtering]


Results



Results 1: 2D segmentation of the Randen texture mosaics.

[Image: original images (from Randen), http://www.ux.uis.no/~tranden/]
[Image: classification with M-VTS]
[Image: boundaries imposed on the original images]
[Image: correctly classified pixels in white]




[Image: segmented bones]

[Image: extracted cartilage]



Conclusions

A multiresolution algorithm based on Fourier domain filtering was presented for the classification of textures both for images and volumes. Textural measurements from the Fourier domain were extracted from 2D and 3D data through subband filtering with an Orientation Pyramid tessellation. Some of the measurements can be selected to form a new feature space and their selection is based on their discrimination powers obtained from a novel Bhattacharyya space. A multiresolution algorithm was shown to improve the classification of these feature spaces: Quad Trees were formed with the features as the lowest level. Once the classification is performed at the highest level of the tree, the class and boundary conditions of the elements are propagated down. A border refining method with pyramidal, volumetric butterfly filters is performed to regain spatial resolution. This refining technique outperforms a standard MRF approach. Further improvements are provided by the addition of coordinate dimensions – the PCE features – to the space at the highest level of the tree.

The algorithm presented was tested with benchmark images in 2D and with 3D data; artificial textures and MRI sets of human knees. In 2D M-VTS provided lower misclassification than other algorithms in the literature and good classification was obtained in 3D.

In the case of the MRI data, although the anatomical structures that were segmented are not perfect, it is important to note that M-VTS exploits the textural characteristics of the data. The resulting segmentations of bone provide a good starting point for other techniques, such as deformable models, which are more sophisticated and require initial conditions that could be provided by the results of M-VTS. If M-VTS is to be used for medical applications, extensive clinical validation is required and, in the case of MRI, the effects of inhomogeneity artifacts should be addressed. Furthermore, there is manual intervention in determining the number of classes, the size of the butterfly filters, the depth of the OP decomposition and the height of the tree used by the coarse-to-fine refinement. Further research might be focused on these areas.




Main References

  • Roland Wilson and Michael Spann, Image Segmentation and Uncertainty, John Wiley and Sons Inc., 1988.
  • P. Schroeter and J. Bigun, Hierarchical Image Segmentation by Multi-dimensional Clustering and Orientation-Adaptive Boundary Refinement, Pattern Recognition, vol. 28, no. 5, pp. 695-709, 1995.
  • A. Bhalerao and C.C. Reyes-Aldasoro, Selecting Discriminant Subbands for Texture Classification, Proceedings of Eurocast'03, February 2003, Canarias, Spain.

Tutorial



Filter the image figure11 with the SOP filtering up to the third level of the pyramid (21 features):

figure11SOP = sopy(figure11, 3);     % output: 512x512x21   (input figure11: 512x512)
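The call above assumes that figure11 is already in the workspace; a minimal way of getting it there (the variable name stored inside figure11f.mat is an assumption, so check it first):

whos('-file', 'figure11f')    % list the variables stored in the file
load figure11f                % the steps here assume a 512x512 image called figure11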
Quad Tree: reduce and expand through the Quad Tree:

figure11_1  = reduceu(figure11);     % output: 256x256   (input: 512x512)
figure11_2  = reduceu(figure11_1);   % output: 128x128   (input: 256x256)
figure11_1b = expandu(figure11_2);   % output: 256x256   (input: 128x128)

Display 2 levels of the pyramid (subbands 2:14) in figure 1:

surfSOP(figure11SOP(:,:,2:14), 1);

[Image: SOP of the image]

M-VTS


load gauss3dcube;
The workspace should now contain 'gauss3dcube', a 32x32x32x2 matrix: 32x32x32 are the dimensions of the data and 2 is the number of features.

mask = ones(32,32,32); mask(:,1:16,:) = 2;
A 2-class mask with the same dimensions as the data.
[gaussClass,error] = mVts(gauss3dcube, 2, mask, 3);
Parameters passed to mVts:
  • gauss3dcube - the data to be classified
  • 2 - number of classes
  • mask - mask used to obtain the misclassification
  • 3 - number of levels of the Quad Tree
Parameters returned:
  • gaussClass - the classified data; it is a cell structure with one cell per level of the Quad Tree, so for the bottom level use gaussClass{1}
  • error (optional) - the fraction of correctly classified voxels, i.e. 1 - misclassification. For the example above, error = 0.9485
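The returned error can also be recomputed directly from the classification and the mask, which makes the definition above concrete (a sketch that assumes the class labels in gaussClass{1} follow the same numbering as the mask; if they are permuted, relabel first):

correct = (gaussClass{1} == mask);          % 1 where the voxel label matches the ground truth
accuracy = sum(correct(:)) / numel(mask)    % fraction correctly classified, 0.9485 in the example above
misclassification = 1 - accuracy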

Comparison with k-means (2 classes, same mask, one iteration, same data):
[gaussKmeans,errKm] = kmeans_b(2, mask, 1, gauss3dcube);

To visualise the results:
subplot(221);
surfdat(gaussClass{1}==1);
subplot(222);
surfdat(gaussClass{1}==2);
subplot(223);
surfdat(gaussKmeans==1);
subplot(224);
surfdat(gaussKmeans==2);
[Image: results of the M-VTS and k-means classifications]


load orient3dcube;
The workspace now contains:
data - a 64x64x64 matrix with the oriented patterns
dataML - two filtered features from the SOP

Generation of a mask:
mask = ones(64,64,64); mask(:,1:32,:) = 2;

[orienClass,error] = mVts(dataML, 2, mask, 3);
Same classification parameters as above; error = 0.8825

[Image: classification of the oriented patterns]


To include more features in the classification process:

feat9 = sopy3d(data, 1, 9);          % obtain feature 9 [64x64x64], first level of the SOP filtering
dataML(:,:,:,3) = feat9;             % append the new feature to dataML
[orienClass,error] = mVts(dataML, 2, mask, 3);   % reclassify with three features, two classes, up to level 3 of the Tree

error = 0.8910
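The same appending step can be repeated for other subbands to see which ones help, a crude manual version of the feature selection idea; the subband indices below are arbitrary examples, not recommended values:

candidates = [7 8 9];                             % hypothetical subband indices from the first SOP level
for k = 1:numel(candidates)
    newFeat = sopy3d(data, 1, candidates(k));     % extract one filtered feature
    dataTest = dataML;
    dataTest(:,:,:,end+1) = newFeat;              % append it to the existing features
    [clsTest, errTest] = mVts(dataTest, 2, mask, 3);
    fprintf('subband %d -> correct fraction %.4f\n', candidates(k), errTest);
end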





Disclaimer

The data and information are provided for educational and informational purposes only, and are not intended for professional use. The authors shall not be liable for any errors and take no responsibility for the accuracy, completeness, or usefulness of any information or method in this content, or for any actions taken in reliance thereon. If you find any errors or problems with the programs we will try to fix them, but no promise is made.
All result images are copyright (c) C-C Reyes-Aldasoro, 2004 and should not be reproduced without the permission of the author.