Segmentation of Dual X-Ray Imaging


Abstract

The detection of contaminants in an industrial product is an important stage in a modern production factory. The high demand for quality products has led producers to adopt automated systems, one of which is the automated detection of defects and contaminants: an X-ray image of the product is taken and analysed by the system. The most important step in inspecting the product is the segmentation of the image into meaningful objects (defects and normal product).

Introduction

Image segmentation is the first and most important step in a contaminant/defect detection and inspection system used in industry. Whether such a system is used to detect metallic or non-metallic contaminants (e.g. glass, bones and stones) or to detect flaws and cracks, it usually involves some means of acquiring one or more images of the inspected product. The most important type of image used in commercial inspection systems is the X-ray image [1], [2]. The detection of defects has to be reliable and cost-efficient while performed at high speed. Most segmentation methods currently rely on simple thresholding algorithms [3], [4], [5].

Literature Review

Classical approaches to X-ray image segmentation

A segmentation algorithm for an X-ray image needs to separate foreign objects (such as defects or contaminants) from the background (the normal product). The aim is to separate entire objects from the background, although separating only parts of objects is also considered a successful result. A simple thresholding of the X-ray image yields a result of little use for further image analysis. To illustrate this, an Otsu-based algorithm was implemented [4]; the results are depicted in Figure 1. The product (a chicken breast meat product) contains three easily visible defects (bones) (Figure 1, left).

The idea underlying edge detection is the computation of a local derivative operator. The first derivative of an edge modelled in this manner is zero in all regions of constant grey level and constant during a grey-level transition. The first derivative of an image is called the gradient and is defined as follows:
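\[
\nabla f = \left[ \frac{\partial f}{\partial x}, \; \frac{\partial f}{\partial y} \right]^{T} = [G_x, \, G_y]^{T},
\qquad
|\nabla f| = \sqrt{G_x^{2} + G_y^{2}},
\]
where G_x and G_y denote the partial derivatives of f along the x and y directions (the standard definition of the image gradient and its magnitude).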

The computation of the gradient of an image consists of determining the partial derivatives at every pixel location (x, y). A 3 by 3 mask (or convolution kernel) was used; applying gradient edge detection thus amounts to the following convolution operation:
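\[
g(x, y) \;=\; \sum_{s=-1}^{1} \sum_{t=-1}^{1} w(s, t)\, f(x + s,\, y + t),
\]
(in the usual mask-application form)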

where g is the filtered output and the gradient kernel (or mask) w is presented in Table 1.

Fig.1 Results of Otsu's thresholding method
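
For reference, a minimal sketch of such an Otsu-based thresholding of a grey-level X-ray image, using OpenCV, is given below; the input file name is a placeholder, and this is an illustration rather than the exact implementation used in the experiments.

import cv2

# Load the X-ray image as a single-channel grey-level image.
f = cv2.imread("xray.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method selects the threshold that minimises the intra-class
# variance of the grey-level histogram; the chosen value is returned first.
t, binary = cv2.threshold(f, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

print("Otsu threshold:", t)
cv2.imwrite("otsu_segmentation.png", binary)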

Fig.2 Edge detection results using: a) Prewitt; b) Sobel; c) Gradient; d) Laplace
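
As an illustration of the gradient variant, a minimal sketch using OpenCV and NumPy is given below; the central-difference kernels, the input file name and the final thresholding rule are assumptions made for the example, not necessarily those of Table 1 or of the original experiments.

import numpy as np
import cv2

f = cv2.imread("xray.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Simple central-difference kernels along x and y (stand-ins for Table 1).
kx = np.array([[ 0, 0, 0],
               [-1, 0, 1],
               [ 0, 0, 0]], dtype=np.float64)
ky = kx.T

# cv2.filter2D computes a correlation; for these antisymmetric kernels the
# result differs from a true convolution only in sign, which does not
# affect the gradient magnitude.
gx = cv2.filter2D(f, -1, kx)
gy = cv2.filter2D(f, -1, ky)

magnitude = np.sqrt(gx ** 2 + gy ** 2)

# Keep only strong responses (an arbitrary, illustrative threshold).
edges = magnitude > magnitude.mean() + 2.0 * magnitude.std()
cv2.imwrite("edges_gradient.png", edges.astype(np.uint8) * 255)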

The Laplacian edge enhancement technique produces sharper edge definition than most other techniques. Its main property is that it can highlight edges in all directions. The Laplacian of an image f(x,y) can be determined as follows:
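\[
\nabla^{2} f \;=\; \frac{\partial^{2} f}{\partial x^{2}} + \frac{\partial^{2} f}{\partial y^{2}}
\]
(its standard continuous definition).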

An approximation of the Laplacian can be derived as:
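\[
\nabla^{2} f(x, y) \;\approx\; f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) \;-\; 4 f(x, y),
\]
i.e. the common 4-neighbour approximation obtained from second-order central differences.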

The edge detection process was implemented using this approximation, with the Laplacian kernel defined in Table 1.
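
Sketched below is a minimal version of such a Laplacian-based edge detection pass, using the common 4-neighbour kernel as a stand-in for the kernel of Table 1 (which is not reproduced here); the input file name and the thresholding rule are placeholders chosen for the example.

import numpy as np
import cv2

f = cv2.imread("xray.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# 4-neighbour Laplacian kernel (a common choice, standing in for Table 1).
laplacian_kernel = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=np.float64)

# The kernel is symmetric, so correlation (cv2.filter2D) equals convolution.
response = cv2.filter2D(f, -1, laplacian_kernel)

# Edges appear as strong positive or negative responses; threshold the
# magnitude of the response (an arbitrary, illustrative threshold).
edges = np.abs(response) > np.abs(response).mean() + 2.0 * np.abs(response).std()
cv2.imwrite("edges_laplacian.png", edges.astype(np.uint8) * 255)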

Sobel and Prewitt filters [1] were also tested on the X-ray images; their implementation used the kernels presented in Table 1. The difference with the ...