H. Image Classification

By counting the WBCs, the number of white blood cells in the blood-cell image is found. It is known that when the WBC count increases rapidly and to an excessive amount, it indicates leukemia. The counted value is compared with its normal value: if the number is much higher than normal, the sample is considered leukemic. Based on this, the image can be classified into the initial, normal, and extreme stages.
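As an illustration of this classification rule, the sketch below maps a WBC count to one of the three stages. The threshold values are hypothetical placeholders for illustration only, not reference ranges taken from this paper:

```python
# Hypothetical sketch: classify a blood sample by its WBC count.
# NORMAL_MAX and EXTREME_MIN are illustrative assumptions, not
# clinical reference values from the paper.

NORMAL_MAX = 11_000    # assumed upper bound of a normal WBC count per microlitre
EXTREME_MIN = 50_000   # assumed count beyond which the stage is taken as extreme

def classify_wbc_count(count):
    """Return 'normal', 'initial', or 'extreme' for a WBC count."""
    if count <= NORMAL_MAX:
        return "normal"
    if count < EXTREME_MIN:
        return "initial"
    return "extreme"

print(classify_wbc_count(8_000))   # normal
print(classify_wbc_count(20_000))  # initial
print(classify_wbc_count(90_000))  # extreme
```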

III. SEGMENTATION ALGORITHMS

In this work we used two popular clustering algorithms: k-means and k-medoids. Both are partition-based iterative methods.

A. K-Means Segmentation

Clustering finds natural groupings among objects. K-means is a partitioning method that divides objects into k groups. It is an unsupervised grouping algorithm [9] that separates the data points into different classes based on their characteristics. The algorithm finds a centroid for every cluster and proceeds iteratively: the Euclidean distance is used to measure how far each object lies from each centroid, and objects are assigned to the nearest one. A different clustering forms in each iteration, and if the cluster quality improves, the previous centers are updated. Updating stops after a certain number of iterations, when no further change in cluster quality is found; the final centers are then used to form the final clusters. The k-means algorithm consists of the following steps [11]:

Algorithm: k-means

Input: m= number of clusters to be formed,

D= data set

Output: m clusters.

Steps:

(a) randomly choose m data points as the initial cluster centers;

(b) repeat

(c) calculate the distance between each data point and the cluster centers using the Euclidean distance;

(d) assign the data points to the nearest cluster center;

(e) recalculate the new cluster centers;

(f) until no update is found.
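The steps above can be sketched in NumPy as follows. This is an illustrative implementation, not the paper's code; the empty-cluster guard is an added assumption to keep the sketch robust:

```python
# A minimal NumPy sketch of k-means steps (a)-(f).
import numpy as np

def k_means(D, m, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # (a) randomly choose m data points as the initial cluster centers
    centers = D[rng.choice(len(D), m, replace=False)]
    for _ in range(iters):                                  # (b) repeat
        # (c) Euclidean distance from every point to every center
        dist = np.linalg.norm(D[:, None, :] - centers[None, :, :], axis=2)
        # (d) assign each data point to the nearest cluster center
        labels = dist.argmin(axis=1)
        # (e) recalculate each center as the mean of its cluster
        #     (a center is kept unchanged if its cluster becomes empty)
        new_centers = np.array([D[labels == j].mean(axis=0) if (labels == j).any()
                                else centers[j] for j in range(m)])
        # (f) stop when no update is found
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```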

The random selection of the initial cluster centers can affect the quality of the result. The k-means algorithm can be run multiple times to abate this effect.
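As a sketch of this multiple-run strategy, scikit-learn's KMeans restarts the algorithm from several random initializations through its n_init parameter and keeps the best run (assuming scikit-learn is available; this is not code from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

# two well-separated blobs of synthetic points
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])

# n_init=10 runs k-means ten times from different random centers and
# keeps the run with the lowest within-cluster sum of squares (inertia)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.inertia_)  # squared error of the best of the ten runs
```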

B. K-Medoids Segmentation

K-medoids stands for partitioning around medoids. It is a clustering algorithm that is a slight modification of k-means. K-medoids has the useful characteristic that the cluster centers are situated among the data points themselves [10]. Both k-means and k-medoids attempt to reduce the squared error, but the k-medoids algorithm is more robust to noise than k-means. In k-medoids, actual data points are selected as the medoids. A medoid can be defined as the object of a cluster whose average dissimilarity to all the other objects in the cluster is minimal. The basic idea of the algorithm is to first compute the k representative objects, called medoids. After obtaining the set of medoids, each object of the data set is assigned to the closest medoid; that is, an object is placed in a cluster when that cluster's medoid is closer than any other medoid. The k-medoids algorithm consists of the following steps [12]:
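The medoid definition above can be illustrated directly: the medoid of a cluster is the member whose average distance (dissimilarity) to all the members is minimal. A hypothetical sketch, not code from the paper:

```python
import numpy as np

def medoid(cluster):
    # pairwise Euclidean distances between all members of the cluster
    dist = np.linalg.norm(cluster[:, None, :] - cluster[None, :, :], axis=2)
    # the member with the smallest average distance to the others
    return cluster[dist.mean(axis=1).argmin()]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(medoid(pts))  # a central member of the set, not a computed mean
```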

Algorithm: K-Medoids

Input: m = number of clusters to be formed,

D = data set.

Output: a set of m clusters.

Steps:

(a) initially select m random points from the data set D as medoids;

(b) repeat

(c) associate each data point with the closest medoid, using any common distance metric;

(d) for each pair of a selected object (medoid) and a non-selected object, calculate the total swapping cost;

(e) swap the selected object with the non-selected object if the total swapping cost is negative;

(f) until no change occurs.
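The swap in step (e) is accepted when the total swapping cost is negative, i.e. when the swap lowers the total distance of the points to their closest medoids. A minimal NumPy sketch of this PAM-style loop (an illustration, not the paper's implementation):

```python
import numpy as np

def total_cost(D, medoids):
    # sum of every point's distance to its closest medoid
    dist = np.linalg.norm(D[:, None, :] - D[medoids][None, :, :], axis=2)
    return dist.min(axis=1).sum()

def k_medoids(D, m, seed=0):
    rng = np.random.default_rng(seed)
    # (a) initially select m random points of D as the medoids
    medoids = list(rng.choice(len(D), m, replace=False))
    improved = True
    while improved:                                   # (b) repeat
        improved = False
        best = total_cost(D, medoids)
        for i in range(m):                            # (d) every possible swap
            for cand in range(len(D)):
                if cand in medoids:
                    continue
                trial = medoids[:i] + [cand] + medoids[i + 1:]
                cost = total_cost(D, trial)
                if cost < best:                       # (e) swapping cost is negative
                    medoids, best, improved = trial, cost, True
    # (c) associate each data point with its closest medoid
    dist = np.linalg.norm(D[:, None, :] - D[medoids][None, :, :], axis=2)
    return dist.argmin(axis=1), medoids
```

Because the medoids are always actual data points, a single outlier cannot drag a center away from its cluster the way it can shift a k-means centroid, which is the robustness property noted above.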