
DRAFT - Biometry and image processing





Theory

Several steps:

  • Getting the Region of Interest (ROI)
  • Image enhancement
  • Feature extraction
  • Storing / Matching

Getting the ROI

For any biometric modality, you need to isolate the attribute of interest (eye, finger, palm, face) from the rest of the image.

Most of the time, because acquisition is done in a controlled way, ROI extraction is a rule-based algorithm.

Hand Isolation

To capture a hand, the sensor is placed in a black box, so when pictures are taken, the hand appears light while the box stays black.

Therefore, we need to find a threshold value: all pixels above it are foreground (the hand), all pixels below it are background.

import cv2

img = cv2.imread("my_path_to_img.png", 0) # 0 for grayscale image

ker = 3  # small kernel, just enough to smooth local dark spots
img_blured = cv2.blur(img, (ker, ker))


Blurring is necessary: within the hand, there can be dark spots that would distort the shape. Blurring smooths these color variations.
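A minimal sketch of the thresholding step, continuing from the snippet above (the value 40 is an assumption to tune for your sensor):

# Everything brighter than the threshold is the hand, the rest is the black box
_, mask = cv2.threshold(img_blured, 40, 255, cv2.THRESH_BINARY)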

Hand Spectrum Calculation

In image processing, it is much easier to work on rectangular images than on arbitrary shapes. Therefore, rather than working on the full hand, we work on a square that is fully contained within the palm.

The most common way to define this area is to use the valley between the index and middle fingers and the valley between the ring and little fingers.
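One common way to find these valleys automatically is through the convexity defects of the hand contour. A rough sketch with OpenCV, assuming mask is the binary hand mask from the thresholding step and the depth threshold is tuned for your images:

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)        # largest blob = the hand

hull = cv2.convexHull(hand, returnPoints=False)
defects = cv2.convexityDefects(hand, hull)       # rows of (start, end, far, depth)

# The "far" points of the deepest defects are the valleys between the fingers
valleys = [tuple(hand[f][0]) for s, e, f, d in defects[:, 0] if d > 10000]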

Getting the ROI

Now that you know where the fingers are, you can compute the ROI. You draw a line between the two valleys:

  1. Find the two valleys of interest and get their coordinates.
  2. Draw a line between these two points and take its middle point.
  3. Compute the perpendicular vector, starting from the middle point.

With this vector and the distance between the two points, you can position and size the ROI square.

TODO

\[E_{ROI} = \alpha E_D\]

\[E_W = \beta E_D\]
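A minimal sketch of this construction, assuming p1 and p2 are the two valley points, that alpha scales the edge of the square, and that beta shifts it into the palm (these readings of E_ROI and E_W are assumptions, since the symbols are not defined above):

import numpy as np

def compute_roi_square(p1, p2, alpha=1.0, beta=0.5):
    # Hypothetical helper: returns (x0, y0, x1, y1) of the ROI square.
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    dist = np.linalg.norm(d)              # E_D: distance between the two valleys
    mid = (p1 + p2) / 2                   # middle point of the segment
    n = np.array([-d[1], d[0]]) / dist    # unit vector perpendicular to the segment
    center = mid + beta * dist * n        # push the square into the palm (E_W = beta * E_D)
    half = alpha * dist / 2               # E_ROI = alpha * E_D
    # The sign of n may need flipping depending on the hand orientation
    x, y = center
    return int(x - half), int(y - half), int(x + half), int(y + half)

x0, y0, x1, y1 = compute_roi_square(valleys[0], valleys[1])  # the two valleys of interest
roi = img[y0:y1, x0:x1]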

TODO: Do a .GIF of these steps

Checking the ROI

We can check that the ROI is correct: the ROI square should fully overlap the hand mask, i.e. every pixel of the square should fall on the hand.
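A quick sanity check, assuming mask is the binary hand mask and (x0, y0, x1, y1) the ROI square computed above:

roi_mask = mask[y0:y1, x0:x1]
# Valid if every pixel of the square lies on the hand
is_valid = cv2.countNonZero(roi_mask) == roi_mask.size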

Image Enhancement

Once the ROI has been isolated, we can apply some filters to improve the image quality.

First, background homogenization. Due to lighting conditions, some parts of the image may receive more light than others. Subtracting a heavily blurred copy of the image (an estimate of the background) evens this out.


ker = 31  # a large odd value, so the blur keeps only the low-frequency background
background = cv2.blur(img, (ker, ker))
img1 = cv2.subtract(img, background)  # saturating subtraction, avoids uint8 wrap-around

Contrast improvement.

The range of gray values in the ROI is quite limited, so when displaying the image it is hard for us to see anything in it. Rather than keeping gray values within, say, [150, 200], equalization filters stretch them over the whole scale.

The most basic one is histogram equalization. However, CLAHE (Contrast Limited Adaptive Histogram Equalization) usually gives better results.

CLAHE paper

import numpy as np

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img_enhanced = clahe.apply(img1)
# Stretch the result over the full [0, 255] range
img_enhanced = cv2.normalize(img_enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

Image enhancement stops here!

Binarization

It can be interesting to binarize the enhanced image: instead of grayscale, we keep only two classes of pixels, foreground and background. It also speeds up the processing and compresses the information, since we work with binary data.

Otsu Thresholding

Otsu's method looks for a threshold t that minimizes the weighted intra-class variance. It is a global threshold: a single value for the whole image.

\[t^* = \arg\min_t \left( w_0(t)\,\mathrm{Var}_0(t) + w_1(t)\,\mathrm{Var}_1(t) \right)\]
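With OpenCV, the Otsu threshold is found automatically when the THRESH_OTSU flag is set. A minimal sketch, assuming img_enhanced is the enhanced ROI from above:

# The 0 passed as threshold is ignored; ret holds the value chosen by Otsu's method
ret, binary = cv2.threshold(img_enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)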

Gaussian Adaptive Threshold

Rather than a single global threshold for the whole image, adaptive thresholding computes a threshold per pixel from a Gaussian-weighted local neighbourhood, so it adapts to local lighting conditions. TODO
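A sketch of the OpenCV call; the blockSize and C values are assumptions to tune:

binary = cv2.adaptiveThreshold(
    img_enhanced, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    cv2.THRESH_BINARY,
    31,   # blockSize: size of the local neighbourhood (odd)
    5,    # C: constant subtracted from the Gaussian-weighted local mean
)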

Vein filters

There are two filters that can help (a usage sketch follows the list):

  • The Sato filter detects tubular (vessel-like) structures.
  • The Frangi filter is another Hessian-based vesselness filter.
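Both are available in scikit-image. A minimal sketch, where the sigmas range is an assumption about the expected vessel widths:

from skimage.filters import sato, frangi

# black_ridges=True because veins appear darker than the surrounding tissue
vesselness_sato = sato(img_enhanced, sigmas=range(1, 6), black_ridges=True)
vesselness_frangi = frangi(img_enhanced, sigmas=range(1, 6), black_ridges=True)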

Feature Extraction

For vein recognition, there are several possible directions:

  • Local Binary Pattern
  • SIFT
  • Mean Curvature

SIFT detects and describes keypoints. It is sensitive to contrast; because of that, it is preferable to apply it to the binarized images (otherwise, the contrast-related parameters need tuning).
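A minimal OpenCV sketch (SIFT is in the main package since OpenCV 4.4):

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(binary, None)  # binarized ROI from above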

LBP looks at a local patch, compares each neighbour with the centre pixel, and encodes the result as a binary pattern that can then be classified.
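A minimal sketch with scikit-image; the P, R and method values are common defaults, not values from this post:

from skimage.feature import local_binary_pattern

# P neighbours on a circle of radius R; "uniform" patterns give compact histograms
lbp = local_binary_pattern(img_enhanced, P=8, R=1, method="uniform")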

Mean curvature aims to detect the vessel locations. Thanks to that, we get a skeleton of the vein network.
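A rough sketch of the idea, using the standard mean-curvature formula for the intensity surface and a simple positive-curvature rule for vein pixels (the thresholds and refinements of the published method are not reproduced here):

import numpy as np
from skimage.morphology import skeletonize

def mean_curvature_skeleton(img):
    f = img.astype(float) / 255.0
    fy, fx = np.gradient(f)                  # first derivatives
    fxy, fxx = np.gradient(fx)               # second derivatives
    fyy, _ = np.gradient(fy)
    h = (1 + fy**2) * fxx - 2 * fx * fy * fxy + (1 + fx**2) * fyy
    h /= 2 * (1 + fx**2 + fy**2) ** 1.5
    veins = h > 0    # valley-like pixels (dark veins); the cut-off value is an assumption
    return skeletonize(veins)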


