The human face conveys rich information about a person's identity, expressions, and emotions. In today's world, every individual in society wants to be secure from unauthorized access. To provide this security, "facial recognition" has come into the picture and taken on the challenging role of detecting a face accurately without false identities. To increase the efficiency of face recognition, histogram-based facial recognition is chosen, in which the face region is fragmented into a number of regions, histogram values are extracted from each region, and the histograms are linked together into a single vector. This vector is compared for similarities between facial images and provides an efficient outcome.
The human face conveys rich information about a person's identity, expressions, and emotions. Humans recognize one another by their faces; every person has a unique facial identity and set of features, so every individual differs from every other. But how can a computer recognize an individual's face? Face recognition has become a most challenging problem: detecting a face accurately (Face Recognition, n.d.). Face representation is used for detecting a face using successive algorithms, where the individual's face is supplied as input to the feature extraction process. The second phase, feature extraction, concentrates mainly on the facial features, expressions, and emotions present in the face. Feature extraction helps in solving various challenges, such as light variation, pose variation, facial expressions, emotions, and inter-class similarities (Face recognition using OpenCV and Python, n.d.). The extracted features play the most prominent role in recognizing a face because they are the basis for comparison between the input and the images in the database. In the feature extraction process, histogram values are generated. Finally, in the classification phase, a comparison is performed between the images in the database based on the histogram values for every pixel. Facial recognition is widely used to provide security in many sectors such as bank lockers, smart devices, and surveillance.
Open Source Computer Vision (OpenCV) is a popular computer vision library started by Intel in 1999. OpenCV is released under a BSD license. In 2008, Willow Garage took over support, and OpenCV 2.3.1 now comes with programming interfaces to C++, Python, and Java and supports Windows, Linux, Mac OS, iOS, and Android. OpenCV was designed for computational efficiency with a strong focus on real-time applications. Written in optimized C/C++, the library can take advantage of multi-core processing (Local binary pattern with OpenCV and Python, n.d.). Enabled with the Open Computing Language (OpenCL), it can take advantage of the hardware acceleration of the underlying heterogeneous compute platform (Siswanto et al., 2014). The cross-platform library sets its focus on real-time image processing and includes patent-free implementations of the latest computer vision algorithms.
The existing system does not cope with light variations and temporary changes in the face, so faces are detected with false identities and less accurate results.
To handle light variations when detecting a face, the authors have introduced the Local Binary Pattern Histogram (LBPH) algorithm (LBP algorithm, n.d.). In this algorithm, each image is analyzed independently, and in the feature extraction phase a histogram value is generated and stored for every pixel in the image.
Although there are many processes to identify a face, every process lacks some feature and does not produce good outcomes (Li and Jain, 2005). Most processes fail to detect the face accurately due to light variations, temporary changes, pose variations, and changes in expressions and emotions. The LBPH algorithm is therefore introduced to achieve the goal of recognizing the face with more efficient results. The principle used in face recognition is shown in Figure 1.
Figure 1. Face Recognition Principle
The face recognition principle involves three phases, namely face representation, feature extraction, and classification. The face representation process checks whether a face is present in the image or not (Arubas, 2013). If a face is detected, it is marked with a square and passed as input to the next phase. From the detected face, the important features are extracted and histogram values are generated for every pixel using the grey scale values of its neighboring pixels. The histogram values are linked together to form a single vector, and this vector is used to classify the image against the trained data.
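As an illustration of the face representation phase, the following minimal Python sketch detects a face and marks it with a square using OpenCV's bundled Haar cascade detector; the image file names are assumptions made only for this example.

import cv2

# Load the pre-trained frontal face detector shipped with OpenCV
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("person.jpg")               # assumed input image name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces; each detection is returned as (x, y, width, height)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

for (x, y, w, h) in faces:
    # Mark the detected face region with a square
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("person_detected.jpg", image)      # assumed output name

The cropped grey scale region inside the square is what the feature extraction phase operates on.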
Feature extraction is the process of determining and extracting the most useful areas of every image separately (Local Binary pattern, n.d.). There are many algorithms to extract these features; the Local Binary Pattern Histogram is one of the most efficient (Local Binary patterns, n.d.). The detected face is divided into many regions, as shown in Figure 2.
Figure 2. Division of Regions
For every divided region, a histogram value is generated by using the neighboring threshold grey scale values as shown in Figure 3.
Figure 3. Generation of Histogram Value
The threshold histogram value is generated by taking the grey scale value of the middle pixel as the pivot pixel; the pixels surrounding the pivot pixel are its neighboring pixels (Ahmad et al., 2013). In the next step, if the grey scale value of a neighboring pixel is greater than that of the pivot pixel, it is represented as 1, otherwise 0. Finally, the binary digits of the neighboring pixels are read in a circular order, the resulting binary value is converted to decimal, and it is stored at the pivot pixel as its histogram value. These histogram values are calculated in the same way for every pixel in the image and are fused together to form a single vector, as shown in Figure 4. The final vector is used to compare images.
Figure 4. Single Histogram Vector
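The feature extraction described above can be sketched directly in Python with NumPy. The grid size (8 x 8 regions) and the basic 3 x 3 neighborhood are assumptions chosen for illustration, not parameters taken from the text.

import numpy as np

def lbp_image(gray):
    # Compute the 3x3 LBP code (0-255) for every interior pixel.
    gray = gray.astype(np.int32)
    center = gray[1:-1, 1:-1]
    # Neighbouring pixels taken in a fixed circular order around the pivot
    neighbours = [
        gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
        gray[1:-1, 2:], gray[2:, 2:],    gray[2:, 1:-1],
        gray[2:, :-2],  gray[1:-1, :-2],
    ]
    codes = np.zeros_like(center)
    for bit, neighbour in enumerate(neighbours):
        # Bit is 1 when the neighbour is greater than the pivot pixel
        # (many implementations use >= instead of >)
        codes |= (neighbour > center).astype(np.int32) << bit
    return codes

def lbph_vector(gray, grid_x=8, grid_y=8):
    # Divide the LBP image into regions and concatenate their histograms.
    codes = lbp_image(gray)
    h, w = codes.shape
    histograms = []
    for i in range(grid_y):
        for j in range(grid_x):
            region = codes[i * h // grid_y:(i + 1) * h // grid_y,
                           j * w // grid_x:(j + 1) * w // grid_x]
            hist, _ = np.histogram(region, bins=256, range=(0, 256))
            histograms.append(hist)
    return np.concatenate(histograms)

Two face images can then be compared by measuring a distance (for example, chi-square or Euclidean) between their concatenated histogram vectors; the smaller the distance, the more similar the faces.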
Detecting a human face is not a simple task, and recognizing a face accurately has become a major challenge. To implement this face recognition algorithm, we need to know the requirements, the algorithm, and the flowchart.
The requirements for the face recognition are described below.
6.1.2 Hardware Requirements
Figure 5. Training of Dataset
1. Generate a dataset.
2. For each image in the dataset:
3. Generate a histogram value for each pixel.
4. End For.
5. Find the highest histogram value from each image.
6. Combine them into a single vector.
7. Compare the vector for face recognition testing.
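A hedged sketch of how the training steps listed above might be realized with the LBPH recognizer shipped in opencv-contrib-python is given below; the dataset directory, the model file name, and the file naming convention (user.<label>.<sample>.jpg) are assumptions made for illustration only.

import os
import cv2
import numpy as np

def train_lbph(dataset_dir="dataset", model_path="trainer.yml"):
    faces, labels = [], []
    for name in os.listdir(dataset_dir):
        # Assumed naming convention: user.<label>.<sample>.jpg
        if not name.endswith(".jpg"):
            continue
        label = int(name.split(".")[1])
        img = cv2.imread(os.path.join(dataset_dir, name), cv2.IMREAD_GRAYSCALE)
        faces.append(img)
        labels.append(label)

    # Create the LBPH recognizer (requires the opencv-contrib build)
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.train(faces, np.array(labels, dtype=np.int32))
    recognizer.write(model_path)   # persist the trained histograms
    return recognizer

Internally, the recognizer stores one histogram model per training image and compares a test histogram against them, which corresponds to the comparison step in the flowchart of Figure 6.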
The flowchart for face recognition is shown in Figure 6.
Figure 6. Flowchart for Face Recognition
Recognizing a face involves three phases of implementation, namely dataset generation, dataset training, and face detection. Dataset generation involves capturing the faces of the individuals who need to be identified for authentication purposes. These faces are detected automatically using successive algorithms that are available as open source. After the dataset is generated, it is trained using the LBPH algorithm: every image is divided into many regions, and the corresponding histogram values are generated using the neighboring pixels of every region. These histogram values are combined to form a single histogram vector, and the highest histogram value is used for comparison with other images to recognize the face. The input image for face recognition is shown in Figure 7 and the output is displayed in Figure 8.
Figure 7. Input Image
Figure 8. Output Image
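The dataset generation phase described above might be implemented along the following lines; the sample count, the output directory, and the file naming are assumptions for illustration and should be adapted to the actual setup.

import os
import cv2

def generate_dataset(user_id, samples=20, out_dir="dataset"):
    os.makedirs(out_dir, exist_ok=True)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    camera = cv2.VideoCapture(0)          # default webcam
    count = 0
    while count < samples:
        ok, frame = camera.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            count += 1
            # Save only the cropped grey scale face region for training
            cv2.imwrite(f"{out_dir}/user.{user_id}.{count}.jpg",
                        gray[y:y + h, x:x + w])
    camera.release()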
Face detection involves capturing an image from the webcam, which is sent as input to the LBPH classifier; the histogram generated from it is compared with the trained dataset. Finally, the face is detected without false identities.
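A possible sketch of this detection phase, assuming a model trained and saved as in the previous sketch (the file name trainer.yml is a hypothetical choice), is shown below.

import cv2

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("trainer.yml")            # trained model from the previous step
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

camera = cv2.VideoCapture(0)
ok, frame = camera.read()
camera.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        # predict() returns the closest label and a distance-style confidence
        # (lower values indicate a better match)
        label, confidence = recognizer.predict(gray[y:y + h, x:x + w])
        print(f"Recognized label {label} with confidence {confidence:.1f}")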
The advantages of histogram based face recognition are:
1. Each image is analyzed independently, so a face can be recognized despite light variations.
2. Temporary changes in the face do not prevent accurate recognition.
3. The per-region histograms are combined into a single vector, making comparison between images straightforward.
Face recognition has become a major challenge in providing security. Many algorithms exist to recognize faces, but many result in false detections and errors, and many do not detect faces under light variations or other changes in the face. Using the LBPH algorithm for feature extraction gives more accurate results when recognizing a face with light variations and temporary changes in the face.