Facial Recognition Technology for Identifying Missing Individuals and Wanted Criminals

Fast and accurate identification and verification of individuals is always desirable. Face recognition, the machine recognition of a person's face through analysis of patterns in facial features, is becoming important for security and validation. It requires little interaction from users, which encourages high enrolment, and it is easily applicable with current technology, further adding to its importance. In this regard, we propose a face recognition technique based on the Multi-task Cascaded Convolutional Neural Network (MTCNN); this task was previously addressed with eigenfaces, but CNN-based approaches achieve better accuracy. This project proposes to use this technology to identify criminals who are on the run, based on their previous records. A report by the NCRB (National Crime Records Bureau) shows that 70% of crimes are committed repeatedly by the same criminals. These criminals can be identified through face recognition applied to images or video frames captured by cameras installed at various locations, and the same system can also be used to identify missing children. Such a system would help reduce crime and ensure security in our society.


INTRODUCTION
In an era characterized by rapid technological advancement, facial recognition technology has emerged as a powerful tool for addressing some of the most pressing challenges faced by law enforcement agencies and society as a whole. One of its most crucial applications lies in locating missing persons and identifying criminals. Our research project is dedicated to developing an age-invariant face recognition system that leverages the Multi-task Cascaded Convolutional Networks (MTCNN) algorithm for face detection, ultimately contributing to the resolution of cases involving missing individuals and criminal investigations. MTCNN, a state-of-the-art deep learning technique, has demonstrated exceptional capability in locating faces within images regardless of variations in age, pose, or lighting conditions. By employing MTCNN as the initial stage of our face recognition system, we establish a robust foundation for the subsequent processes of age-invariant feature extraction and matching. The ability to recognize and match faces consistently over time is a fundamental requirement for locating missing persons and identifying criminals. Faces undergo natural transformations as individuals age, which poses a significant obstacle for traditional face recognition systems. Criminals may deliberately alter their appearance, and missing persons, particularly those who have been absent for an extended period, may look substantially different from their last known images.
In this context, age-invariant face recognition is of paramount importance. It enables the system to maintain its accuracy and reliability in the face of age-related changes, ensuring that valuable leads are not lost due to the passage of time.
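The cascaded, coarse-to-fine design that underlies MTCNN can be illustrated with a minimal sketch: candidate face windows pass through successive stages, and each stage rejects candidates scoring below its threshold. The scorer function and thresholds below are hypothetical stand-ins for MTCNN's actual P-Net, R-Net, and O-Net classifiers, not its real implementation.

```python
# Toy sketch of cascaded filtering: each stage scores the surviving
# candidate windows and rejects those below its threshold, so cheap
# early stages discard most non-faces before the expensive final stage.

def cascade_detect(candidates, stages):
    """candidates: list of (window_id, cue); stages: list of (scorer, threshold)."""
    survivors = list(candidates)
    for scorer, threshold in stages:
        survivors = [c for c in survivors if scorer(c) >= threshold]
        if not survivors:
            break
    return survivors

def faceness(candidate):
    # Hypothetical score: in MTCNN this would be a CNN's face probability.
    return candidate[1]

# Hypothetical candidate windows, each carrying a precomputed "faceness" cue.
windows = [("w1", 0.95), ("w2", 0.40), ("w3", 0.75), ("w4", 0.10)]

stages = [(faceness, 0.3),   # stage 1: cheap and permissive (cf. P-Net)
          (faceness, 0.6),   # stage 2: stricter refinement (cf. R-Net)
          (faceness, 0.9)]   # stage 3: final, most selective (cf. O-Net)

print([w for w, _ in cascade_detect(windows, stages)])  # only "w1" survives
```

The point of the structure is that most candidates never reach the later, costlier stages, which is what makes the real cascade fast enough for real-time detection.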

METHODOLOGIES
In this section, we outline our methodology for conducting a comprehensive survey of the existing literature. Our goal is to provide a thorough analysis of the current state of research in this field. To achieve this, we employed a systematic approach that includes a comprehensive review of reputable databases, journals, and conferences. We set specific inclusion and exclusion criteria to ensure that the selected papers are relevant and significant. The collected data was rigorously analyzed, with a focus on identifying common trends, patterns, and recurring themes in the research. Additionally, we took an interdisciplinary approach by considering findings from diverse areas. Our methodology aims to maintain objectivity and impartiality. In the subsequent sections, we present the results of our analysis and discuss the key findings from the surveyed literature.

In [1], the authors develop two new strategies to improve the performance of cascaded CNNs for face detection. First, they propose the Inside Cascaded Structure (ICS), which constructs cascaded classification layers inside a single CNN to reject negative samples layer-wise, encouraging both lower and deeper layers to reject easy non-faces quickly. In particular, they propose a data-routing training approach to train the ICS end-to-end. In addition to the ICS, they jointly optimize body-part localization and face detection in a two-stream contextual CNN to improve the robustness of the model. Finally, they develop a unified framework combining these two components that achieves competitive performance on the challenging FDDB and WIDER FACE face detection benchmarks while maintaining real-time performance.

In [2], the experiments illustrate that face recognition systems based on MTCNN and LBP features can achieve a high recognition rate on embedded devices with limited data. The neural network greatly improves the accuracy and stability of the system, while the high extraction speed of LBP further increases its speed. The recognition rate of the system is 96.6% with 58 users in the face database, and the recognition speed is 8 fps when recognizing a single face and 5 fps when recognizing three faces simultaneously. Embedded face recognition systems thus offer the advantages of portability, high speed, and accuracy.

The paper [3] compares Haar Cascade, known for its speed but limited robustness in detecting tilted or partially covered faces, with a DNN detector, which can handle various face orientations and occlusions. The DNN is found to provide better performance, especially in challenging scenarios. To address the challenge of tilted or partially covered faces, the paper proposes an additional preprocessing step comprising scaling and alignment using facial landmarks. This ensures a frontal view, which can improve recognition accuracy. Deep-neural-network-based face detection offers significant advantages over the Haar Cascade, particularly in accuracy, robustness, and the ability to handle various face orientations and occlusions, while the additional preprocessing step improves alignment for recognition.

The paper [4] presents an innovative approach to rapidly and accurately aligning facial landmarks using an ensemble of regression trees. The method achieves a remarkable one-millisecond processing time while maintaining high precision, making it valuable for real-time applications in computer vision and facial analysis; its open-source implementation further encourages adoption and experimentation within the research community. The authors employ a cascaded framework in which multiple regression trees are applied in sequence: each tree refines the landmark positions based on the output of the previous tree, enabling progressive improvement in landmark alignment and more accurate results with each step.

The methodology in [5], in which the authors implement a high-precision facial recognition system using FaceNet, follows a systematic approach. Data collection: the first phase revolves around assembling a diverse dataset of images of the individuals of interest; this dataset serves as the foundational corpus on which the model is trained. Face detection and alignment: accurate facial recognition hinges on the quality of preprocessing; leveraging OpenCV in conjunction with the Multi-task Cascaded Convolutional Networks (MTCNN), the authors employ state-of-the-art techniques for face detection and alignment, ensuring optimal preparation for subsequent processing. Embedding faces with FaceNet: with a pre-trained FaceNet model, faces are represented in a 128-dimensional Euclidean space; this embedding space encapsulates the unique facial characteristics of each individual, forming a robust basis for accurate recognition. Creating a face embedding database: an integral step involves the systematic accumulation of these embeddings in a structured database, with each set of embeddings linked to the respective individual's name, establishing a comprehensive database that serves as the cornerstone of the recognition system.

In [6], the authors enhance the effectiveness of MIA by integrating pre-trained deep generative models, a concept referred to as deep MIA. This integration strengthens the alertness to potential privacy breaches. The authors optimize the efficiency of deep MIA through two key strategies: (i) integration of an efficiently pre-trained deep face generator, denoted α-GAN, and (ii) initialization of the MIA gradient-descent iterations with a face-based seed vector. These steps collectively enhance the attack's effectiveness. The authors also introduce the concept of search-domain transfer to narrow the MIA search space efficiently: instead of operating in the general image space, the focus is on face feature vectors, achieved by integrating a deep face generator into the MIA gradient-descent iteration loop. The authors validate the ability of MIA to generate natural face-image clones of a target individual from a CNN-based face recognition system operating under the semi-white-box scenario, and assess the impact of deep generative models on MIA under the same conditions.

In [7], the methodology provides a comprehensive overview of the authors' approach to utilizing CNNs in face detection for criminal identification. The system shows potential for enhancing public safety, particularly in areas with heightened crime activity, and its capabilities rest on CNN technology, renowned for its effectiveness in image recognition. The authors adopted a multi-step approach: a sequential model comprising two Conv2D layers, each with 32 filters of kernel size 3, was used for feature extraction, followed by two-dimensional max-pooling layers to further refine the extracted features. The Rectified Linear Unit (ReLU) activation function was employed in both the convolution and pooling layers to introduce non-linearity and enhance the learning capability of the model. The convolution results were then flattened and fed into three dense layers of 512, 128, and 5 units, respectively, with the first two dense layers using ReLU activation. The Adam optimizer was employed in conjunction with categorical cross-entropy as the log-loss function to fine-tune the model. The authors retrieved relevant information, including predicted names, from a MySQL database; this information was crucial in the final stages of the system's operation.

In [8], the authors propose a novel face recognition system designed specifically for criminal investigations. The system identifies suspects by comparing live images, still pictures, and video frames procured from diverse sources. To implement it, the authors leverage two prominent libraries, Dlib and Haar Cascade, renowned for their effectiveness, and integrate a Convolutional Neural Network (CNN) architecture as the facial recognition algorithm. The CNN model is trained on a dataset of criminal facial images to discern the distinctive features that differentiate a criminal's face from a non-criminal's. The results are presented in terms of several key metrics, including execution time, confidence level, accuracy, and image quality, and a comparative analysis of the Haar Cascade and Dlib libraries is performed. The numerical outcomes from the simulations indicate that the Haar Cascade library is more effective than the Dlib library.

In [9], the authors propose a system for the classification of individuals, emphasizing the criticality of accurately classifying criminals or suspects, since an erroneous classification can have grave consequences. Facial images are first categorized based on facial expression and age; consequently, neutral images and images of elderly individuals and children were excluded from consideration. The authors attempted to mitigate potential biases by using Haar Cascade for facial cropping, but this was observed to have a limited impact on the results. To address this limitation, the authors suggest future work on expanding the dataset, accounting for the diverse factors mentioned above, and exploring the incorporation of additional personality traits or features into the classification process.

In [10], the study presents a novel face recognition system for criminal investigation, RFIS, which identifies suspects by comparing live images, still pictures, and video obtained from various sources. RFIS is developed using two prominent libraries, Dlib and Haar Cascade, chosen for their effectiveness, and uses a Convolutional Neural Network (CNN) architecture as the facial recognition algorithm, trained on a dataset of criminal facial images to learn the features that distinguish a criminal's face from a non-criminal's. The results are expressed in terms of execution time, confidence, accuracy, and image quality, and a comparison of the Haar Cascade and Dlib libraries is conducted; numerical results obtained from simulations suggest that Haar Cascade is more effective than Dlib. Notable benefits of the study are the increased reliability and accuracy achieved by combining live-picture, still-picture, and video-image recognition.

In [11], the authors propose the development of an online platform tailored to the needs of law enforcement agencies, ensuring seamless and effective real-time human recognition. The platform offers several key features, including the ability to (a) input, modify, and remove user and criminal profiles, and (b) search for criminals by their images via a live camera feed and subsequently identify them. The proposed approach to facial recognition entails utilizing and extending the Haar Cascade algorithm, widely acknowledged as efficient for real-time applications. For criminal identification, the system relies on image-pattern recognition, comparing the provided image of a suspect with snapshots of identified faces extracted from the live camera feed. To enhance usability, the platform offers a live-feed section equipped with an array of video filters, empowering users to select the most suitable filter for the specific environmental conditions and thereby optimizing the recognition process.

In [12], the study was conducted at the Saveetha School of Engineering. It compares two sample groups, CNN classifiers and unsupervised machine learning, with 400 samples in each group and a pretest value of 0.8. The segmentation dataset comes from vehicle retail, using 2000 Kaggle records along with comments from many companies and study sites; this is helpful for recognizing faces from diverse angles and for training on real-time images [10]. Captured images, including images of criminals, are stored in an image file, and a dataset for unsupervised machine learning is employed. The implementation uses Python 3, with the code run from the Anaconda prompt.

In [13], the system searches an image or video feed for regions containing faces in order to identify a specific person. Faces are detected before identification using the Haar Cascade frontal-face detection model (an XML file), after the image has been converted to grayscale. When a face is identified, the system places it in a conventional orientation and position, usually with the mouth, nose, and eyes in the same relative locations. The system then gathers important facial traits, such as the separation between the eyes, the shape of the nose, and the curvature of the lips, and stores a unique representation of the face created from these features. To find a match, face matching compares the extracted features against a database of recognized faces; this may entail comparing the features to a single image, or to several images taken under various lighting conditions and angles.

In [14], the authors target Model Inversion Attacks (MIA) on a deep face recognition system operating under a semi-white-box scenario, in which the structural details and learned parameters of the recognition model are available but any data regarding the targeted user identities remains inaccessible. The comparative analysis focuses on MIA techniques under these same semi-white-box conditions applied to a deep model. The authors propose an MIA technique termed α-GAN MIA initialized by a face-based seed (α-GAN-MIA-FS), which is pitted against a standalone MIA on a CNN-based deep face recognition system; it is additionally compared with α-GAN-MIA under random initialization and with MIA integrated with DC-GAN. The comparison with randomly initialized α-GAN-MIA underscores the pivotal role played by initialization with a face-based seed image in deep MIA on a face recognition system, while the comparison with DC-GAN-integrated MIA underscores the impact of appropriately training the pre-trained deep face generative model on the efficiency of deep MIA. These comparisons were conducted with both the DC-GAN and α-GAN face generators well tuned.

In [15], the paper suggests an identity conditional block (ICB) to achieve an identity-level aging/rejuvenation pattern, together with a weight-sharing strategy to enhance the age smoothness of synthesized faces, in order to address the problems brought about by one-hot encoding. More precisely, the suggested ICB learns an identity-level aging/rejuvenation pattern using the identity-related feature from AFD as input. The paper then suggests a weight-sharing technique that enhances the age smoothness of synthesized faces by sharing some convolutional filters among nearby age groups. This design is based on the observation that faces change progressively over time, allowing shared filters to capture common patterns of aging and rejuvenation among nearby age groups. Note that, to lower the computational cost, Xid is reduced from 512 to 128 dimensions using 1 × 1 convolutions.

Researchers in [16] provide a unique Decorrelated Adversarial Learning (DAL) method based on linear feature factorization to regularize the learning of the decomposed features, with the aim of obtaining age-invariant, identity-preserving features for age-invariant face recognition (AIFR). To the best of their knowledge, decorrelated adversarial feature learning had not previously been applied to AIFR.

In [17], the survey provides a comprehensive overview of methodologies and algorithms for achieving real-time and accurate facial recognition. The evolution from Haar features to Convolutional Neural Networks (CNNs) in face detection highlights the pursuit of higher accuracy despite increased computational cost. Notably, modern detectors predominantly rely on CNN architectures, reflecting ongoing research efforts to optimize accuracy and computational efficiency.
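As a concrete illustration of the LBP features used alongside MTCNN in [2], the following is a minimal NumPy sketch of basic 3×3 Local Binary Patterns and the normalized histogram descriptor typically built from them. It is an illustrative baseline, not necessarily the exact LBP variant used in that paper.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 Local Binary Patterns: each interior pixel becomes an
    8-bit code built from threshold comparisons with its 8 neighbours."""
    g = np.asarray(gray, dtype=np.int32)
    h, w = g.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = g[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Set this bit wherever the neighbour is >= the center pixel.
        out |= (neighbour >= center).astype(np.uint8) << bit
    return out

def lbp_histogram(gray):
    """256-bin normalised histogram of LBP codes, usable as a texture
    descriptor for a face region (or for each cell of a face grid)."""
    codes = lbp_image(gray)
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()
```

In practice the face crop is usually divided into a grid of cells and the per-cell histograms are concatenated, which preserves coarse spatial layout while keeping extraction very fast, as reported in [2].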
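The classifier head described in [7] ends in a 5-unit output layer trained with categorical cross-entropy. A minimal NumPy version of that loss, applied to a hypothetical five-class logit vector (the values are illustrative, not from the paper), might look like this:

```python
import numpy as np

def softmax(logits):
    """Convert raw logits to class probabilities, row-wise."""
    z = logits - logits.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def categorical_cross_entropy(probs, one_hot):
    """Mean negative log-likelihood of the true classes."""
    eps = 1e-12  # guard against log(0)
    return float(-np.mean(np.sum(one_hot * np.log(probs + eps), axis=-1)))

# Hypothetical logits from a 5-unit output layer, as in [7]; the true
# class is the first one.
logits = np.array([[2.0, 0.5, 0.1, 0.1, 0.1]])
labels = np.array([[1.0, 0.0, 0.0, 0.0, 0.0]])
probs = softmax(logits)
loss = categorical_cross_entropy(probs, labels)
```

During training, an optimizer such as Adam adjusts the network weights to reduce this loss, which drives the probability of the correct class toward 1.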

ANALYSIS
The synthesis and analysis section of a survey paper offers a concise summary of the surveyed literature, highlighting common trends, evaluating the pros and cons of methods, and suggesting future research directions.
In this section, we review research spanning the past two decades related to the topic. We examine common trends, patterns, and recurring themes in the surveyed research papers, evaluate the strengths and weaknesses of the employed methodologies, explore the interconnections between the surveyed papers, discuss the practical implications of the findings and methods in real-world scenarios, and consider potential avenues for future exploration.
The years 2000-2010 marked a significant epoch for face recognition systems, witnessing notable progress and challenges. This period saw the emergence of increasingly sophisticated technology capable of intricate facial-feature analysis for identity verification. The introduction of initiatives like the Face Recognition Vendor Tests (FRVT) by the National Institute of Standards and Technology (NIST) and the Face Recognition Grand Challenge (FRGC) played pivotal roles in advancing the field. The FRVT provided crucial evaluations of commercial facial recognition systems, yielding vital insights for law enforcement and government agencies, while the FRGC, launched in 2004, spurred progress by evaluating the latest face recognition algorithms, resulting in substantial gains in accuracy. One of the most striking advancements was the remarkable increase in the accuracy of face recognition systems, attributable to the refinement of algorithms, the adoption of advanced machine learning techniques, and the integration of neural networks; the leap in accuracy was quantified, underscoring the substantial improvement over algorithms developed in earlier years. Amid these advancements, however, a host of challenges persisted. Sensitivity to lighting conditions emerged as a primary concern, with variations in lighting potentially impacting recognition accuracy. Variability in pose and viewpoint presented another significant challenge: accurately identifying individuals became more complex when their orientation or angle changed. Additionally, obstructions or occlusions of a person's face posed a hurdle, emphasizing the need for unobstructed views for accurate identification. Handling aging and facial changes over time was identified as a critical challenge: as individuals' appearances evolved, systems had to adapt to ensure accurate recognition. The quality and resolution of images emerged as pivotal factors: higher-quality images were imperative for precise recognition, while lower-quality images presented difficulties. As databases grew in size, scalability became a concern, and efficiently managing large volumes of data was paramount for system performance. Moreover, the privacy and legal concerns associated with facial recognition technology warranted careful consideration, underscoring the ethical dimensions of its deployment. In conclusion, the years 2000-2010 witnessed a dynamic interplay between technological advancement and persistent challenges in the realm of face recognition systems. The period bore witness to unprecedented progress, with heightened accuracy and widespread adoption, while the remaining challenges highlighted the need for ongoing research and development to ensure that the technology continues to evolve responsibly and ethically.
The decade from 2010 onward witnessed a substantial evolution in face recognition systems, marked by discernible trends and notable research directions. During this period, the preeminent trend was the widespread adoption of deep learning techniques, with particular emphasis on Convolutional Neural Networks (CNNs); this shift underscored the efficacy of deep learning in achieving remarkable levels of accuracy in face recognition. Researchers increasingly viewed face recognition as a pivotal biometric identification tool, leveraging the distinct facial features of individuals for reliable differentiation, and this emphasis on biometrics played a vital role in the development of robust face recognition systems. The applications of face recognition systems diversified significantly, finding utility across a broad spectrum of domains, from everyday uses such as smartphone unlocking to critical areas such as criminal identification and home security; this versatility became a prominent focal point of research. The comparative evaluation in [3] of Haar, modified Haar, and the OpenCV DNN module revealed distinctive strengths and trade-offs: the DNN exhibited superior accuracy and face detection capability, albeit at a slower processing speed than Haar, whereas Haar demonstrated commendable processing speed but fell short in the detection cases where the DNN excelled. Notably, there were substantial disparities in precision, accuracy, and F1 values, particularly in recall, where the difference in average values was a notable 9%. This comprehensive assessment provides valuable insights for optimizing modern face recognition system implementations. Researchers dedicated substantial effort to enhancing the accuracy of face recognition systems, exploring techniques such as feature extraction, landmark detection, and ensemble learning to optimize recognition performance [11]. Additionally, benchmark datasets such as Labeled Faces in the Wild (LFW) were widely adopted for evaluating and comparing methodologies, providing a standardized framework for assessing the performance of face recognition algorithms. Privacy concerns and ethical considerations loomed large in the discourse surrounding face recognition systems: researchers were acutely aware of the need to address data protection, informed consent, and potential biases within the algorithms, considerations essential to the responsible development and deployment of the technology. In terms of methodologies, the decade saw a dichotomy between constrained and unconstrained face recognition: constrained recognition, conducted in controlled environments, demonstrated higher accuracy, while unconstrained recognition faced challenges from variations in lighting, pose, and expression. The choice between 2D and 3D face recognition also emerged as a critical consideration: while 2D recognition offered easy deployability, it was susceptible to pose and lighting variations, whereas 3D recognition, capitalizing on depth information, exhibited superior accuracy and robustness, albeit requiring specialized hardware. Looking ahead, future research is poised to focus on several key areas. Deep learning techniques, especially CNNs, are expected to undergo continued refinement toward even higher accuracy and resilience in face recognition systems. Cross-domain recognition, capable of performing reliably across diverse environments, will be a critical frontier, and addressing privacy concerns while fortifying the security of face recognition systems will remain paramount. Research efforts will also target real-time processing, large-scale scalability, and resilience against adversarial attacks, and the integration of different biometric modalities, such as facial features, voice, and gait, will be explored to enhance recognition accuracy. Lastly, ethical considerations, including bias mitigation and countering potential misuse, will guide the ethical development and deployment of face recognition systems. In summation, the 2010s constituted a pivotal decade for face recognition systems, characterized by significant advancements and a forward-looking research agenda. The convergence of deep learning, biometric emphasis, diverse applications, accuracy enhancement, rigorous evaluation practices, and ethical considerations has set the stage for a promising future in face recognition technology.

PROPOSED SOLUTION
Based on our analysis of existing systems, we propose to develop an application that uses the MTCNN algorithm for face detection, Keras for feature extraction, and cosine similarity to compare images within a database. This enhancement builds upon previous solutions: we not only make recognition age-invariant but also increase its accuracy by making it robust to external accessories such as spectacles and hats, as well as to changes in facial expression and lighting. We use an open-source face recognition dataset, which includes photos of missing persons and known criminals, to train the model. Collected images are preprocessed by normalizing them for lighting and color consistency. To enhance age invariance, we include an age estimation component within the system; this helps estimate the age of the recognized person and match it against the age range of the missing person or criminal. The MTCNN algorithm, known for its accuracy in detecting faces in images, is implemented for face detection. A facial landmark detection algorithm is used to locate key facial features, which is crucial for aligning and normalizing faces for recognition. Age-invariant features are then extracted from the detected and landmarked faces using Keras. Real-time image processing is implemented to capture, preprocess, detect, and recognize faces from live camera feeds or images, and an alerting system is developed to notify law enforcement or the relevant authorities when a match is found.
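The comparison step of the proposed system, matching a query embedding against the enrolled database by cosine similarity, can be sketched as follows. The 4-dimensional vectors and the 0.7 acceptance threshold are illustrative assumptions standing in for real 128-dimensional embeddings and a tuned operating point.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query, database, threshold=0.7):
    """Return (name, score) for the closest enrolled embedding,
    or (None, score) when no entry clears the acceptance threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Hypothetical 4-D embeddings standing in for 128-D feature vectors.
db = {"person_a": [0.9, 0.1, 0.0, 0.1],
      "person_b": [0.0, 0.8, 0.6, 0.0]}
name, score = best_match([0.88, 0.12, 0.05, 0.1], db)
```

The threshold trades false alarms against missed matches; in a deployed system it would be calibrated on a validation set before the alerting component is allowed to notify authorities.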

CONCLUSION
This survey of CNN-based (Convolutional Neural Network) face recognition offers a thorough summary of the most recent developments, approaches, and methods in the field. Deep learning techniques have led to notable improvements in the accuracy of CNN-based facial recognition; these algorithms are now very good at identifying faces under a variety of conditions, such as differing lighting, expressions, and pose. Numerous practical uses, such as security, access management, human-computer interaction, and customized user experiences, are possible with CNN-based facial recognition, and its reliability and robustness have made it the preferred option for many applications. In conclusion, CNN-based facial recognition has advanced significantly in recent years and has a wealth of potential uses. However, the correct application of this technology is crucial, since it must respect privacy rights and ethical principles. With continued study and advancement in this area, face recognition technology will likely keep evolving, becoming even more dependable, secure, and socially responsible.