This paper provides an overview of the phases, methods, and datasets used in modern Facial Emotion Recognition (FER). FER has been a crucial topic in computer vision and Machine Learning (ML) for decades. By using Convolutional Neural Networks (CNNs) to recognize facial expressions, valuable insight into people's emotional states can be gained, enabling improved services such as personalized healthcare, enhanced customer service, and more effective marketing. Automated FER can be applied in various settings, including healthcare, education, criminal investigations, and Human-Robot Interaction (HRI). The study includes a comparative analysis of the performance of several models, namely Visual Geometry Group 16 (VGG16), Residual Network 50 (ResNet50), MobileNet, a deep CNN, and the proposed pretrained VGG16 architecture. These models can be integrated into different systems for purposes such as obtaining feedback on products, services, or virtual learning platforms. Ultimately, FER using CNNs can help reduce bias in decision-making processes by providing an impartial assessment of a person's emotional state.
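To make the transfer-learning approach concrete, the sketch below shows one common way to adapt a pretrained VGG16 backbone for emotion classification in Keras. This is a minimal illustration, not the paper's exact architecture: the head layers, the seven-class output (as in FER2013), the 48×48 input size, and the choice to freeze the convolutional base are all assumptions made for the example.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

def build_fer_model(num_classes=7, input_shape=(48, 48, 3), weights="imagenet"):
    """Illustrative pretrained-VGG16 classifier for facial emotion recognition.

    Assumes 48x48 face crops (as in FER2013) replicated to three channels so
    they match VGG16's expected RGB input.
    """
    # Load the VGG16 convolutional base without its ImageNet classification head.
    base = VGG16(weights=weights, include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze pretrained features; train only the new head

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),                              # regularize the small head
        layers.Dense(num_classes, activation="softmax"),  # one unit per emotion class
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Freezing the base is typical when the target dataset is small; unfreezing the last convolutional block for fine-tuning at a lower learning rate is a common follow-up step.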