Emotion, a representation of the human state of mind, plays an important role in day-to-day human life and guides decision-making. Human emotion is typically inferred by observing a person's facial expressions and modulation of speech, and can be categorized as sad, angry, happy, fearful, and so on. Emotion recognition using Brain-Computer Interface (BCI) systems is beneficial for patients with paralysis, autism, or intellectual disabilities who cannot express their emotions the way others can. In this paper, after analyzing several data mining algorithms and various neural network models such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Bi-directional RNNs, it is proposed that Recurrent Neural Network-Long Short-Term Memory (RNN-LSTM) based emotion recognition using Electroencephalography (EEG) signals provides better results. The main purpose of this paper is to introduce models that outperform existing ones on the K-EmoCon dataset. Emotions are assessed along two dimensions, valence and arousal. The proposed RNN-LSTM model achieves a valence accuracy of 69.85% and an arousal accuracy of 45.07%, improving the accuracy of emotion detection on the K-EmoCon dataset. This approach achieves roughly 4% higher accuracy than existing models such as the Convolution-augmented Transformer.
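
To make the proposed architecture concrete, the sketch below shows a minimal LSTM-based classifier for one emotion dimension (valence or arousal) over windowed EEG features. It is an illustrative assumption, not the paper's exact configuration: the class name, layer sizes, input shape, and feature count are hypothetical, and PyTorch is assumed as the framework.

```python
import torch
import torch.nn as nn

class EEGEmotionLSTM(nn.Module):
    """Minimal sketch of an LSTM classifier for windowed EEG features.

    Hypothetical setup: each input is a (time_steps, n_features) window
    of EEG-derived features; the output is a binary high/low label for
    a single dimension (valence or arousal), trained separately per
    dimension. Shapes and sizes are illustrative, not from the paper.
    """

    def __init__(self, n_features=4, hidden_size=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)
        # Summarize the window with the final hidden state.
        return self.classifier(h_n[-1])

# Usage: a batch of 8 windows, each 128 time steps of 4 feature channels.
model = EEGEmotionLSTM()
logits = model(torch.randn(8, 128, 4))
print(logits.shape)  # torch.Size([8, 2]) -> high/low class scores
```

In this sketch, the final hidden state of the LSTM summarizes the temporal dynamics of the EEG window, and a linear head maps it to class scores; reported valence and arousal accuracies would come from two such classifiers evaluated on K-EmoCon labels.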