Deciding which music to listen to from the huge collection of available options is often confusing. Depending on the user's mood, recommendation frameworks exist for domains such as music, food, and shopping. The main purpose of this music recommendation system is to provide users with suggestions that match their tastes. By analysing the user's facial expressions, the system can infer the user's current emotional and psychological state. Music and video are areas in which there is a great opportunity to offer customers a wide range of choices based on their preferences and recorded behaviour. It is well known that people use facial expressions to express more clearly what they want to say and the context in which they say it. More than 60% of users feel that, at any given time, their song library contains too many songs to find the one they want to play. A recommendation system can therefore help users decide which music to listen to and reduce the stress of choosing. Users no longer have to waste time searching through songs: the system identifies the kind of track that best fits the user's mood and presents suitable songs accordingly. Music influences emotion, which in turn affects mood. Books, movies, and television (TV) shows are other such media, but unlike these, music conveys its message in the moment. It can lift us when we feel low: listening to sad songs tends to lower mood, while listening to happy songs tends to raise it. This music recommendation model therefore aims mainly to improve the user's mood by detecting the user's facial expression and recommending preferred songs according to that expression. The user's image is captured using a webcam; based on the mood or feeling detected from this picture, appropriate songs are selected from the user's playlist and displayed to meet the user's requirements.
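A minimal sketch of this capture-classify-recommend pipeline is shown below. It assumes OpenCV for webcam capture and face detection; the `detect_emotion` classifier stub and the mood-to-playlist mapping are illustrative assumptions only, not the actual model or playlists used in this work.

```python
# Sketch: webcam capture -> emotion label -> playlist lookup.
# Assumes OpenCV is available; the emotion classifier and playlists
# below are placeholders, not the system's real components.
import cv2

# Hypothetical mood-to-songs mapping; in practice this would be built
# from the user's own playlist.
PLAYLISTS = {
    "happy": ["Song A", "Song B"],
    "sad": ["Song C", "Song D"],
    "neutral": ["Song E"],
}


def capture_frame(camera_index: int = 0):
    """Grab a single frame from the webcam."""
    cap = cv2.VideoCapture(camera_index)
    try:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("Could not read a frame from the webcam")
        return frame
    finally:
        cap.release()


def detect_emotion(frame) -> str:
    """Placeholder for the facial-expression classifier.

    A real system would run a trained model on the detected face region
    and return a label such as 'happy', 'sad', or 'neutral'.
    """
    # Locate a face with OpenCV's bundled Haar cascade as a first step.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "neutral"
    # The expression classification itself is out of scope for this sketch.
    return "happy"


def recommend_songs() -> list:
    """Capture an image, infer the mood, and return matching songs."""
    frame = capture_frame()
    mood = detect_emotion(frame)
    return PLAYLISTS.get(mood, PLAYLISTS["neutral"])


if __name__ == "__main__":
    print(recommend_songs())
```

The key design point is the separation of the three stages: image capture, expression classification, and playlist lookup, so that the classifier or the song source can be swapped without changing the rest of the pipeline.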