Speech-driven 3D face animation based on deep learning and neural networks has emerged as a promising aid for persons with disabilities. Nonverbal behavioural cues, such as facial expressions, remain intact and convey what we are thinking, doing, or reacting to. Expressive facial animation is also an essential feature of computer-based films and digital games. The system first captures audio input from the user and extracts the corresponding audio features. The expressions derived from these features are then combined with an intermediate 3D model, and a neural renderer produces the final result. This paper presents an overview of the complete implementation.
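The pipeline described above can be sketched in simplified form. The snippet below is a minimal illustration, not the paper's implementation: the function names are hypothetical, per-frame log-energy stands in for richer audio features, a single linear layer with a sigmoid stands in for the speech-to-expression network, and linear blendshape mixing stands in for combining expressions with the intermediate 3D model (the neural rendering stage is omitted).

```python
import numpy as np

def extract_audio_features(audio, frame_len=400, hop=160):
    """Frame the waveform and compute per-frame log-energy features
    (a stand-in for richer features such as MFCCs)."""
    n_frames = 1 + (len(audio) - frame_len) // hop
    frames = np.stack([audio[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    return np.log(np.sum(frames ** 2, axis=1) + 1e-8)[:, None]  # (n_frames, 1)

def predict_expressions(features, weights, bias):
    """Map audio features to expression (blendshape) coefficients in [0, 1];
    a single linear layer plus sigmoid stands in for the trained network."""
    logits = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-logits))

def deform_model(neutral_verts, blendshapes, coeffs):
    """Combine expression coefficients with the intermediate 3D model:
    vertices = neutral + sum_k coeff_k * blendshape_k."""
    return neutral_verts + np.tensordot(coeffs, blendshapes, axes=([0], [0]))

# Toy data: 1 s of audio at 16 kHz, 3 blendshapes over a 10-vertex mesh.
rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)
neutral = rng.standard_normal((10, 3))
blendshapes = rng.standard_normal((3, 10, 3)) * 0.1
W, b = rng.standard_normal((1, 3)), np.zeros(3)

feats = extract_audio_features(audio)                   # per-frame features
coeffs = predict_expressions(feats, W, b)               # expression weights
frame0 = deform_model(neutral, blendshapes, coeffs[0])  # animated mesh frame
```

Each frame of predicted coefficients deforms the mesh independently; in the full system, the resulting vertex sequence would be passed to the neural renderer to generate the output video.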