Communicating with individuals who are deaf or hard of hearing presents several obstacles, and sign language has become an essential tool for overcoming them. It enables individuals with speech and hearing impairments to convey their ideas and feelings, easing their integration with the wider world. However, sign language brings its own set of difficulties, so its existence alone is insufficient: gestures can be hard to interpret for people who are unfamiliar with sign language or who use a different sign language. Fortunately, recent technological advances have introduced several methods for automating the recognition of sign gestures, offering promising alternatives that could considerably narrow this long-standing communication gap. This work presents the use of a custom-built dataset to identify hand gestures. The system uses a webcam to let users capture images of their hand gestures, and its goal is to predict and display the name corresponding to the captured image. Convolutional Neural Networks (CNNs) are used for training and image classification, with computer vision techniques supporting the image collection and capture process.
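
As a rough illustration of the described pipeline, the sketch below wires together webcam capture and CNN-based classification of a single frame. The library choices (OpenCV for capture, TensorFlow/Keras for the network), the 64x64 input size, the network architecture, and the gesture labels are illustrative assumptions, not details fixed by this work; in practice the CNN would first be trained on the collected gesture dataset.

```python
# Minimal sketch: capture one webcam frame and classify it with a small CNN.
# All specifics (libraries, input size, architecture, labels) are assumptions.
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

GESTURE_NAMES = ["hello", "thanks", "yes", "no", "please"]  # hypothetical labels
IMG_SIZE = 64

def build_cnn(num_classes: int) -> tf.keras.Model:
    """Small CNN for gesture classification (architecture is an assumption)."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def capture_and_predict(model: tf.keras.Model) -> None:
    """Grab one frame from the webcam and print the predicted gesture name."""
    cap = cv2.VideoCapture(0)  # default webcam
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not read a frame from the webcam")
    # Preprocess: resize and normalize to match the network's input.
    img = cv2.resize(frame, (IMG_SIZE, IMG_SIZE)).astype(np.float32) / 255.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    print("Predicted gesture:", GESTURE_NAMES[int(np.argmax(probs))])

if __name__ == "__main__":
    cnn = build_cnn(num_classes=len(GESTURE_NAMES))
    # A trained model (via model.fit on the gesture dataset) would be used here;
    # this sketch only connects the capture-and-classify path.
    capture_and_predict(cnn)
```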