This study presents an AI-based system that facilitates communication between hearing-impaired and hearing individuals by translating sign language gestures into spoken English. The system recognizes sign language motions captured from video input using deep learning, specifically Convolutional Neural Networks (CNNs). Incoming video is first passed through CNN models trained for feature extraction; recognized signs are then mapped to the corresponding text, which a Text-to-Speech (TTS) engine converts into speech. Trained on a large dataset of annotated sign language gestures, the model recognizes the hand shapes, movements, and facial expressions that are essential for accurate sign language interpretation. Because the system operates in real time, it offers an effective communication channel for individuals with hearing impairments. By significantly reducing the communication barrier between hearing-impaired and hearing individuals, this approach offers a feasible route to improved accessibility in social interactions, healthcare, education, and customer service.
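To make the recognize-then-speak pipeline concrete, the following is a minimal sketch of the described flow: frames are read from a video stream, classified by a CNN, and the predicted label is spoken by a TTS engine. The model file `sign_cnn.h5`, the `LABELS` list, and the input size of 224x224 are hypothetical placeholders standing in for the trained models and gesture vocabulary described in the study, not artifacts from it; the sketch assumes a Keras classifier, OpenCV for capture, and the pyttsx3 TTS library.

```python
import cv2
import numpy as np
import pyttsx3
import tensorflow as tf

LABELS = ["hello", "thanks", "yes", "no"]  # hypothetical gesture classes

# Hypothetical pre-trained CNN checkpoint standing in for the paper's models.
model = tf.keras.models.load_model("sign_cnn.h5")
tts = pyttsx3.init()

cap = cv2.VideoCapture(0)  # live video input, matching the real-time setting
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess the frame to the CNN's assumed input shape and scale.
    x = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis], verbose=0)[0]
    label = LABELS[int(np.argmax(probs))]
    # Convert the recognized sign's text into speech.
    tts.say(label)
    tts.runAndWait()
cap.release()
```

A full system would add the temporal modeling the abstract implies (movements span multiple frames) and a facial-expression stream; this single-frame loop only illustrates where the CNN and TTS stages connect.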