Communication is the imparting, sharing, and conveying of information, ideas, and feelings. Sign language is a non-verbal communication method used by people with hearing impairment. People trained in sign language can understand it readily, but it is difficult for untrained people to interpret. This communication barrier is a key social problem for the hearing-impaired community, preventing them from accessing basic and essential services. To address these difficulties, this paper proposes a methodology for recognizing hand gestures, the prime component of sign language vocabulary, based on an efficient deep Convolutional Neural Network (CNN) architecture. CNNs are effective at extracting distinctive features and classifying image data. The hand gestures are captured with a camera connected to a Raspberry Pi, a single-board computer that runs the deep learning algorithm. The network is trained on preprocessed datasets; each captured image is evaluated against the trained model, and the algorithm recognizes the corresponding alphabet. The recognized alphabets are then consolidated and returned as a word or sentence. A speaker connected to the Raspberry Pi converts the word output into voice. In addition, the system converts voice input into text in the form of words or sentences. This approach provides an effective mode of communication for hearing-impaired people.
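To make the classification step concrete, the following is a minimal sketch of a CNN gesture classifier of the kind the abstract describes. The layer sizes, the 64x64 grayscale input, and the 26-letter class count are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a CNN hand-gesture classifier (illustrative only;
# the paper's exact architecture, input size, and class count are assumed).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26           # assumed: one class per alphabet letter
INPUT_SHAPE = (64, 64, 1)  # assumed: 64x64 grayscale gesture images

def build_model():
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=INPUT_SHAPE),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),  # one score per letter
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```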
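The capture-to-speech pipeline (camera frame, letter prediction, word consolidation, voice output) could be glued together as sketched below. This is hypothetical glue code: the camera index, preprocessing, fixed five-letter word length, and the pyttsx3 text-to-speech engine are assumptions, not the paper's implementation.

```python
# Sketch of the capture -> classify -> speak loop on the Raspberry Pi
# (hypothetical glue code around the model sketched above).
import cv2
import numpy as np
import pyttsx3

model = build_model()                 # from the sketch above
# model.load_weights("gestures.h5")   # assumed: weights trained offline

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def recognize_letter(frame):
    """Preprocess one camera frame and return the predicted letter."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 64)).astype("float32") / 255.0
    probs = model.predict(gray.reshape(1, 64, 64, 1), verbose=0)
    return ALPHABET[int(np.argmax(probs))]

camera = cv2.VideoCapture(0)          # assumed: Pi camera exposed as device 0
letters = []
for _ in range(5):                    # assumed: collect five letters per word
    ok, frame = camera.read()
    if ok:
        letters.append(recognize_letter(frame))
camera.release()

word = "".join(letters)               # consolidate letters into a word
engine = pyttsx3.init()               # speak through the Pi's speaker
engine.say(word)
engine.runAndWait()
```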
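For the reverse path mentioned in the abstract, voice input to text, one possible sketch uses the SpeechRecognition package with an attached microphone; the package choice, the PyAudio-backed microphone, and the cloud recognizer are assumptions, not the system's stated components.

```python
# Sketch of voice-to-text input (assumes the SpeechRecognition package,
# a PyAudio-backed USB microphone, and network access for the recognizer).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:       # assumed: default microphone device
    audio = recognizer.listen(source)
text = recognizer.recognize_google(audio)  # cloud-backed speech recognizer
print(text)                           # recognized word or sentence
```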