Sign language is the primary mode of communication for more than 4.6 million people globally, yet communication barriers between signers and non-signers persist across essential domains such as healthcare, education, employment, and public services. Existing sign language recognition systems suffer from high latency, platform dependency, limited vocabulary, and poor generalization to real-world environments. This research presents a real-time American Sign Language (ASL) translation system that integrates a Convolutional Neural Network (CNN) with MediaPipe Hand Tracking for robust and efficient gesture interpretation. The proposed model is trained and validated on the Kaggle American Sign Language Alphabet and Sign Language MNIST datasets, enabling strong generalization and improved recognition stability. The system achieves end-to-end latency below 500 ms and recognition accuracy above 95% across varying lighting conditions and hand morphologies. A Flutter front end enables cross-platform deployment, while a hybrid edge-cloud architecture supports both online and offline operation. Experimental evaluation demonstrates improved performance over legacy solutions, positioning this work as a scalable and inclusive tool for bridging communication gaps between deaf individuals and the broader community.
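The sketch below illustrates the kind of recognition loop the abstract describes: MediaPipe Hand Tracking localizes the hand in each frame, the hand region is cropped and normalized, and a CNN classifies the static ASL letter. It is a minimal illustration, not the authors' implementation; the model file name ("asl_cnn.h5"), the 28x28 grayscale input shape (chosen to match Sign Language MNIST), and the alphabetical label ordering are assumptions for demonstration purposes.

```python
# Illustrative sketch: MediaPipe hand localization feeding a CNN letter classifier.
# Assumptions: "asl_cnn.h5" is a hypothetical trained model; input is 28x28
# grayscale (Sign Language MNIST style); labels are the 24 static ASL letters.
import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

LABELS = "ABCDEFGHIKLMNOPQRSTUVWXY"  # static letters only (J and Z require motion)
model = tf.keras.models.load_model("asl_cnn.h5")  # hypothetical classifier

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1,
                                 min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        h, w, _ = frame.shape
        # Bounding box around the 21 hand landmarks, padded slightly.
        xs = [int(p.x * w) for p in lm]
        ys = [int(p.y * h) for p in lm]
        x1, x2 = max(min(xs) - 20, 0), min(max(xs) + 20, w)
        y1, y2 = max(min(ys) - 20, 0), min(max(ys) + 20, h)
        if x2 > x1 and y2 > y1:
            crop = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
            crop = cv2.resize(crop, (28, 28)).astype(np.float32) / 255.0
            probs = model.predict(crop.reshape(1, 28, 28, 1), verbose=0)[0]
            letter = LABELS[int(np.argmax(probs))]
            cv2.putText(frame, letter, (x1, y1 - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("ASL translation", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```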