Sign language recognition is a vital field within artificial intelligence, aiming to bridge communication gaps between deaf or hard-of-hearing communities and others by translating visual gestures into text or speech. Automatic Sign Language Recognition (ASLR) systems seek to interpret these complex and nuanced gestures accurately, expanding access to information communicated through sign language. This paper presents a comparative review of machine learning methods used in ASLR, emphasizing their impact on improving communication for deaf and hard-of-hearing individuals. It also explores the primary challenges in ASLR, such as inter-signer variability and the complexity of gesture recognition. Feature extraction techniques such as the Scale-Invariant Feature Transform (SIFT), Histogram of Oriented Gradients (HOG), and Speeded-Up Robust Features (SURF) are examined for their role in enhancing ASLR system performance. Additionally, a bibliometric study highlights significant trends and advances in intelligent systems for sign language recognition over the past two decades. By synthesizing recent research on ASLR technologies, this paper supports the development of more effective communication tools and fosters social inclusivity for deaf and hard-of-hearing communities.
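To make the feature-extraction step concrete, the sketch below illustrates the core idea behind HOG: binning gradient orientations into per-cell histograms to produce a fixed-length descriptor that a classifier can consume. This is a minimal illustrative example, not the pipeline of any system reviewed in the paper; the 16x16 random "frame", the 8-pixel cell size, and the 9 orientation bins are all assumed values chosen for brevity.

```python
import numpy as np

# Hypothetical 16x16 grayscale gesture frame (random data for illustration).
rng = np.random.default_rng(0)
frame = rng.random((16, 16))

gy, gx = np.gradient(frame)                  # image gradients (rows, cols)
mag = np.hypot(gx, gy)                       # gradient magnitude
ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation in [0, 180)

cell, bins = 8, 9                            # assumed cell size and bin count
hist = np.zeros((frame.shape[0] // cell, frame.shape[1] // cell, bins))
for i in range(hist.shape[0]):
    for j in range(hist.shape[1]):
        a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
        m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
        # Magnitude-weighted orientation histogram for this cell.
        h, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
        hist[i, j] = h

descriptor = hist.ravel()                    # fixed-length feature vector
print(descriptor.shape)                      # (2 cells x 2 cells x 9 bins,) = (36,)
```

A full HOG implementation additionally normalizes histograms over overlapping blocks of cells, which gives the descriptor some robustness to illumination changes; the sketch omits that step to stay short.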