Visually impaired individuals face significant challenges in recognizing objects, people, and text in their surroundings, which often limits their autonomy and independence. This study develops smart glasses built on a Raspberry Pi that perform real-time object and facial recognition with audio feedback. Using the YOLOv5 model for object detection and the Dlib library for facial recognition, the system delivers auditory cues through text-to-speech, allowing users to navigate their environment with greater confidence. Evaluation of the prototype demonstrates its accuracy and usability, and potential improvements for future development are discussed.
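The detection-to-speech step described above can be sketched as follows. This is a minimal illustrative example, not the system's actual code: it assumes detections arrive as (label, confidence) pairs, as YOLOv5 output can be reduced to, and shows how they might be turned into a sentence for a text-to-speech engine. The function name `announce` and the confidence threshold are hypothetical.

```python
def announce(detections, min_conf=0.5):
    """Turn (label, confidence) pairs into a spoken cue.

    detections: list of (label, confidence) tuples, e.g. reduced
    from YOLOv5 output. Returns the sentence that would be passed
    to a text-to-speech engine.
    """
    # Keep only detections above the confidence threshold.
    seen = [label for label, conf in detections if conf >= min_conf]
    if not seen:
        return "No objects detected."
    return "Detected: " + ", ".join(seen) + "."

# Example: two confident detections and one below threshold.
print(announce([("person", 0.91), ("chair", 0.78), ("dog", 0.31)]))
# → Detected: person, chair.
```

In a deployed system, the returned string would be sent to a TTS backend on the Raspberry Pi rather than printed; the thresholding step keeps low-confidence detections from producing distracting audio.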