This paper presents the design and development of a mobile application, built with Flutter, that leverages object detection to enhance the lives of visually impaired individuals. The application addresses a crucial challenge faced by this community: the lack of real-time information about their surroundings. The proposed solution uses pre-trained machine learning models, potentially through TensorFlow Lite for on-device processing, to identify objects in the user's field of view as captured by the smartphone camera. The application goes beyond simple object recognition: detected objects are translated into natural-language descriptions through text-to-speech functionality, providing auditory cues about the environment. This real-time information stream enables users to navigate their surroundings with greater confidence and independence. Accessibility is a core principle of the application's design. The user interface is designed for compatibility with screen readers, ensuring seamless interaction for users who rely on assistive technologies, and haptic feedback mechanisms are incorporated to provide non-visual cues and enhance the user experience. The ultimate goal of this work is a user-friendly and informative application that empowers visually impaired individuals to gain greater independence in their daily lives, with the potential to improve spatial awareness, foster a sense of security, and promote overall inclusion within society.
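
As an illustrative sketch rather than the paper's actual implementation, the detection-to-speech pipeline described above could be wired together in Dart roughly as follows. The sketch assumes the tflite_flutter and flutter_tts packages, an SSD-style detection model bundled as an app asset, and Flutter's built-in HapticFeedback API; names such as SceneDescriber, describeFrame, and the asset path are placeholders introduced here for illustration, and the exact input preprocessing and output parsing depend on the specific model used.

// Sketch: detect objects in one camera frame, then speak a short description.
// Assumes an SSD-style TFLite model (boxes, classes, scores, count outputs).
import 'package:flutter/services.dart' show HapticFeedback;
import 'package:flutter_tts/flutter_tts.dart';
import 'package:tflite_flutter/tflite_flutter.dart';

class SceneDescriber {
  late final Interpreter _interpreter;
  final FlutterTts _tts = FlutterTts();

  Future<void> init() async {
    // Load the bundled, pre-trained detection model for on-device inference.
    _interpreter = await Interpreter.fromAsset('assets/detect.tflite');
    await _tts.setSpeechRate(0.5); // slower speech for clearer auditory cues
  }

  /// Runs the model on one preprocessed camera frame and speaks the result.
  /// `input` must already match the model's expected tensor shape
  /// (e.g. 1 x 300 x 300 x 3 for a typical SSD MobileNet).
  Future<void> describeFrame(Object input, List<String> labels) async {
    // Output buffers for a standard SSD-style detection head:
    // bounding boxes, class indices, confidence scores, and detection count.
    final boxes = [List.generate(10, (_) => List.filled(4, 0.0))];
    final classes = [List.filled(10, 0.0)];
    final scores = [List.filled(10, 0.0)];
    final count = [0.0];

    _interpreter.runForMultipleInputs([input], {
      0: boxes,
      1: classes,
      2: scores,
      3: count,
    });

    // Keep confident detections and turn them into a short spoken sentence.
    final names = <String>[];
    final n = count[0].toInt().clamp(0, 10);
    for (var i = 0; i < n; i++) {
      if (scores[0][i] > 0.5) {
        names.add(labels[classes[0][i].toInt()]);
      }
    }
    if (names.isEmpty) return;

    await HapticFeedback.mediumImpact(); // non-visual cue before speaking
    await _tts.speak('Detected ${names.join(', ')} ahead.');
  }
}

In a complete application, frames would be captured continuously via the camera plugin, resized and normalized to the model's input shape, and passed to a routine of this kind, with the spoken output exposed through the platform screen reader in the manner described above.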