SMART_EYE: A NAVIGATION AND OBSTACLE DETECTION SYSTEM FOR VISUALLY IMPAIRED PEOPLE THROUGH A SMART APP

The purpose of this study was to devise an efficient solution, known as SMART_EYE, aimed at assisting visually impaired individuals in navigating unfamiliar environments and detecting obstacles. The motivation behind this research stemmed from the significant population affected by vision impairment and the limitations of existing navigation alternatives, which are often heavy and expensive and thus infrequently adopted. To address this problem, we employed a method that combined a smart application with AI and sensor technology. The smart app captured and classified images, while obstacle detection was performed using ultrasonic sensors. Voice commands were used to provide users with real-time information about obstacles in their path. The results of this study demonstrated the effectiveness of the SMART_EYE model in improving both qualitative and quantitative performance measures for visually impaired individuals. The model offered a cost-effective alternative that enabled independent navigation and obstacle detection, thereby enhancing the quality of life for this population. The practical implications of this study are twofold: the SMART_EYE model provides a viable solution for visually impaired individuals to navigate unfamiliar environments, and its cost-effectiveness addresses the limitations of existing assistive devices. From a theoretical perspective, this study contributes to the field of assistive technology by integrating AI and sensor technology in a smart app to aid visually impaired individuals. Furthermore, the evaluation and ranking of different systems based on their impact on the lives of visually impaired people provide a basis for further research and development in this area. In conclusion, this study's value lies in its contribution to both theory and practice, showcasing the potential for future advancements in assistive technology for visually impaired individuals, ultimately improving their quality of life.


Introduction
Vision is an essential aspect of human life, providing us with the ability to perceive and interact with the world around us. It enables us to navigate through various environments, recognize objects, and interpret visual information. However, vision impairment is a prevalent global issue, affecting a significant proportion of the population. According to the World Health Organization (WHO), over 283 million people worldwide suffer from sight problems, including 39 million blind individuals and 228 million with low vision (Andrius Budrionis, 2022). The loss or impairment of vision can have profound consequences, impacting an individual's independence, mobility, and overall quality of life.
For individuals with visual disabilities, navigating unfamiliar environments can be particularly challenging (Iskander, 2022). The ability to move around independently and safely is crucial for their well-being and participation in society. While various navigation alternatives have been developed over the years to assist visually impaired individuals, they often face limitations that hinder their widespread adoption and effectiveness.
The existing landscape of navigation alternatives for visually impaired individuals is characterized by devices and systems that are either too heavy or too expensive for universal use. These limitations pose significant barriers, preventing individuals with visual disabilities from accessing the necessary assistance and support they require for efficient navigation (Real, 2019). Many available devices are bulky and burdensome to carry, imposing physical and practical challenges for the user. Additionally, the high costs associated with these devices make them inaccessible to a substantial portion of the target population, further exacerbating the problem.

Literature Survey
The literature survey highlights various studies and approaches that have been proposed to aid visually impaired individuals in navigation and obstacle detection. The studies use different technologies, such as sensor-based and computer-vision systems, ultrasonic sensors, IR sensors, IoT, and deep learning algorithms, to develop assistive devices. These assistive devices aim to improve indoor and outdoor mobility and create a functional system for individuals with visual impairments, including those who are blind. The studies use different feedback mechanisms, such as auditory commands, vibrations, and object identification, to provide directional information and obstacle detection. Elmannai et al. (2018) propose a data fusion framework for guiding visually impaired individuals. The framework combines data from various sensors, such as ultrasonic sensors, depth sensors, and cameras, to accurately detect obstacles and provide directional information. The proposed framework also employs machine learning algorithms to enhance the accuracy of obstacle detection and to classify the type of obstacle. The authors conducted experiments to validate the accuracy and reliability of the framework, and the results demonstrate a significant improvement in obstacle detection compared to existing systems. The proposed framework has the potential to enhance the mobility of visually impaired individuals and improve their quality of life. A limitation of this study is that the approach cannot indicate the indoor and outdoor coverage area of obstacles, and its directional facility lacks the ability to recognize longer objects such as doors and walls.
Shah (2006) presents a study on a novel sensory direction model for the visually impaired. The system uses ultrasonic sensors to detect obstacles and direction and transmits this information to the user through vibrations of varying intensity and patterns on a handle. The handle is designed to provide feedback to the user based on the vibration's intensity, sensor position, and signal pulse length. The study involved 15 visually impaired individuals of different age groups who were blindfolded and tested in various navigation scenarios. The device was found to be flexible, lightweight, and ergonomically designed to fit different hand sizes. However, one limitation of this technique is that it cannot be connected to a camera for image processing, which restricts its ability to detect information about crosswalks and traffic signals in the outdoor environment.
Many electronic devices that aid people with vision loss use information gathered from the environment and provide feedback through tactile or auditory signals. However, the preferred feedback form is still a matter of debate, and opinions vary among individuals. Despite this, there are certain essential components that any electronic system assisting blind or visually impaired people must have to ensure its reliability and usefulness (Elmannai et al., 2017). These characteristics can be used to evaluate the dependability and effectiveness of the system. However, a drawback of some of these systems is that they may have difficulty detecting objects at certain ranges and are limited to static object detection rather than dynamic object detection.
Islam (2019) reviews the development of walking assistants for visually impaired people and discusses recent innovative technologies in this field, along with their merits and demerits. The review aims to draw a schema for upcoming development in the field of sensors, computer vision, and smartphone-based walking assistants. The goal is to provide a basis for different researchers to develop walking assistants that ensure the mobility and safety of visually impaired people. Ponnada (2018) presents a prototype of mobility recognition using feature vector identification and sensor-computed Arduino processor chips to assist visually challenged people in recognizing staircases and manholes. The prototype provides more independence to sightless people while walking on the roads and helps them pass through without any assistance. The model is developed using an Arduino kit and a low-weight stick to recognize obstacles, with the chip programmed and embedded in the stick to detect manholes and staircases using a bivariate Gaussian mixture model and the speeded-up robust features algorithm for feature extraction.
The developed model shows an accuracy of around 90% for manhole detection and 88% for staircase detection. Ahmad (2018) proposes a model-based state-feedback control strategy for a multi-sensor obstacle detection system in a smart cane. The accuracy of the sensors and actuator positions is critical to ensuring correct signals are sent to the user. Low-cost sensors can result in false alerts due to noise and erratic readings. The proposed approach uses a linear quadratic regulator-based controller and dynamic feedback compensators to minimize false alerts and improve accuracy. Real-time experiments showed significant improvements in error reductions compared to conventional methods.
Bai (2018) presents a novel wearable navigation device to assist visually impaired people in navigating indoor environments safely and efficiently. The proposed device consists of essential components such as locating, way-finding, route-following, and obstacle-avoiding modules. The authors propose a novel scheme that utilizes a dynamic subgoal-selecting strategy to guide users to their destination while avoiding obstacles in a complex, changeable, and possibly dynamic indoor environment. The navigation system is deployed on a pair of wearable optical see-through glasses for ease of use, and it has been tested on a collection of individuals and found effective for indoor navigation tasks. The device's sensors are of low cost, small volume, and easy integration, making it suitable as a wearable consumer device.
The study (Tiponut, 2010) focuses on electronic travel aids (ETAs) developed using sensor technology and signal processing to improve the movement of visually impaired people (VIPs) in constantly changing environments. Despite efforts to create effective ETAs, VIPs still rely on traditional aids like white canes and guide dogs. The study proposes an ETA tool with an Obstacles Detection System (ODS) and a Man-Machine Interface, inspired by the visual system of locusts and flies. However, the tool has limitations in identifying obstacle labels and providing auditory navigation for blind users. Chaitali M. Patil (2016) proposes a system framework that can help remove communication barriers for people with visual, auditory, and speech disabilities, allowing them to communicate with each other and non-disabled individuals using various modes of communication such as American Sign Language, audio, braille, and regular text. The system aims to improve the individual's capacity and desire to convey and transmit messages. However, the system lacks an effective prototype using the latest technologies.
The paper (Bhasha, 2020) proposes a new smart cane for visually impaired people (VIP) which can detect obstacles, water, and light environments in front of, and to the left and right of, the user. The smart cane is constructed using an Arduino Mega 2560 microcontroller, ultrasonic sensors, a light sensor, and a soil moisture sensor. The device generates an audio feedback signal to the user if any obstacle, water, or light environment is present in their walking path. The proposed smart cane is more affordable than existing electronic sticks, and VIPs find it comfortable to use because it is familiar, like a traditional stick. The detection algorithms used in this paper are simple and efficient for detecting the obstacles, water, and light environment of the user's path, thus helping the user to travel independently from source to destination. Ran (2004) states that few visually impaired assistance aids can provide lively communications and responsiveness to the user and that even if such a system existed, it would likely be sophisticated and not take into account the demands of a blind person, such as simplicity, ease of use, and less complexity. Patil (2018) discusses NavGuide, which employs ultrasonic sensors to classify obstacles and environmental conditions and uses vibration and audio alerts to provide information to the user. However, NavGuide has limitations, including the inability to detect downhill slopes and the detection of damp floors only after the user steps on them.
Marzec (2019) describes a navigation system that uses IR sensors to detect walls, buildings, and other objects. The system requires the user to hold the device in their arms, and vibrations are used to convey navigation signals about potential movements and nearby threats. Parimal A. Itankar (2016) aimed to enhance the experience of visually impaired people by using weakly supervised learning to match ambient music selected by a deep neural network. They suggested a multifaceted strategy for measuring ambiguous concepts related to music, including availability, implicit senses, immersion, and subjective fitness. The authors conducted in-depth trials involving 70 individuals and collected feedback on the features of their model. However, the investigation had three significant flaws: the performance was not cutting-edge, the music database was limited in terms of genre and size, and each experiment involved only 10 people, making it impossible to extrapolate the findings. To generalize the findings, large-scale experiments are necessary, and improved auditory feature representation through devices is needed to enhance accuracy. Sangpal (2019) describes a system that uses Python and AIML to create an intelligent chatbot assistant that mimics the behavior of a human assistant. The system is designed to respond to queries or issues with spoken word remedies. Python programs are used to convert audio commands to text format and for audio reply and voice recognition, similar to Google Text-to-Speech. AIML is used to match instructions or text to existing dialogues and conversations using predefined audio syntax. The Python interpreter forms the core of the system. This system represents the state of the art in intelligent chatbot assistants.
Ashraf (2020) describes an IoT-powered smart stick developed by Ayesha Ashraf et al. to assist people with vision impairments. The stick has an ultrasonic sensor and a buzzer for detecting obstacles and sounding an alarm. An Android app is also developed that can send essential notifications and the GPS location to saved phone numbers. The device is lightweight and portable, making it easier for people with disabilities to walk around more easily and comfortably without the risk of injury. The authors also explore how image processing and interaction with the aid can help the user understand the structure of obstacles and objects before providing advice from the aid. Overall, the smart stick is designed to enhance the mobility and safety of visually impaired individuals.
Rahman (2021) presents a smart device for visually impaired people (VIPs) that utilizes deep learning and the Internet of Things (IoT) (Kurniawan & Saputra, 2022). The device is divided into three parts: an IoT-based smart stick that monitors the blind person's movement in real time through the cloud, deep learning algorithms for detecting obstacles, and a virtual assistant to manage the integration. The paper uses the Mask R-CNN model for object detection, which allows for accurate object detection in a short processing time. However, due to the wide range of obstacles in the real world, this model uses a limited number of sensors and devices and a pretrained object recognition model trained on a limited number of real-world images. Bhavani (2021) proposes an approach to provide visually impaired individuals with direction finding, directional help, walking-path notification, and an understanding of their surroundings. The proposed approach utilizes highly sensitive sensors and a comfortable and flexible carbon material to construct the stick. The study identifies various disabilities and provides an auditory output that blind users can use to locate the buzzer's position and remain aware of their surroundings.
The paper discusses an assistance aid proposed by (Salama, 2019) which uses an ultrasonic sensor to detect obstacles in the path of visually impaired individuals. The sensor measures the height and distance of the obstacle and communicates the information to a microcontroller. The system can sense a distance of up to 12 feet with a resolution of 0.3 cm. However, the paper highlights that such a system may not be suitable for visually impaired individuals as it may be too complex and not meet their requirements for simplicity and ease of use.
The article discusses assistive technology, which refers to tools or equipment that help people with impairments to participate fully in society (Foley, 2012) (Mountain, 2004) (Pentland, 1998). Smart aids are a type of assistive technology that can come in mobile computerized forms, such as mobile phones, and are more covert than conventional assistive technologies, reducing social stigma. Navigation aids for the blind are limited in their ability to detect and alert users to the types of obstacles in front of them, and RFID-based systems are expensive and prone to damage. To address these limitations, the article proposes a navigation system that uses deep learning algorithms and a smartphone to identify various obstacles. This system does not require the deployment of RFID chips and is not limited to particular indoor or outdoor settings, thus expanding the locations where it can be used and providing visually impaired people with more information about their surroundings. Overall, the literature survey includes a variety of proposed solutions for assisting visually impaired individuals, ranging from obstacle detection systems to communication devices. Each reference has its own unique set of advantages and disadvantages, and the metrics used to evaluate them also vary. Some of the common advantages include lightweight and portable design, real-time monitoring, and deep learning algorithms. On the other hand, some of the disadvantages include the inability to assess obstacle labels and provide auditory navigation, limitations in detecting certain types of obstacles, and the lack of an effective prototype with the latest technologies.
Research gaps identified in the literature include limitations in identifying obstacle labels, difficulty detecting objects at certain ranges, and limited dynamic object detection. Several studies propose navigation aids that are not connected to a camera for image processing, which restricts their ability to detect information about crosswalks and traffic signals in the outdoor environment. Many studies involve only a small sample size, which makes it difficult to generalize findings, and the performance of some systems is not cutting-edge. Moreover, the lack of simplicity and ease of use of some devices hinders their usefulness and reliability. To overcome these gaps, future research could focus on developing navigation systems that integrate low-cost, small-volume sensors with deep learning algorithms for accurate and reliable obstacle detection, while providing simple and easy-to-use interfaces for visually impaired individuals. Large-scale experiments could be conducted to test the effectiveness of such systems, and more attention could be paid to developing navigation aids that are suitable for both indoor and outdoor use.

Methodology
The proposed methodology aims to design a lightweight and portable system application that is accessible to visually impaired individuals, promoting ease of use and mobility. Additionally, it involves integrating AI and sensor technology within the smart application to enable real-time obstacle detection and classification, enhancing the user's situational awareness.

Proposed System
The system design includes a wearable device and a voice-based navigation system that assists visually impaired individuals in recognizing, detecting, and avoiding obstacles. The system leverages a combination of computer vision and sensor-based technology to achieve its objectives.
The methodology employed in this research aims to address the objectives of designing a lightweight and portable system application and integrating AI and sensor technology for real-time obstacle detection and classification. The proposed system utilizes a combination of computer vision and sensor-based technology to assist visually impaired individuals in recognizing, detecting, and avoiding obstacles. The methodology includes the following steps:
1. Designing a Wearable Device and Voice-Based Navigation System: The first step involves designing a wearable device and developing a voice-based navigation system specifically tailored for visually impaired individuals. The device should be lightweight and portable, ensuring ease of use and mobility. It should be comfortable for the user to wear and provide convenient access to the navigation features.
2. Implementing Computer Vision and Sensor Technology: To enable real-time obstacle detection and classification, the proposed system integrates computer vision and sensor technology. Computer vision algorithms are used to process images captured by the device's camera and detect objects in the environment. Sensor technology, such as ultrasonic sensors or depth sensors, is utilized to measure the proximity and depth of obstacles.
3. Proximity Measurement Method: The proposed model introduces a novel proximity measurement method for estimating the distance of obstacles based on their depth. This strategy overcomes the limitations of the current system by enabling the detection of multiple objects simultaneously, including longer obstacles like doors and walls. The proximity measurement method enhances the user's situational awareness by providing accurate information about the distance of obstacles in real time.
4. Implementation Tools: The proposed work has been implemented using various tools and technologies. Raspberry Pi, a small and affordable single-board computer, is utilized as the hardware platform for the wearable device. Android Studio, an integrated development environment, is employed to develop the Android app for the navigation system. The Google COLAB Tool, a cloud-based platform for machine learning, is utilized for the object detection algorithms.
5. Lightweight, Efficient, and Cost-Effective System: The developed system is designed to be lightweight, efficient, and cost-effective. The hardware components are carefully selected to ensure portability and minimize the overall weight of the device. The software algorithms are optimized for efficient processing and real-time performance. By utilizing affordable and readily available hardware and software resources, the system aims to provide a cost-effective solution for visually impaired individuals in need of navigation assistance.
6. User Testing and Evaluation: Once the system is implemented, user testing and evaluation are conducted to assess its performance and usability. Visually impaired individuals participate in the testing process, providing feedback on the system's effectiveness in assisting with navigation and obstacle detection. User feedback is valuable for refining and improving the system's design and functionality.
The primary objective of the proposed work is to develop an intellectual IoT system that can aid in obstacle detection for visually impaired individuals in society. The proposed system is designed to be cost-effective and efficient, making it accessible to a broader range of users.
The system is built using TensorFlow and deep learning frameworks, enabling visually impaired individuals to perform obstacle recognition and autonomous walking in crowds while navigating different modes of transportation. The system consists of a Raspberry Pi 3, a speaker, and a Pi camera, and it can be placed in a pocket with the camera mounted outside.
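As a minimal illustration of this capture-classify-announce loop, the sketch below uses stub functions; capture_image, classify_obstacles, and announce are hypothetical names introduced here for demonstration, not the paper's implementation (a real deployment would read from the Pi camera, run a TensorFlow model, and speak through the speaker):

```python
def capture_image():
    # Stub for the Pi camera capture; returns a placeholder frame.
    return "frame-0"

def classify_obstacles(frame):
    # Stub for the deep-learning classifier; a real system would run a
    # TensorFlow model here. Returns (label, distance_in_metres) pairs.
    return [("door", 1.2), ("person", 3.5)]

def announce(obstacles):
    # Stub for the voice output: report the nearest detected obstacle.
    nearest = min(obstacles, key=lambda o: o[1])
    return f"{nearest[0]} ahead at {nearest[1]:.1f} metres"

def sense_once():
    # One pass of the loop: capture, classify, announce.
    frame = capture_image()
    obstacles = classify_obstacles(frame)
    return announce(obstacles)

print(sense_once())
```

In the real system the loop runs continuously, but a single pass is enough to show how the three components hand data to one another.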

Fig. 2. Design Process
Designing a wearable device and a voice-based navigation system involves creating a hardware and software solution that is specifically tailored for visually impaired individuals. The goal is to develop a device that can be easily worn by the user and provide intuitive navigation capabilities through voice commands. The key aspects of the design process, as shown in figure 2, are:
1. Hardware Design: The hardware design focuses on creating a wearable device that is lightweight, portable, and comfortable for the user to wear. The device may include components such as a camera for capturing images, sensors for obstacle detection, a microphone for voice input, and a speaker for voice output. The size, shape, and placement of these components should be carefully considered to ensure convenience and usability.
2. User Interface Design: The user interface design involves creating an intuitive and accessible interface for visually impaired individuals to interact with the device. Since visual cues may not be applicable, the interface primarily relies on voice-based interactions. This includes designing a system for voice commands, feedback, and prompts. The interface should be simple, clear, and easy to navigate, enabling users to control the device and receive information effectively.
3. Voice Recognition and Natural Language Processing: To enable voice-based navigation, the system needs to incorporate voice recognition and natural language processing capabilities. Voice recognition algorithms are employed to accurately interpret and understand the user's voice commands. Natural language processing techniques help in understanding the context and intent of the user's instructions, allowing the system to respond appropriately.
4. Navigation and Routing Algorithms: The design also involves developing navigation and routing algorithms that can guide visually impaired individuals through different environments. These algorithms consider factors such as the user's current location, desired destination, available paths, and obstacle information. By utilizing mapping data and real-time feedback from the obstacle detection system, the device can provide step-by-step directions, alert users about obstacles in their path, and suggest alternative routes when necessary.
5. Integration of Sensor Technology: Sensor technology plays a crucial role in obstacle detection and enhancing situational awareness. Sensors like ultrasonic or depth sensors can be integrated into the wearable device to detect the presence and proximity of obstacles. The data from these sensors is processed and used to provide real-time feedback to the user about the distance and location of obstacles.
6. Accessibility and Ergonomics: Accessibility and ergonomics are critical considerations in the design process. The device should be accessible to individuals with visual impairments, taking into account factors such as tactile feedback, braille labels, and adjustable straps for fitting different body sizes. The device should also be ergonomic, ensuring that it is comfortable and unobtrusive for the user to wear for extended periods.
7. Iterative Design and User Feedback: The design process typically involves multiple iterations and user feedback. Prototype versions of the wearable device and navigation system are tested with visually impaired individuals to gather insights and improve the design. User feedback helps in refining the interface, addressing usability issues, and enhancing the overall user experience.
By considering these aspects and leveraging advancements in technology, the design of a wearable device and voice-based navigation system can provide visually impaired individuals with a user-friendly and effective means of navigating their surroundings independently. The proposed work has been implemented using Raspberry Pi, Android Studio, and the Google COLAB Tool for object detection.
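A navigation and routing algorithm of the kind described in step 4 can be sketched as a breadth-first search over a small occupancy grid that emits step-by-step directions; the grid layout, cell semantics, and direction names below are illustrative assumptions, not the paper's actual routing implementation:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Return a list of moves from start to goal, avoiding cells marked 1."""
    rows, cols = len(grid), len(grid[0])
    # Map each grid step to a spoken direction (illustrative names).
    moves = {(-1, 0): "forward", (1, 0): "back", (0, -1): "left", (0, 1): "right"}
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for (dr, dc), name in moves.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [name]))
    return None  # no obstacle-free route exists

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]  # 1 marks an obstacle cell (e.g. a wall)
print(plan_route(grid, (0, 0), (0, 2)))
```

Breadth-first search guarantees the shortest obstacle-free route in grid steps; a production system would combine this with mapping data and live updates from the obstacle detection module, as the text describes.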
The system is lightweight, efficient, and cost-effective, making it a suitable option for visually impaired individuals who need assistance in navigating their surroundings. The device can also assist regular walkers in detecting obstacles and avoiding accidents.

Voice-Based Navigation System Development: Voice Recognition and Natural Language Processing
Voice recognition algorithms: Let x be the audio input (the user's voice command), and let y be the recognized text output. Voice recognition algorithms aim to estimate the most likely text transcription y given the audio input x. This can be represented as: y = VoiceRecognition(x). Natural language processing techniques: Let z be the interpreted intent or context of the user's instructions. Natural language processing techniques analyze the recognized text output y to understand the context and intent of the user's instructions. This can be represented as: z = NaturalLanguageProcessing(y).
1. Signal Representation and Preprocessing:
• The audio input signal, x(t), is typically represented as a discrete-time sequence of samples, where t is the time index.
• Various preprocessing techniques can be applied to enhance the quality of the audio signal. Let us denote the preprocessed audio signal as x'(t).
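The two-stage pipeline y = VoiceRecognition(x), z = NaturalLanguageProcessing(y) can be sketched with illustrative stubs; the fixed transcription and the keyword-based intent rules below are assumptions made for demonstration, not the system's actual models:

```python
def voice_recognition(audio_samples):
    # Stub: pretend the acoustic and language models transcribed the samples.
    # A real system would run the recognition pipeline described below.
    return "navigate to the kitchen"

def natural_language_processing(text):
    # Toy intent extraction via keyword matching (illustrative rules only).
    text = text.lower()
    if "navigate" in text or "go to" in text:
        return "navigation"
    if "obstacle" in text or "what is ahead" in text:
        return "obstacle_query"
    return "unknown"

# x: the audio input as a discrete-time sequence of samples.
y = voice_recognition([0.1, -0.2, 0.05])
z = natural_language_processing(y)
print(y, "->", z)
```

The point of the sketch is the data flow: raw samples x become text y, and text y becomes an intent z that the navigation system can act on.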

2. Feature Extraction:
• The preprocessed audio signal is transformed into a sequence of feature vectors that capture relevant acoustic characteristics.
• Let us denote the feature vector sequence as X = {x_1, x_2, …, x_T}, where T is the total number of feature vectors.
• Each feature vector, x_t, represents a snapshot of the audio characteristics at a specific time frame.
3. Acoustic Modeling:
• Acoustic modeling aims to estimate the conditional probability of the observed feature vectors given the underlying phonetic units or subword units.
• Let H represent a sequence of phonetic units or subword units, and let P(X | H) represent the probability of observing the feature vectors X given the phonetic or subword unit sequence H.
• Acoustic modeling techniques, such as Hidden Markov Models (HMMs) or Deep Neural Networks (DNNs), are employed to learn and estimate these probabilities.
4. Language Modeling:
• Language modeling focuses on estimating the likelihood of word sequences or phrases in a given language.
• Let W represent a word sequence, and let P(W) represent the probability distribution over word sequences.
• Language models learn the statistical patterns and context of words to estimate the probability of a particular word sequence.
5. Decoding:
• Decoding combines the acoustic and language models to find the most probable word sequence given the observed feature vectors.
• The decoding process involves finding the word sequence that maximizes the joint probability P(W, H | X), where H represents the phonetic or subword unit sequence.
• Decoding algorithms, such as Hidden Markov Model decoding or beam search, are employed to find the most probable word sequence.
6. Postprocessing:
• Postprocessing techniques are applied to refine the recognized text and improve its accuracy.
• These techniques may involve language-specific rules, grammar checks, spell checking, or statistical methods to correct common recognition errors.
7. Output:
• The output of the voice recognition algorithm is the recognized text, which represents the transcription of the spoken input.
In mathematical terms, the voice recognition algorithm involves estimating conditional probabilities of feature vectors given phonetic or subword units (acoustic modeling), estimating the likelihood of word sequences (language modeling), decoding to find the most probable word sequence, and applying postprocessing techniques. The recognized text is the final output of the algorithm.
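A toy version of the decoding step makes the combination of acoustic and language scores concrete: the decoder picks the candidate word sequence W maximizing P(X | H) · P(W). The candidate transcriptions and probabilities below are invented purely for illustration:

```python
def decode(acoustic_scores, language_model):
    """Return the candidate word sequence with the highest joint score.

    acoustic_scores maps a candidate transcription to P(X | H), the
    acoustic likelihood; language_model maps it to the prior P(W).
    """
    best, best_score = None, 0.0
    for words, p_x_given_h in acoustic_scores.items():
        score = p_x_given_h * language_model.get(words, 0.0)
        if score > best_score:
            best, best_score = words, score
    return best

# The acoustic model slightly prefers the mis-hearing "stop hair",
# but the language model knows "stop here" is far more probable.
acoustic_scores = {"stop here": 0.60, "stop hair": 0.65}   # P(X | H)
language_model = {"stop here": 0.30, "stop hair": 0.02}    # P(W)
print(decode(acoustic_scores, language_model))
```

This is the essence of decoding: neither model alone picks the right transcription, but their product does. Real decoders search over vastly larger hypothesis spaces using beam search rather than exhaustive enumeration.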

Proximity Measurement Method
Novel proximity measurement method for estimating the distance of obstacles based on their depth.
Algorithm: Novel Proximity Measurement Method
Inputs:
• Image: captured image of the environment
• Calibration Parameters: parameters for calibration and scaling
Outputs:
• Distance: estimated distance of the closest obstacle
Steps:
1. Initialize the system and calibration parameters.
2. Capture an image of the environment using the device's camera. (Input: Image)
3. Apply computer vision algorithms to detect and segment obstacles in the image. (Input: Image; Output: Segmented Obstacles)
4. For each segmented obstacle:
a. Extract depth information using sensor technology (e.g., ultrasonic or depth sensors). (Input: Segmented Obstacle; Output: Depth Information)
b. Calculate the distance of the obstacle based on the depth information and calibration parameters. (Input: Depth Information, Calibration Parameters; Output: Distance)
c. Store the distance information associated with the obstacle.
5. Repeat steps 4a to 4c for all detected obstacles in the image.
6. Analyze the stored distance information to identify the closest obstacle. (Input: Stored Distance Information; Output: Closest Obstacle)
7. Provide real-time feedback to the user about the distance of the closest obstacle. (Input: Closest Obstacle; Output: Distance)
8. Update the system continuously to adapt to changes in the environment and obstacle positions.
9. If there are more images to process, return to step 2; otherwise, proceed to step 10.
10. End the algorithm.
Calibration Parameters: Calibration parameters are variables or settings used to adjust and scale the measurements obtained from the sensors and cameras. These parameters help in calibrating the system to ensure accurate distance calculations and obstacle detection. The specific calibration parameters depend on the sensor technology used and the characteristics of the camera or sensors in the system. Examples of calibration parameters may include focal length, lens distortion coefficients, or sensor alignment values.

Distance Calculation Formula:
The distance calculation formula depends on the specific sensor technology used for depth extraction. For example, if ultrasonic sensors are employed, the distance can be estimated using the time-of-flight principle: d = (v × t) / 2, where t is the round-trip time of the ultrasonic pulse and v ≈ 343 m/s is the speed of sound in air; the division by two accounts for the pulse travelling to the obstacle and back. If depth sensors such as LiDAR (Light Detection and Ranging) or Time-of-Flight (ToF) cameras are used, the distance can be measured directly by the sensor technology.
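The ultrasonic time-of-flight estimate reduces to a one-line calculation. A minimal sketch, assuming the standard speed of sound in air at roughly room temperature (the function name and example timing are illustrative, not from the paper):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 °C

def ultrasonic_distance(echo_time_s: float) -> float:
    """Time-of-flight: the pulse travels to the obstacle and back,
    so the one-way distance is half the round-trip path length."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

# A 10 ms round-trip echo corresponds to about 1.715 m.
print(ultrasonic_distance(0.010))
```

In practice the speed of sound varies slightly with temperature and humidity, which is one reason the algorithm above exposes calibration parameters.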

Analysis of Stored Distance Information:
• The stored distance information can be analyzed to identify the closest obstacle.
• This analysis compares the stored distances associated with each obstacle and selects the obstacle with the shortest distance as the closest one.
• By evaluating the stored distance information, the algorithm determines which obstacle is in close proximity to the user and provides appropriate feedback or warnings.

This algorithm takes an input image of the environment and applies computer vision algorithms to detect and segment obstacles. Depth information is extracted using sensor technology, and the distance of each obstacle is calculated from the depth information and calibration parameters. The algorithm then identifies the closest obstacle and provides real-time feedback to the user about the estimated distance. The system continuously updates to adapt to changes in the environment, allowing for accurate obstacle detection and distance estimation, and the algorithm can be repeated for multiple images if needed.
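The per-obstacle distance loop and closest-obstacle selection (steps 4 to 6 of the method) can be sketched as follows. This is a minimal illustration under stated assumptions: the `Obstacle` type, the linear calibration model (`scale`, `offset`), and all numeric readings are hypothetical stand-ins, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    label: str
    depth_m: float  # raw depth reading from the sensor (hypothetical units)

def calibrated_distance(depth_m: float, scale: float, offset: float) -> float:
    """Apply calibration parameters to a raw depth reading.
    A linear model is assumed here purely for illustration."""
    return depth_m * scale + offset

def closest_obstacle(obstacles, scale=1.0, offset=0.0):
    """Steps 4-6: compute and store each obstacle's calibrated distance,
    then select the obstacle with the minimum distance."""
    distances = {o.label: calibrated_distance(o.depth_m, scale, offset)
                 for o in obstacles}
    label = min(distances, key=distances.get)
    return label, distances[label]

scene = [Obstacle("chair", 2.4), Obstacle("person", 1.1), Obstacle("couch", 3.0)]
label, dist = closest_obstacle(scene)
print(f"Closest obstacle: {label} at {dist:.1f} m")
```

With the sample scene above, "person" at 1.1 m is reported, since it has the shortest stored distance.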

Results And Discussion
The hardware prototype of the proposed model is a wearable device. It is designed to be compact and lightweight, allowing visually impaired individuals to wear it comfortably during their daily activities. The wearable device includes the following components:
1. Main Unit: The main unit houses the necessary electronics, including the processing unit, memory, and power source. It resembles a small device that can be attached to the user's clothing or worn on the body.
2. Camera: The device features an integrated camera or camera module that captures images of the user's surroundings. The camera is essential for capturing the visual information required for obstacle detection and navigation.
3. Sensor Technology: The device incorporates sensor technologies to enhance its functionality, such as ultrasonic or depth sensors that detect the presence and proximity of obstacles in the user's environment.
4. User Interface: The device provides a user interface, which may include buttons, touch-sensitive areas, or voice command capabilities, allowing visually impaired individuals to interact with the device, provide input, and receive feedback.
5. Connectivity: The prototype includes wireless connectivity options, such as Bluetooth or Wi-Fi, to enable communication with other devices or a companion mobile application.

The smart App Home page:
Speak Data is the name of the application. The Home page contains buttons such as Connect Bluetooth, View Data, Speak Data, Touch Speak, and Stop. In the proposed system, the Bluetooth device used is "project18". As soon as Bluetooth is turned on, the mobile app lists the available wireless devices, and the user connects to the required Bluetooth device by tapping the "Connect Bluetooth" button on the Home page of the "Speak Data" app.

Text to speech direction navigation:
Once the app is installed on the user's mobile and connected to the system's Bluetooth device, whenever an object is detected by the ultrasonic sensor the app receives the direction data via Bluetooth and uses it to guide visually impaired people (VIP). Users who want to inspect the direction data can view it by tapping the "View Data" button on the Home page.

Dynamic Object Identification voice prompt:
When the user taps the "Speak Data" button on the Home page, the app automatically opens the mobile camera and starts identifying any objects in the surroundings. If an object is identified, the app gives a voice prompt announcing the number of objects detected and the label (name) of each object; the app can also detect multiple objects in one scan. In Fig. 7, ObjectDetected[0] carries the label "teddy bear"; in Fig. 12, ObjectDetected[0] carries the label "cell phone"; and in Fig. 13, ObjectDetected[1] shows the labels "person" and "tv".
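The construction of the spoken announcement from the detected labels can be sketched as below. The function name and message wording are illustrative assumptions, not the app's actual code; the indexing mirrors the ObjectDetected[i] array described above, and the resulting string would be handed to a text-to-speech engine.

```python
def build_voice_prompt(detected_labels):
    """Compose a spoken message from a list of detected object labels,
    mirroring the ObjectDetected[i] array indexing used by the app.
    (Hypothetical helper, for illustration only.)"""
    if not detected_labels:
        return "No objects detected"
    parts = [f"ObjectDetected[{i}]: {label}"
             for i, label in enumerate(detected_labels)]
    return f"{len(detected_labels)} object(s) detected. " + ". ".join(parts)

print(build_voice_prompt(["teddy bear"]))
print(build_voice_prompt(["person", "tv"]))
```

A multi-object scan simply extends the list, so the same routine covers both the single-detection and multiple-detection cases shown in the figures.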

Static Object Identification:
Clicking "Touch Speak" on the Home page redirects to a static page. When the user taps one of its buttons, the camera opens and identifies the corresponding static object. To exit the app or stop receiving data, the user taps the "Stop" button, which terminates the app. In this working process, the ultrasonic sensor transmits and receives signals, and the Bluetooth wireless link delivers the text data to the app, which converts it to text-to-speech output to guide the blind user.
Fig. 19. Working process of the algorithms with input and output.

In this working process, the trained model's detections are the input to the app: as soon as an object is detected with its trained label, the app receives the label as text and converts it into speech output, letting blind people interact with their environment. Figure 20 shows the obstacle detection accuracy for various objects under different lighting conditions, specifically during the day and at night, with accuracy values given as percentages from 0% to 100%.
• Daytime Detection Accuracy: This column gives the accuracy of detecting obstacles during the daytime, i.e., the likelihood of correctly identifying an object as the specified obstacle in daylight conditions.
• Night Detection Accuracy: This column gives the accuracy of detecting obstacles at night, i.e., the likelihood of correctly identifying an object as the specified obstacle in low-light or nighttime conditions.
• Object Categories: Each row represents a specific object category, such as "Human," "Two Wheeler," "Couch," "Chair," and so on.

• Accuracy Values: The values in the table indicate the detection accuracy for each object category under the respective lighting conditions. For example, in the "Human" row, the daytime detection accuracy is 0.95 (95%), while the nighttime detection accuracy is 0.76 (76%).
• Interpretation: Higher accuracy values indicate that the system detects the specified obstacles more reliably. For instance, an accuracy of 0.95 (95%) indicates a high likelihood of correctly identifying the obstacle, while 0.76 (76%) indicates relatively lower accuracy.
It is important to note that the accuracy values in the table are presented without details of the specific methodology or dataset used to compute them; obstacle detection accuracy can vary with the algorithms, training data, and evaluation metrics employed.
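The day-versus-night comparison can be made explicit by computing the per-category accuracy drop. Only the "Human" row's values (0.95 day, 0.76 night) appear in the text above; the second row here is a placeholder, and the function name is an illustrative assumption.

```python
# Day/night detection accuracies. The "Human" values come from the text;
# the "Two Wheeler" values are placeholders for illustration only.
accuracy = {
    "Human":       {"day": 0.95, "night": 0.76},
    "Two Wheeler": {"day": 0.90, "night": 0.70},
}

def night_degradation(table):
    """Per-category drop in detection accuracy from day to night."""
    return {obj: round(v["day"] - v["night"], 2) for obj, v in table.items()}

print(night_degradation(accuracy))  # "Human" drops by 0.19
```

Summaries like this make it easy to flag the categories whose low-light performance most needs improvement, e.g. via infrared illumination or low-light training data.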

Conclusion And Future Work
This paper presents a navigation and obstacle detection system called SMART_EYE, designed to assist visually impaired individuals through a smart application. The proposed system aims to address the challenges faced by visually impaired people in navigating unfamiliar environments. By incorporating AI and sensor technology, SMART_EYE provides real-time assistance in detecting and classifying obstacles, enabling users to navigate independently. The system captures and analyzes images using computer vision algorithms and detects obstacles using ultrasonic sensors. Users receive feedback through voice commands, enhancing their situational awareness. The proposed model offers a cost-effective and efficient solution, providing qualitative and quantitative performance measures to evaluate its impact on visually impaired individuals' lives.
The SMART_EYE system lays the foundation for further advancements and improvements in assisting visually impaired individuals. Future research can focus on the following aspects:
Enhanced Object Classification: Improving the accuracy and efficiency of the object classification algorithms would enable SMART_EYE to recognize a wider range of objects and provide more detailed information about the environment.