Landslide Susceptibility Mapping through Weightages Derived from Statistical Information Value Model
An Efficient Foot Ulcer Determination System for Diabetic Patients
Statistical Wavelet based Adaptive Noise Filtering Technique for MRI Modality
Real Time Sign Language: A Review
Remote Sensing Schemes Mingled with Information and Communication Technologies (ICTs) for Flood Disaster Management
FPGA Implementation of Shearlet Transform Based Invisible Image Watermarking Algorithm
A Comprehensive Study on Different Pattern Recognition Techniques
User Authentication and Identification Using Neural Network
Flexible Generalized Mixture Model Cluster Analysis with Elliptically-Contoured Distributions
Efficient Detection of Suspected Areas in Mammographic Breast Cancer Images
Recent advancements in medical imaging technology, such as applying the InceptionV3 deep learning architecture to MRI scans, have transformed brain tumor detection. These models analyze MRI images rapidly and accurately, aiding the precise identification of potential tumors. This integration improves radiologists' efficiency, enabling timely interventions and better patient outcomes. The synergy between MRI technology and deep learning marks a significant step forward in neurology, promising more personalized and effective care for patients with brain tumors. Ongoing innovation in medical imaging and AI holds great potential for further improving diagnostic accuracy and treatment effectiveness.
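The InceptionV3-on-MRI pipeline described above typically takes the form of transfer learning: a pre-trained backbone with a small classification head. Below is a minimal Keras sketch assuming a binary tumor/no-tumor task on 299x299 RGB slices; the class count, dropout rate, and learning rate are illustrative assumptions, not details from the source.

```python
import tensorflow as tf

def build_tumor_classifier(weights=None, num_classes=2):
    """Transfer-learning sketch: InceptionV3 backbone + small classifier head.

    Pass weights="imagenet" in practice; None here avoids a weight download.
    """
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights=weights, input_shape=(299, 299, 3))
    base.trainable = False  # freeze the backbone; optionally fine-tune later
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dropout(0.3)(x)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In a typical workflow the frozen model is first trained on labeled MRI slices, after which the top Inception blocks are unfrozen for fine-tuning at a lower learning rate.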
Hand gesture recognition has become increasingly popular as a practical means of enhancing human-computer interaction, especially with its widespread adoption in gaming consoles such as the Xbox and PS4, as well as in laptops and smartphones. The technology finds applications in various fields, including accessibility support, crisis management, and medicine. Existing systems are often built in Java with a comprehensive backing database, documented through their evolutionary design methods, detailed descriptions, and the outputs and tests used to refine the software artifact. The proposed system sits at the intersection of machine learning and image processing, using various APIs and tools to streamline processing and enhance customization. It is developed in Python with OpenCV, using OpenCV for image processing and Python's machine learning libraries for gesture classification. The result is a lighter system with simpler code and a smaller database footprint, allowing it to run efficiently even on mini computers without compromising the user experience, yielding a cost-effective solution capable of handling a variety of tasks.
Multi-modal image fusion is a key component of computer vision applications such as satellite remote sensing and medical diagnosis. Various multi-modal image fusion techniques exist, each with its own advantages and disadvantages. This paper proposes a new method based on multi-scale guided filtering. First, each source image is decomposed into coarse and fine layers at multiple scales using a guided filter. To fuse these layers, two different saliency maps are used: an energy saliency map is applied to the coarse layers, and a modified spatial-frequency saliency map to the fine layers. Simulation results show that the proposed technique outperforms other state-of-the-art techniques in quantitative quality evaluations. All simulations are carried out on a standard brain atlas database.
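A single-scale, two-layer sketch of the decomposition-and-fusion idea follows (the paper uses multiple scales and a modified spatial-frequency map; the radius, window size, and eps here are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-3):
    """Edge-preserving smoothing of p guided by I (guided-filter formulation)."""
    win = 2 * r + 1
    mean_I = uniform_filter(I, win)
    mean_p = uniform_filter(p, win)
    cov_Ip = uniform_filter(I * p, win) - mean_I * mean_p
    var_I = uniform_filter(I * I, win) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, win) * I + uniform_filter(b, win)

def fuse(img1, img2, r=4, win=9):
    """Two-layer guided-filter fusion with saliency-based layer selection."""
    # 1) decompose each image into a coarse (base) and fine (detail) layer
    c1, c2 = guided_filter(img1, img1, r), guided_filter(img2, img2, r)
    f1, f2 = img1 - c1, img2 - c2
    # 2) coarse layers: choose per pixel by local energy saliency
    e1, e2 = uniform_filter(c1 ** 2, win), uniform_filter(c2 ** 2, win)
    coarse = np.where(e1 >= e2, c1, c2)
    # 3) fine layers: choose per pixel by local spatial-frequency saliency
    def spatial_freq(x):
        rf = np.zeros_like(x); rf[:, 1:] = (x[:, 1:] - x[:, :-1]) ** 2
        cf = np.zeros_like(x); cf[1:, :] = (x[1:, :] - x[:-1, :]) ** 2
        return np.sqrt(uniform_filter(rf + cf, win))
    fine = np.where(spatial_freq(f1) >= spatial_freq(f2), f1, f2)
    return coarse + fine
```

The full method repeats the decomposition at several filter scales; since each image is the exact sum of its coarse and fine layers, fusing an image with itself returns the image unchanged, a useful sanity check.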
This paper presents the design and development of a mobile application, built with Flutter, that leverages object detection to enhance the lives of visually impaired individuals. The application addresses a crucial challenge faced by this community: the lack of real-time information about their surroundings. The proposed solution uses pre-trained machine learning models, potentially through TensorFlow Lite for on-device processing, to identify objects within the user's field of view as captured by the smartphone camera. The application goes beyond simple object recognition: detected objects are translated into natural-language descriptions through text-to-speech functionality, providing essential auditory cues about the environment. This real-time information stream empowers users to navigate their surroundings with greater confidence and independence. Accessibility is a core design principle. The user interface is designed for compatibility with screen readers, ensuring seamless interaction for users who rely on assistive technologies, and haptic feedback mechanisms provide non-visual cues that enhance the user experience. The ultimate goal is a user-friendly and informative application that helps visually impaired individuals gain greater independence in their daily lives, with the potential to improve spatial awareness, foster a sense of security, and promote inclusion within society.
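The on-device pattern the paper relies on can be sketched with TensorFlow Lite in Python: convert a model to the TFLite format, then run the interpreter on a camera frame. The tiny classifier and label set below are stand-ins; the actual app would convert a pre-trained detector (e.g., an SSD MobileNet variant) and hand the label string to text-to-speech.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in classifier; a real app would convert a pre-trained
# object detector such as SSD MobileNet instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input((64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# On-device style inference with the TFLite interpreter
interp = tf.lite.Interpreter(model_content=tflite_bytes)
interp.allocate_tensors()
inp = interp.get_input_details()[0]
out = interp.get_output_details()[0]
frame = np.random.rand(1, 64, 64, 3).astype(np.float32)  # stands in for a camera frame
interp.set_tensor(inp["index"], frame)
interp.invoke()
probs = interp.get_tensor(out["index"])[0]
LABELS = ["person", "chair", "door"]  # hypothetical label set
print(f"Detected: {LABELS[int(np.argmax(probs))]} ({probs.max():.2f})")
```

In the Flutter app the equivalent interpreter runs through a TFLite plugin on the device, and the resulting label string is spoken aloud via the platform's text-to-speech service.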
Wood defect detection is a critical aspect of quality control in the woodworking industry. This work introduces Deep Wood Inspect, a pioneering system that leverages the capabilities of deep learning for the precise identification and classification of defects in wooden materials. The proposed methodology utilizes densely connected Convolutional Neural Networks (CNNs), specifically DenseNet, to analyze high-resolution images of wood surfaces, providing an automated and efficient solution for defect detection. By integrating advanced image processing techniques with machine learning algorithms, Deep Wood Inspect not only enhances the accuracy of defect identification but also accelerates the inspection process, reducing manual labor and minimizing human error. The system's adaptability to various types of wood and defect categories further contributes to its robustness, making it a valuable tool for both large-scale manufacturers and smaller woodworking enterprises seeking to uphold high standards of quality control.
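One plausible realization of such a system, sketched under assumptions the abstract does not spell out (the patch size, stride, and defect classes below are hypothetical), is to slide a DenseNet121 patch classifier over the high-resolution board image and record non-sound patches:

```python
import numpy as np
import tensorflow as tf

DEFECTS = ["sound", "knot", "crack", "resin"]  # hypothetical defect classes

def build_patch_classifier(weights=None):
    """DenseNet121 backbone (weights="imagenet" in practice) + defect head."""
    base = tf.keras.applications.DenseNet121(
        include_top=False, weights=weights,
        input_shape=(224, 224, 3), pooling="avg")
    out = tf.keras.layers.Dense(len(DEFECTS), activation="softmax")(base.output)
    return tf.keras.Model(base.input, out)

def inspect_board(model, board, patch=224, stride=224):
    """Slide over a high-resolution board image, classifying each patch."""
    h, w, _ = board.shape
    findings = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = board[y:y + patch, x:x + patch][None].astype(np.float32) / 255.0
            label = DEFECTS[int(np.argmax(model.predict(p, verbose=0)))]
            if label != "sound":
                findings.append((x, y, label))  # defect location and type
    return findings
```

An overlapping stride (stride < patch) trades throughput for localization accuracy; the returned coordinates can then drive marking or cut-list decisions on the production line.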