My research interests lie at the intersection of machine learning, computer vision, and computational geometry, and their application to localization, mapping, motion planning, and state estimation for intelligent autonomous systems such as autonomous vehicles and mobile robots. Specifically, my current research focuses on developing vision-based localization and mapping techniques that are robust to scene variations caused by illumination and weather.

Visual Localization and Mapping

Visual localization enables autonomous vehicles and robots to navigate based on visual observations of their operating environment: the agent estimates its pose from the images captured by its camera. The operating environment can change substantially due to illumination, day-night cycles, seasons, structural changes, and so on, and it is important for vision-based localization to adapt to these changes, which can significantly impact visual perception. We develop methods that enable autonomous agents to localize robustly despite such changes in the surroundings. For example, we developed a visual place recognition system that helps an autonomous agent identify its location on a large-scale map by retrieving the reference image that most closely matches the query image from the camera. The proposed method uses concise image descriptors so that retrieval can be performed rapidly with low memory consumption.
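The retrieval step can be illustrated with a minimal sketch. The descriptors below are random placeholders; in the actual system they would be compact embeddings computed from images:

```python
import numpy as np

def retrieve_place(query_desc, ref_descs):
    """Return the index of the reference descriptor most similar to the
    query, using cosine similarity over L2-normalized descriptors."""
    q = query_desc / np.linalg.norm(query_desc)
    refs = ref_descs / np.linalg.norm(ref_descs, axis=1, keepdims=True)
    return int(np.argmax(refs @ q))

# Toy map of 4 reference places with 8-D descriptors (random placeholders).
rng = np.random.default_rng(0)
ref_descs = rng.normal(size=(4, 8))
match = retrieve_place(ref_descs[1], ref_descs)  # re-observation of place 1
```

Because the descriptors are short vectors, the nearest-neighbor search scales to large maps with modest memory.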

LLM-based Robot Task Allocation

Large Language Models (LLMs) such as ChatGPT are artificial intelligence systems capable of understanding and generating human-like text across a wide range of topics. Leveraging LLMs, we implemented a task allocation system for heterogeneous robot teams that generates task-allocated Python scripts, assigning tasks based on each robot's skills and capabilities.
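As a rough sketch of the idea, a prompt can be assembled from robot skill profiles and task requirements as below; the robot names, skills, and tasks are hypothetical, and the call that sends the prompt to an LLM is omitted:

```python
def build_allocation_prompt(robots, tasks):
    """Assemble a prompt describing robot skills and task requirements.
    The LLM's reply is expected to be a Python script assigning tasks."""
    lines = ["You are a task allocator for a heterogeneous robot team.",
             "Robots and their skills:"]
    for name, skills in robots.items():
        lines.append(f"- {name}: {', '.join(sorted(skills))}")
    lines.append("Tasks and required skills:")
    for task, needed in tasks.items():
        lines.append(f"- {task}: {', '.join(sorted(needed))}")
    lines.append("Emit a Python script that allocates every task to a capable robot.")
    return "\n".join(lines)

# Hypothetical team and tasks for illustration.
robots = {"uav_1": {"fly", "camera"}, "ugv_1": {"drive", "gripper"}}
tasks = {"inspect_roof": {"fly", "camera"}, "fetch_toolbox": {"drive", "gripper"}}
prompt = build_allocation_prompt(robots, tasks)
```

The generated script can then be validated against the skill profiles before execution on the robots.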

UAV for Inspection

Unmanned Aerial Vehicles (UAVs) for inspection utilize aerial capabilities to perform thorough and efficient inspections of infrastructure, pipelines, or facilities, providing high-resolution data and real-time insights without human intervention in challenging or hazardous environments. We develop motion planning strategies for inspecting structures based on a demonstration by an expert pilot. This lets us inspect only the parts of a structure demonstrated by the expert and scale the same trajectory to numerous structures of the same kind.
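A minimal sketch of the trajectory-transfer idea, assuming each structure can be summarized by a center and a characteristic size (both assumptions for illustration): the demonstrated path is normalized relative to the demonstrated structure and re-scaled onto a new one.

```python
import numpy as np

def transfer_trajectory(demo_traj, demo_center, demo_size, new_center, new_size):
    """Map a demonstrated inspection path onto a similar structure of a
    different size and location by normalizing and re-scaling waypoints."""
    normalized = (np.asarray(demo_traj) - demo_center) / demo_size
    return normalized * new_size + new_center

# Demonstrated path around a structure of size 10 centered at the origin.
demo = np.array([[10.0, 0.0, 5.0], [0.0, 10.0, 5.0], [-10.0, 0.0, 5.0]])
new = transfer_trajectory(demo, np.zeros(3), 10.0,
                          np.array([50.0, 50.0, 0.0]), 20.0)
```

A single expert demonstration can thus be replayed on a whole family of geometrically similar structures.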

Delivery Robots

Delivery robots are autonomous vehicles designed to transport goods from one location to another, streamlining last-mile logistics and offering a futuristic solution for efficient and contactless delivery services. We developed a drone-based delivery system that can deliver packages at the recipient's door, as opposed to dropping them at a marker or in the backyard.

Cloud Robotics

Cloud robotics involves the integration of robotic systems with cloud computing resources, enabling robots to access and leverage data, computation, and machine learning capabilities from the cloud to enhance their performance, adaptability, and collaborative functionalities. We developed an offloading method that uses reinforcement learning to predict the computational intensity of a task and decide whether it should be offloaded to the cloud or computed locally.
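The offloading decision can be sketched as a small tabular Q-learning problem with a hypothetical latency model (local runtime grows with task size; the cloud is faster but adds a fixed network delay). The task sizes, costs, and hyperparameters below are illustrative, not the actual system's values:

```python
import numpy as np

SIZES = [1.0, 4.0, 10.0]   # discretized predicted task intensity (illustrative)

def latency(size, action):
    """Hypothetical cost model: action 0 = compute locally, 1 = offload."""
    return size if action == 0 else size / 10.0 + 2.0

rng = np.random.default_rng(1)
Q = np.zeros((len(SIZES), 2))
alpha, eps = 0.5, 0.2
for _ in range(2000):
    s = rng.integers(len(SIZES))                 # a task of random intensity arrives
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
    r = -latency(SIZES[s], a)                    # reward = negative latency
    Q[s, a] += alpha * (r - Q[s, a])             # single-step (bandit) update

policy = [int(np.argmax(Q[s])) for s in range(len(SIZES))]
```

Under this cost model the learned policy keeps light tasks local (network delay dominates) and offloads heavy ones (cloud speed-up dominates).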

Multi-Human Multi-Robot Teaming

Multi-human Multi-Robot Teaming refers to a collaborative approach where multiple human operators and multiple robotic agents work together synergistically to achieve complex tasks. This requires seamless communication and monitoring of human and robot states, based on which tasks can be allocated. We built a dynamic system that allocates tasks to robots and humans based on human cognitive state and robot conditions.
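One way to sketch the allocation step is with a cost matrix in which entry (t, a) scores how costly task t is for agent a, e.g., inflated by high cognitive load for a human or a poor battery state for a robot. The matrix below is hypothetical:

```python
import itertools
import numpy as np

def allocate(cost):
    """Brute-force optimal one-to-one assignment of tasks (rows) to agents
    (columns), minimizing total cost. Adequate for small teams."""
    n_tasks, n_agents = cost.shape
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(n_agents), n_tasks):
        c = sum(cost[t, a] for t, a in enumerate(perm))
        if c < best_cost:
            best, best_cost = perm, c
    return best, best_cost

# Hypothetical costs: rows = 2 tasks, cols = [human_1, robot_1, robot_2].
cost = np.array([[4.0, 1.0, 3.0],    # task 0 is cheapest for robot_1
                 [2.0, 6.0, 5.0]])   # task 1 is cheapest for human_1
assignment, total = allocate(cost)
```

Re-solving the assignment as the monitored human and robot states change yields the dynamic behavior described above.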

USV Sediment Sampling

Unmanned Surface Vehicles (USVs) are increasingly employed for sediment sampling, providing an efficient and cost-effective means to collect environmental data in aquatic ecosystems. These vessels navigate water surfaces autonomously, enabling systematic and precise sediment sampling for environmental monitoring, ecological research, and sedimentary analysis without the need for human presence or intervention. We developed a USV capable of navigating autonomously and collecting sediment samples using a Van Veen grab sampler.

Geometric Segmentation and Feature Extraction

Geometric segmentation of a CAD (Computer-Aided Design) model involves partitioning the overall structure into smaller, distinct parts based on geometric characteristics. This process identifies and separates components within the model, facilitating more detailed analysis, modification, or assembly. We devised a segmentation model and an encoder that encodes the segments for matching and retrieval of similar CAD models.
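A toy version of the segment-encoding idea: describe a segment's point set by simple geometric statistics (covariance eigenvalues and bounding-box extents), which are invariant to translation, so relocated copies of the same part match exactly. The actual encoder is learned; this is only an illustration of descriptor-based matching:

```python
import numpy as np

def encode_segment(points):
    """Toy geometric descriptor of a 3-D point set: sorted covariance
    eigenvalues plus sorted bounding-box extents (translation-invariant)."""
    p = np.asarray(points, dtype=float)
    p = p - p.mean(axis=0)                 # centering removes translation
    eig = np.sort(np.linalg.eigvalsh(p.T @ p / len(p)))[::-1]
    extent = np.sort(p.max(axis=0) - p.min(axis=0))[::-1]
    return np.concatenate([eig, extent])

rng = np.random.default_rng(2)
segment = rng.normal(size=(50, 3))
shifted = segment + np.array([5.0, -3.0, 2.0])  # same part, different location
d1, d2 = encode_segment(segment), encode_segment(shifted)
```

Comparing such descriptors (e.g., by Euclidean distance) supports retrieval of CAD models containing geometrically similar parts.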

Geometric Reconstruction

Geometric surface reconstruction is the computational process of creating a three-dimensional representation of an object or environment from scattered data points, typically obtained through 3D scanning or imaging, to accurately recreate its physical structure. This technique is essential in fields such as computer vision, robotics, and virtual reality for precise modeling and analysis. I worked on the geometric reconstruction of 2D and 3D point clouds using a Delaunay triangulation-based method. The methods focused on robustness to noise and outliers, which are common in real-world data.
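The robustness idea can be sketched in 2D: triangulate the points and keep only triangles whose longest edge is short, so sliver triangles that bridge outliers or gaps are discarded. This is an alpha-shape-style filter, and the edge threshold is a tunable assumption rather than part of the original method:

```python
import numpy as np
from scipy.spatial import Delaunay

def reconstruct_2d(points, max_edge):
    """Delaunay-triangulate 2-D points, then drop triangles whose longest
    edge exceeds `max_edge` -- long edges typically bridge outliers or gaps."""
    tri = Delaunay(points)
    kept = []
    for simplex in tri.simplices:
        pts = points[simplex]
        edges = [np.linalg.norm(pts[i] - pts[(i + 1) % 3]) for i in range(3)]
        if max(edges) <= max_edge:
            kept.append(simplex)
    return np.array(kept)

# Points on a unit grid plus one far-away outlier (index 16).
grid = np.array([[x, y] for x in range(4) for y in range(4)], dtype=float)
points = np.vstack([grid, [[20.0, 20.0]]])
triangles = reconstruct_2d(points, max_edge=1.5)
```

The surviving triangles cover the grid, while every triangle incident to the outlier is rejected by the edge-length test.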