Visual SLAM: Mapping & Localization of Mobile Robots

Utilizing monocular cameras and ROS for robust mapping and precise localization.



Project Overview

This project focuses on implementing a V-SLAM algorithm with a monocular USB camera. It relies on keyframe extraction and on feature differences between adjacent keyframes, refined with bundle adjustment, using the Stella VSLAM algorithm and ROS 2. Computational offloading was implemented by splitting the system into a robot-end node and a computer-end node, with a Raspberry Pi and an Arduino handling robot control and the ROS 2 implementation.
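To make the offloading split concrete, below is a minimal sketch of a ROS 2 node that could run on the robot end (Raspberry Pi), publishing frames from the monocular USB camera for the computer-end SLAM node to consume. The node name, topic, camera index, and frame rate are illustrative assumptions rather than the project's actual configuration.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2


class CameraPublisher(Node):
    """Robot-end node: streams USB camera frames to the computer-end SLAM node."""

    def __init__(self):
        super().__init__('camera_publisher')          # node name is an assumption
        self.pub = self.create_publisher(Image, '/camera/image_raw', 10)
        self.bridge = CvBridge()
        self.cap = cv2.VideoCapture(0)                # monocular USB camera (index assumed)
        self.timer = self.create_timer(1.0 / 30.0, self.publish_frame)  # ~30 FPS

    def publish_frame(self):
        ok, frame = self.cap.read()
        if not ok:
            self.get_logger().warning('Failed to read frame from camera')
            return
        msg = self.bridge.cv2_to_imgmsg(frame, encoding='bgr8')
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'camera'
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = CameraPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```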

Explanation

V-SLAM (Visual Simultaneous Localization and Mapping) is a technique that allows a robot to build a map of its environment while simultaneously localizing itself within that map. This project used a monocular camera to capture visual data, which was processed in real time by the Stella VSLAM algorithm integrated with ROS 2. The algorithm extracts features from keyframes, tracks how those features shift between adjacent images, and refines the map with bundle adjustment, enabling mapping from a single monocular camera.
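As a rough illustration of the feature-tracking step (not the project's actual pipeline), the snippet below detects ORB keypoints in two adjacent frames with OpenCV and matches them by descriptor distance; the image file names and feature count are placeholders.

```python
import cv2

# Two adjacent frames from the camera stream (placeholder file names)
img_prev = cv2.imread('frame_000.png', cv2.IMREAD_GRAYSCALE)
img_curr = cv2.imread('frame_001.png', cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and compute binary descriptors in each frame
orb = cv2.ORB_create(nfeatures=2000)
kp_prev, des_prev = orb.detectAndCompute(img_prev, None)
kp_curr, des_curr = orb.detectAndCompute(img_curr, None)

# Brute-force matching with Hamming distance; cross-check removes ambiguous matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_prev, des_curr), key=lambda m: m.distance)

print(f'{len(matches)} feature correspondences between adjacent frames')
```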

Robot V-SLAM Implementation
The chosen visual SLAM algorithm, Stella VSLAM, is employed to enable the robot to autonomously navigate and understand its environment. The algorithm extracts distinctive visual features from the monocular camera images, tracks them across consecutive frames, and concurrently constructs a map of the surroundings. Simultaneously, it estimates the robot's pose in real time, providing accurate localization information. This approach is particularly advantageous for mobile robots operating in diverse and dynamic environments where traditional mapping and localization methods may be limited.
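At the heart of monocular motion estimation is recovering the relative camera pose from such feature correspondences. The sketch below uses OpenCV's essential-matrix decomposition as a simplified stand-in for what the SLAM tracker does internally; the intrinsic values are hypothetical, and with a single camera the translation is only recovered up to scale.

```python
import numpy as np
import cv2

# Hypothetical camera intrinsics (fx, fy, cx, cy) obtained from calibration
K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])


def estimate_relative_pose(pts_prev, pts_curr, K):
    """Estimate rotation R and unit-scale translation t between two frames.

    pts_prev, pts_curr: Nx2 arrays of matched pixel coordinates.
    """
    # RANSAC rejects outlier matches while fitting the essential matrix
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                      method=cv2.RANSAC, prob=0.999, threshold=1.0)
    # Decompose E and keep the (R, t) that places points in front of both cameras
    _, R, t, inliers = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
    return R, t, inliers
```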

Methodology

Working overview of the mobile robot

The project followed these key steps:

  • Configured a four-wheeled differential drive robot equipped with a monocular USB camera.
  • Integrated the Stella VSLAM algorithm with ROS to process visual data and extract features.
  • Implemented motion estimation and map generation through real-time feature matching and pose calculation.
  • Calibrated the camera for accurate depth perception and reduced distortion; a minimal calibration sketch follows this list.
Mapping and camera calibration technique
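The sketch below shows one way the monocular camera could be calibrated with OpenCV's chessboard routine; the board dimensions, square size, and image directory are assumptions. The resulting intrinsic matrix and distortion coefficients are the values a SLAM configuration would consume.

```python
import glob
import numpy as np
import cv2

# Hypothetical 9x6 inner-corner chessboard captured with the USB camera
pattern_size = (9, 6)
square_size = 0.025  # square edge length in metres (assumption)

# 3-D positions of the chessboard corners in the board's own frame
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in glob.glob('calib_images/*.png'):  # placeholder image directory
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        # Refine corner locations to sub-pixel accuracy
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix K and distortion coefficients for undistortion / SLAM config
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print('RMS reprojection error:', rms)
print('Camera matrix:\n', K)
print('Distortion coefficients:', dist.ravel())
```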

Results

Feature extraction, localization, and mapping results

The robot successfully mapped indoor environments with good accuracy. Localization was performed within the previously built map, and the system was also capable of carrying out full SLAM tasks with a single camera. It demonstrated minimal drift and robust performance in dynamic conditions, such as changing lighting and moving obstacles.

Key Findings

  • Monocular cameras provide a cost-effective alternative to expensive LIDAR sensors for SLAM applications.
  • The Stella VSLAM algorithm demonstrated high robustness in both textured and texture-less environments.
  • ROS integration allowed efficient processing and visualization using RViz.
Conclusion

This V-SLAM implementation proved effective for autonomous mobile robots, offering a cost-efficient alternative to traditional LIDAR-based systems. Future improvements include:

  • Integrating stereo cameras for direct depth perception
  • Sensor fusion with additional sensors such as IMUs and GPS