Visual Navigation for Autonomous Flying Robots

Introduction
In recent years, flying robots such as autonomous quadcopters have gained increasing interest in robotics and computer vision research. To navigate safely, these robots need the ability to localize themselves autonomously using their onboard sensors. Potential applications of such systems include automatic 3D reconstruction of buildings, inspection and simple maintenance tasks, surveillance of public places, and search and rescue.
In this project, I will study and apply current techniques for 3D localization, mapping and navigation that are suitable for quadcopters, and attempt to improve on the existing algorithms. The project is based on the following topics:
Necessary background on robot hardware, sensors, 3D transformations
Motion estimation from images (including interest point detection, feature descriptors, robust estimation, visual odometry, iterative closest point); see the sketch after this list
Filtering techniques and data fusion
Non-linear minimization, bundle adjustment, place recognition, 3D reconstruction
Autonomous navigation, path planning, exploration of unknown environments
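
To give a flavour of the motion-estimation building blocks listed above, here is a minimal sketch in Python/OpenCV: ORB interest points are detected in two frames, their descriptors are matched, and RANSAC is used as the robust estimator to reject wrong matches. The file names and parameter values are placeholders, not part of the proposal.

    import cv2
    import numpy as np

    # Load two consecutive frames (file names are placeholders).
    img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    # Detect ORB interest points and compute binary descriptors.
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match descriptors with Hamming distance; keep the best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:300]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robust estimation: fit the fundamental matrix with RANSAC and
    # count the matches that are consistent with it (the inliers).
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    print("inliers:", int(inlier_mask.sum()), "of", len(matches))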

Deliverables:
By the end of my Hons. project, the following deliverables will be developed:
SLAM module: The following SLAM components will be built:
Large-Scale Direct Monocular SLAM (LSD-SLAM): In the first phase of my project, I will implement LSD-SLAM in OpenCV by integrating lsd_slam_core. After the core library is ported, the openFABMap package will be used to detect loop closures. Finally, PCL (Point Cloud Library) will be integrated with the module for map visualization.
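
openFABMap performs probabilistic, appearance-based place recognition. The sketch below is not FAB-MAP itself; it is a much simpler stand-in that scores a new keyframe against previously stored keyframes by ORB descriptor matching and flags a loop-closure candidate when the similarity is high. All thresholds and parameters are assumptions, intended only to illustrate where loop-closure detection plugs into the pipeline.

    import cv2

    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    keyframe_db = []  # (frame_id, descriptors) of previously added keyframes

    def check_loop_closure(frame_id, gray_image, min_score=0.35):
        """Return the id of a plausible loop-closure keyframe, or None."""
        _, des = orb.detectAndCompute(gray_image, None)
        if des is None:
            return None
        best_id, best_score = None, 0.0
        for old_id, old_des in keyframe_db:
            # Fraction of cross-checked matches is a crude appearance
            # similarity score between the new and the stored keyframe.
            matches = matcher.match(des, old_des)
            score = len(matches) / float(len(des))
            if score > best_score:
                best_id, best_score = old_id, score
        keyframe_db.append((frame_id, des))
        return best_id if best_score >= min_score else None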

Dense Visual SLAM for RGB-D Cameras: After the mid-term evaluation, I will implement SLAM for RGB-D cameras. An entropy-based similarity measure for keyframe selection and loop closure detection will be included. The calib3d module in OpenCV will be used for camera calibration, 3D reconstruction and for recovering the camera intrinsics when porting the dvo_slam library to OpenCV.
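
The calib3d dependency is the most concrete part of this step: camera intrinsics can be recovered with a standard chessboard calibration. A minimal sketch is shown below; the board dimensions, square size and image paths are assumptions.

    import glob
    import cv2
    import numpy as np

    # Inner-corner count of the calibration chessboard and its square size
    # in metres (both values are assumptions for this sketch).
    board_size = (9, 6)
    square = 0.025

    # 3D coordinates of the chessboard corners in the board's own frame.
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square

    obj_points, img_points = [], []
    for path in glob.glob("calib/*.png"):  # calibration images (placeholder path)
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)

    # calib3d: estimate the intrinsic matrix K and the distortion coefficients.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("RMS reprojection error:", rms)
    print("Intrinsics K:\n", K)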

Visual odometry module: Once the SLAM system is built, semi-dense and dense visual odometry will be implemented for monocular and RGB-D cameras respectively. Configuration APIs will be developed to set the data rate and precision of the local x, y, z coordinates. Finally, the local visual odometry will be fused with the global position estimates from GPS (latitude, longitude), and an altimeter will be used to fuse the z coordinate.
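
For the monocular case, the frame-to-frame motion can be sketched with OpenCV's essential-matrix functions, and the local estimate can then be blended with the global GPS/altimeter fixes. The intrinsics, fusion weights and the simple complementary blend below are assumptions (the actual module would use a proper filter), and monocular translation is only recovered up to scale.

    import cv2
    import numpy as np

    # Assumed pinhole intrinsics of the monocular camera.
    K = np.array([[525.0, 0.0, 319.5],
                  [0.0, 525.0, 239.5],
                  [0.0, 0.0, 1.0]])

    def relative_pose(prev_gray, curr_gray):
        """Rotation R and unit-scale translation t between two frames."""
        orb = cv2.ORB_create(nfeatures=1500)
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t  # |t| = 1: monocular scale must come from another sensor

    def fuse_position(vo_xyz, gps_xy, altimeter_z, w_gps=0.2, w_alt=0.5):
        """Crude complementary blend of local VO with GPS/altimeter fixes.

        gps_xy is assumed to be already converted from (lat, long) to local
        metres; the weights are illustrative, not tuned values.
        """
        fused = np.asarray(vo_xyz, dtype=float).copy()
        fused[:2] = (1.0 - w_gps) * fused[:2] + w_gps * np.asarray(gps_xy, dtype=float)
        fused[2] = (1.0 - w_alt) * fused[2] + w_alt * float(altimeter_z)
        return fused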

Tracking module: The idea is to develop a robust optical flow tracker that runs in real time using a downward-facing camera. Currently, to compute the planar motion of a quadcopter, users are restricted to ADNS-family optical flow sensors, which are essentially mouse sensors. With this module in OpenCV, users will be able to use any generic camera to compute coarse motion vectors and, in turn, the 2D velocity of the quadcopter with much greater accuracy and efficiency. Once the planar velocities are computed, they can be used to stabilize the quadcopter while hovering.
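
A minimal version of this idea uses OpenCV's pyramidal Lucas-Kanade tracker on the downward-facing camera and converts the median pixel displacement into a metric planar velocity using the current altitude and the camera focal length. The sketch below ignores rotation compensation and assumes a flat ground plane; all parameter values are illustrative.

    import cv2
    import numpy as np

    def planar_velocity(prev_gray, curr_gray, altitude_m, fx, fy, dt):
        """Estimate (vx, vy) in m/s from a downward-facing camera.

        altitude_m, fx, fy (focal lengths in pixels) and dt are assumed to be
        supplied by the altimeter, the calibration and the frame timestamps.
        """
        # Track a sparse set of good features with pyramidal Lucas-Kanade.
        p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                     qualityLevel=0.01, minDistance=8)
        if p0 is None:
            return 0.0, 0.0
        p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
        good = status.ravel() == 1
        if not np.any(good):
            return 0.0, 0.0
        flow = (p1[good] - p0[good]).reshape(-1, 2)

        # The median flow is robust against a few mis-tracked corners.
        du, dv = np.median(flow, axis=0)

        # Pinhole model: a pixel displacement at altitude h corresponds to a
        # ground displacement of roughly h * du / fx (small-angle, flat ground).
        vx = altitude_m * du / fx / dt
        vy = altitude_m * dv / fy / dt
        return vx, vy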

Navigation module: This module aims to make quadcopter state estimation robust by compensating for the time delays introduced by sensor latency, onboard computation and wireless transmission. It will be an OpenCV implementation of the tum_ardrone ROS package, developed for robust state estimation on the Parrot AR.Drone. Using the monocular SLAM system based on PTAM (Parallel Tracking and Mapping), the visual map is rotated so that the xy-plane corresponds to the horizontal plane according to the accelerometer data, and scaled so that the average keypoint depth is 1. Next, the pose estimates from an EKF (Extended Kalman Filter) are used to identify and reject falsely tracked frames. Finally, PID control is used to steer the quadcopter to a desired location.
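
The final step, steering the quadcopter towards a target with PID control, can be sketched independently of the SLAM front end. Below is a minimal per-axis PID loop driven by the filtered position estimate; the gains and output limits are assumptions, not values taken from tum_ardrone.

    import numpy as np

    class AxisPID:
        """One PID loop per axis, as in a simple waypoint controller."""

        def __init__(self, kp, ki, kd, out_limit=1.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.out_limit = out_limit
            self.integral = 0.0
            self.prev_error = None

        def update(self, error, dt):
            self.integral += error * dt
            deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            out = self.kp * error + self.ki * self.integral + self.kd * deriv
            return float(np.clip(out, -self.out_limit, self.out_limit))

    # Example: command normalized roll/pitch/vertical-speed towards a waypoint
    # (gains are illustrative assumptions).
    pid_x = AxisPID(kp=0.5, ki=0.0, kd=0.3)
    pid_y = AxisPID(kp=0.5, ki=0.0, kd=0.3)
    pid_z = AxisPID(kp=0.8, ki=0.1, kd=0.2)

    def control_step(state_xyz, target_xyz, dt):
        ex, ey, ez = np.asarray(target_xyz, float) - np.asarray(state_xyz, float)
        return pid_x.update(ex, dt), pid_y.update(ey, dt), pid_z.update(ez, dt)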

Obstacle avoidance module: In this approach, collision avoidance, traditionally considered a high-level planning problem, is distributed between the different levels of control, allowing real-time operation in a complex environment. The control problem is formulated as direct control of the robot's motion in operational space, the space in which the task is originally described, rather than as control of the corresponding configuration-space motion obtained only after geometric and kinematic transformations. Using visual sensing, real-time collision avoidance against moving obstacles will be demonstrated.
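
One common way to realize this kind of reactive, low-level avoidance is an artificial potential field, in which the goal attracts and nearby obstacles repel the commanded velocity. The sketch below illustrates that idea generically; the gains, influence radius and obstacle representation are assumptions, not part of the module's final design.

    import numpy as np

    def potential_field_velocity(pos, goal, obstacles,
                                 k_att=0.8, k_rep=1.5, influence=2.0, v_max=1.0):
        """Commanded 3D velocity from an attractive goal and repulsive obstacles.

        pos and goal are 3D positions in metres; obstacles is a list of 3D
        obstacle positions. All gains and the influence radius are assumptions.
        """
        pos, goal = np.asarray(pos, float), np.asarray(goal, float)

        # Attractive term pulls the vehicle towards the goal.
        v = k_att * (goal - pos)

        # Each obstacle inside its influence radius adds a repulsive term that
        # grows as the vehicle gets closer (classic 1/d - 1/d0 shaping).
        for obs in obstacles:
            diff = pos - np.asarray(obs, float)
            d = np.linalg.norm(diff)
            if 1e-6 < d < influence:
                v += k_rep * (1.0 / d - 1.0 / influence) / (d ** 2) * (diff / d)

        # Saturate so the command stays within the platform's velocity limits.
        speed = np.linalg.norm(v)
        if speed > v_max:
            v = v * (v_max / speed)
        return v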
