
Visual Navigation for Autonomous Flying Robots

Introduction
In recent years, flying robots such as autonomous quadcopters have gained increased interest in robotics and computer vision research. To navigate safely, these robots need the ability to localize themselves autonomously using their onboard sensors. Potential applications of such systems include automatic 3D reconstruction of buildings, inspection and simple maintenance tasks, surveillance of public places, and search and rescue systems.
In this project, I am studying and applying the current techniques for 3D localization, mapping and navigation that are suitable for quadcopters, and trying to improve on the existing algorithms. The project is based on the following topics:
Necessary background on robot hardware, sensors, 3D transformations
Motion estimation from images (including interest point detection, feature descriptors, robust estimation, visual odometry, iterative closest point)
Filtering techniques and data fusion
Non-linear minimization, bundle adjustment, place recognition, 3D reconstruction
Autonomous navigation, path planning, exploration of unknown environments

Deliverables:
By the end of my Hons. Project, the following deliverables will be developed:
SLAM module: The following SLAM sub-modules will be built:
Large-Scale Direct Monocular SLAM (LSD-SLAM): In the first phase of my project, I will implement LSD-SLAM in OpenCV by integrating lsd_slam_core. Once the core library is integrated, the openFABMap package will be used for detecting loop closures (a sketch of this step is shown below). Finally, PCL (Point Cloud Library) will be integrated with the module for map visualization.
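The loop-closure check could look roughly like the following. This is a minimal sketch, assuming OpenCV 2.4 built with the contrib module's openFABMap implementation (cv::of2), a FabMap2 instance already trained with a vocabulary and Chow-Liu tree, and a BOWImgDescriptorExtractor set up with the same vocabulary; the 0.98 match threshold is an assumed placeholder.

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/contrib/openfabmap.hpp>

// Bag-of-words histogram for one keyframe image.
cv::Mat bowDescriptorFor(const cv::Mat& frame,
                         cv::Ptr<cv::FeatureDetector> detector,
                         cv::BOWImgDescriptorExtractor& bow)
{
    std::vector<cv::KeyPoint> kpts;
    detector->detect(frame, kpts);
    cv::Mat bowDesc;
    bow.compute(frame, kpts, bowDesc);
    return bowDesc;
}

// Compare a new keyframe against the keyframes seen so far; a high match
// probability suggests a loop closure.
std::vector<int> checkLoopClosure(cv::of2::FabMap2& fabmap, const cv::Mat& bowDesc)
{
    std::vector<cv::of2::IMatch> matches;
    fabmap.compare(bowDesc, matches, true);        // true: also add this keyframe
    std::vector<int> loopCandidates;
    for (size_t i = 0; i < matches.size(); ++i) {
        // imgIdx == -1 denotes the "new place" hypothesis
        if (matches[i].imgIdx >= 0 && matches[i].match > 0.98)
            loopCandidates.push_back(matches[i].imgIdx);
    }
    return loopCandidates;   // turned into Sim(3) constraints in the pose graph
}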

Dense Visual SLAM for RGB-D Cameras: After the mid evaluation, I will implement SLAM for RGB-D cameras. An entropy-based similarity measure for keyframe selection and loop-closure detection will be included. The calib3d module in OpenCV will be used for camera calibration, 3D reconstruction and recovering the camera intrinsics when porting the dvo_slam library to OpenCV (see the calibration sketch below).
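A minimal sketch of recovering the intrinsics with calib3d, assuming a set of checkerboard images captured with the RGB-D sensor's colour camera; the board size and square size below are assumptions for illustration.

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>

cv::Mat calibrate(const std::vector<cv::Mat>& images)
{
    const cv::Size board(9, 6);            // inner corners of the checkerboard
    const float square = 0.025f;           // square edge length in metres

    // 3D corner positions of the (planar) board, reused for every view.
    std::vector<cv::Point3f> corners3d;
    for (int y = 0; y < board.height; ++y)
        for (int x = 0; x < board.width; ++x)
            corners3d.push_back(cv::Point3f(x * square, y * square, 0.f));

    std::vector<std::vector<cv::Point3f> > objectPoints;
    std::vector<std::vector<cv::Point2f> > imagePoints;
    for (size_t i = 0; i < images.size(); ++i) {
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(images[i], board, corners)) {
            imagePoints.push_back(corners);
            objectPoints.push_back(corners3d);
        }
    }

    cv::Mat K, dist;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, images[0].size(),
                        K, dist, rvecs, tvecs);
    return K;   // 3x3 intrinsic matrix handed to the ported dvo_slam code
}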

Visual odometry module: Once the SLAM system is built, semi-dense visual odometry for monocular cameras and dense visual odometry for RGB-D cameras will be implemented. Configuration APIs will be developed to set the data rate and precision of the local x, y, z coordinates. Finally, the local visual odometry will be fused with the global position estimates from GPS (latitude, longitude), and an altimeter will be used for the z coordinate; a sketch of this fusion step follows.
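A minimal sketch of the fusion step: the local visual-odometry position is blended with the GPS fix (converted to local metres) using a simple complementary filter, while the altimeter supplies z. The blending gain and the equirectangular conversion are assumptions for illustration only.

#include <cmath>

struct Position { double x, y, z; };

Position fuse(const Position& vo,            // local visual-odometry estimate
              double lat, double lon,        // GPS fix in degrees
              double lat0, double lon0,      // local origin in degrees
              double altimeterZ)
{
    const double R = 6371000.0;                        // mean Earth radius, metres
    const double deg2rad = 3.14159265358979323846 / 180.0;
    double gx = R * (lon - lon0) * deg2rad * std::cos(lat0 * deg2rad);
    double gy = R * (lat - lat0) * deg2rad;

    const double a = 0.9;                    // trust placed in visual odometry
    Position out;
    out.x = a * vo.x + (1.0 - a) * gx;
    out.y = a * vo.y + (1.0 - a) * gy;
    out.z = altimeterZ;                      // z comes from the altimeter
    return out;
}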

Tracking module: The idea is to develop a robust optical flow tracker which runs in real time using a downward-facing camera. Currently, to compute the planar motion of a quadrocopter, users are restricted to ADNS-family optical flow sensors, which are essentially optical mouse sensors. With this module in OpenCV, users will be able to use any generic camera module to compute coarse motion vectors and, in turn, the 2D velocity of the quadrocopter with much greater accuracy and efficiency. Once the planar velocities are computed, they can be used to stabilize the quadrocopter while hovering (see the sketch below).
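A minimal sketch of the tracker: dense Farneback flow between two consecutive frames of the downward camera is averaged into one coarse motion vector and converted to a planar velocity. The pinhole scaling with altitude and focal length is an assumption for illustration.

#include <opencv2/core/core.hpp>
#include <opencv2/video/tracking.hpp>

cv::Point2f planarVelocity(const cv::Mat& prevGray, const cv::Mat& currGray,
                           double altitude,      // metres above ground
                           double focalPx,       // focal length in pixels
                           double dt)            // frame interval in seconds
{
    cv::Mat flow;
    cv::calcOpticalFlowFarneback(prevGray, currGray, flow,
                                 0.5, 3, 15, 3, 5, 1.2, 0);
    cv::Scalar mean = cv::mean(flow);             // average (dx, dy) in pixels
    double sx = mean[0] * altitude / focalPx;     // pixels -> metres
    double sy = mean[1] * altitude / focalPx;
    return cv::Point2f(float(sx / dt), float(sy / dt));   // metres per second
}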

Navigation module: This module aims to make quadcopters robust by compensating for the time delays caused by sensor latency, onboard processing and communication. It will be an OpenCV implementation of the tum_ardrone ROS package developed for robust state estimation on the Parrot AR.Drone. Using the monocular SLAM system based on PTAM (Parallel Tracking and Mapping), the visual map is rotated such that the xy-plane corresponds to the horizontal plane according to the accelerometer data, and scaled such that the average keypoint depth is 1. Next, the pose estimates from an EKF (Extended Kalman Filter) are used to identify and reject falsely tracked frames. Finally, a PID controller steers the quadcopter to a desired location (a sketch is shown below).
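A minimal sketch of the PID step used to steer the quadcopter toward a desired position; the single-axis form and the gain values are assumptions.

struct PID {
    double kp, ki, kd;
    double integral, prevError;

    PID(double p, double i, double d)
        : kp(p), ki(i), kd(d), integral(0.0), prevError(0.0) {}

    // error = desired coordinate - coordinate estimated by the EKF
    double step(double error, double dt) {
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;   // control command
    }
};

// One controller per axis (x, y, z, yaw); the outputs are sent to the drone
// as roll, pitch, vertical-speed and yaw-rate commands.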

Obstacle avoidance module: In this approach, collision avoidance, traditionally considered a high-level planning problem, is distributed across different levels of control, allowing real-time robot operation in a complex environment. The manipulator control problem is reformulated as direct control of motion in operational space (the space in which the task is originally described) rather than as control of the corresponding joint-space motion obtained only after geometric and kinematic transformations. Using visual sensing, real-time collision avoidance has been demonstrated on moving obstacles.
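A minimal sketch of the artificial potential field idea underlying this operational-space formulation: each obstacle closer than a threshold contributes a repulsive velocity, which is added to the attractive velocity toward the goal. The influence distance and gain are assumptions.

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 avoidanceVelocity(const Vec3& robot, const std::vector<Vec3>& obstacles)
{
    const double d0 = 2.0;       // influence distance of an obstacle (metres)
    const double eta = 1.0;      // repulsive gain
    Vec3 v = {0.0, 0.0, 0.0};
    for (size_t i = 0; i < obstacles.size(); ++i) {
        double dx = robot.x - obstacles[i].x;
        double dy = robot.y - obstacles[i].y;
        double dz = robot.z - obstacles[i].z;
        double d = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (d < d0 && d > 1e-6) {
            // negative gradient of the repulsive potential 0.5*eta*(1/d - 1/d0)^2
            double mag = eta * (1.0 / d - 1.0 / d0) / (d * d);
            v.x += mag * dx / d;
            v.y += mag * dy / d;
            v.z += mag * dz / d;
        }
    }
    return v;   // added to the attractive velocity toward the goal
}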

Hons. Project Timeline

Jan 23 – Jan 30 Week-27
Achieved: LSD SLAM implementation(Part-1)
Description: Developing an IO wrapper for the lsd_slam_core implementation in OpenCV. The core library implements the motion estimation for the quadrocopter.

Feb 1 – Feb 6 Week-28
Achieved: LSD SLAM implementation(Part-2)
Description: Once the core library is successfully integrated, openFABMAP will be integrated into the system for detecting loop closures. For the graph optimization problem, the g2o framework will be used after integrating it with OpenCV (see the sketch below).
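A minimal sketch of the pose-graph setup with g2o, written against the raw-pointer solver API of the g2o releases of that era (newer versions expect std::unique_ptr wrappers); the vertex ids and the relative Sim(3) measurement below are placeholders.

#include <g2o/core/sparse_optimizer.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
#include <g2o/solvers/csparse/linear_solver_csparse.h>
#include <g2o/types/sim3/types_seven_dof_expmap.h>

void buildPoseGraph()
{
    g2o::SparseOptimizer graph;
    g2o::BlockSolver_7_3::LinearSolverType* linearSolver =
        new g2o::LinearSolverCSparse<g2o::BlockSolver_7_3::PoseMatrixType>();
    g2o::BlockSolver_7_3* blockSolver = new g2o::BlockSolver_7_3(linearSolver);
    graph.setAlgorithm(new g2o::OptimizationAlgorithmLevenberg(blockSolver));

    // One Sim(3) vertex per LSD-SLAM keyframe; the first keyframe is fixed.
    g2o::VertexSim3Expmap* v = new g2o::VertexSim3Expmap();
    v->setId(0);
    v->setEstimate(g2o::Sim3());
    v->setFixed(true);
    graph.addVertex(v);

    g2o::VertexSim3Expmap* w = new g2o::VertexSim3Expmap();
    w->setId(1);
    w->setEstimate(g2o::Sim3());
    graph.addVertex(w);

    // One edge per tracking constraint or openFABMAP loop closure.
    g2o::EdgeSim3* e = new g2o::EdgeSim3();
    e->setVertex(0, graph.vertex(0));
    e->setVertex(1, graph.vertex(1));
    e->setMeasurement(g2o::Sim3());                     // relative Sim(3) pose
    e->setInformation(Eigen::Matrix<double, 7, 7>::Identity());
    graph.addEdge(e);

    graph.initializeOptimization();
    graph.optimize(50);                                 // iterations
}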

Feb 7 – Feb 14 Week-29
Achieved: Developing the lsd_slam_visualization submodule
Description: PCL (Point Cloud Library) will be used for 3D visualization of the map generated by the SLAM system. Since a quadrocopter has limited computational resources in terms of memory and processing speed, the chosen library must require a minimal amount of onboard resources (a sketch of the viewer is shown below).
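A minimal sketch of the visualization step, assuming the SLAM map has already been converted into a pcl::PointCloud; CloudViewer is one of the simplest PCL viewers, which suits the limited onboard resources mentioned above.

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/visualization/cloud_viewer.h>

void showMap(pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr map)
{
    pcl::visualization::CloudViewer viewer("LSD-SLAM map");
    viewer.showCloud(map);                 // non-blocking display of the cloud
    while (!viewer.wasStopped()) {}        // keep the window open
}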

Feb 15 – Feb 29 Week-30,31
Achieved: Unit testing LSD-SLAM
Description: LSD-SLAM will be tested both on publicly available benchmark sequences, such as those from the University of Freiburg, and live using the monocular camera onboard the quadrocopter. The functional aspects of the SLAM module will be documented in doxygen.

March 1 – March 7 Week-32
Achieved: Dense Visual SLAM for RGB-D camera
Description: With the freenect and OpenNI libraries as an IO wrapper, I will integrate the dvo_core library with OpenCV. Once the core library is ported to OpenCV, a visualization module will be developed using the PCL visualizer to render the dense 3D map.

March 8 – March 15 Week-33
Achieved: Visual odometry and Sensor model integration
Description: After implementing the SLAM systems, the sensor model from the IMU will be integrated.

March 16 – March 23 Week-34
Achieved: Configuring Dense optical flow
Description: Dense optical flow will be used to implement a motion-sensing module on the Raspberry Pi camera, using the calcOpticalFlowSF() API (see the sketch below).
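A minimal sketch of this motion-sensing step, assuming OpenCV 2.4 where SimpleFlow is exposed as calcOpticalFlowSF() and operates on colour frames; the layer/block parameters follow the example values in the OpenCV documentation rather than tuned ones.

#include <opencv2/core/core.hpp>
#include <opencv2/video/tracking.hpp>

cv::Point2f coarseMotion(cv::Mat prevBGR, cv::Mat currBGR)
{
    cv::Mat flow;
    cv::calcOpticalFlowSF(prevBGR, currBGR, flow, 3, 2, 4);  // dense flow field
    cv::Scalar mean = cv::mean(flow);        // average per-pixel displacement
    return cv::Point2f(float(mean[0]), float(mean[1]));
}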

March 24 – March 31 Week-35
Achieved: Unit Testing for visual odometry and tracking module
Description: The respective modules will be tested against the standard benchmarks available. For monocular visual odometry, the KITTI dataset will be used; for RGB-D dense visual odometry, the ICL-NUIM and TUM RGB-D datasets will be used.

April 1 – April 7 Week-36
Achieved: Navigation module
Description: The major task will be to develop a state-estimation module that includes an OpenCV implementation of PTAM. The linear Kalman filter class in OpenCV will then be extended to an Extended Kalman Filter to incorporate the measurements from the IMU sensor (see the sketch below). Finally, the learned 3D feature map will be rendered using the OpenGL library.
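A minimal sketch of the Extended Kalman Filter step, written with cv::Mat in the spirit of OpenCV's linear KalmanFilter class. The state layout, process function f() and measurement function h() are left abstract; the real module would use the full quadrocopter state (position, velocity, attitude) and the IMU measurement model.

#include <opencv2/core/core.hpp>

struct EKF {
    cv::Mat x;   // state estimate (n x 1)
    cv::Mat P;   // state covariance (n x n)
    cv::Mat Q;   // process noise covariance
    cv::Mat R;   // measurement noise covariance

    // Predict with the non-linear process model: fx = f(x), F = Jacobian of f.
    void predict(const cv::Mat& fx, const cv::Mat& F) {
        x = fx;
        P = F * P * F.t() + Q;
    }

    // Update with measurement z, predicted measurement hx = h(x), H = Jacobian of h.
    void update(const cv::Mat& z, const cv::Mat& hx, const cv::Mat& H) {
        cv::Mat S = H * P * H.t() + R;            // innovation covariance
        cv::Mat K = P * H.t() * S.inv();          // Kalman gain
        x = x + K * (z - hx);
        P = (cv::Mat::eye(P.rows, P.cols, P.type()) - K * H) * P;
    }
};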

April 8 – April 15 Week-37
Achieved: Unit Testing for Navigation module
Description: The navigation module will be tested by letting the quadrocopter complete a large variety of figures, such as a rectangle and the "Haus vom Nikolaus". Documentation will be written on the usage of these APIs.

April 16 – April 23 Week-38
Achieved: Generating OpenCV python bindings and integration testing.
Description: The experimental quadrocopter whose specifications are mentioned on the first page will be used for testing. The generated Python APIs will be used in the MultiWii control program. Any remaining bugs will be addressed and usage documentation will be committed.