
Aerial Robot Estimation and Control with Onboard Sensors

Many recent works on aggressive flight maneuvers rely heavily on precise external tracking systems. This makes them expensive and greatly limits their flexibility, as they cannot be deployed in unstructured environments such as natural disaster sites. We therefore focus on cameras and on-board processing, so that the platform is independent of both transmission links to ground stations and the aforementioned tracking systems. Yet we had to find efficient ways to cope with the restricted on-board computational power.

RGB-D Based Autonomous Velocity Control


In the development of this platform we have made no particular assumptions about the environment, equipping the quadrotor with an RGB-D sensor. Velocity measurements are extracted by integrating the DVO (Dense Visual Odometry) software from TUM, which computes an estimate of the camera pose in space without a full map of the environment. Although this pose estimate is affected by an unavoidable cumulative error (the position is not observable), the velocity can be geometrically derived from it, filtered, and fused with IMU information to obtain a reliable estimate.
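As a minimal sketch of this derive-and-fuse step (hypothetical function and gain names; DVO's actual interface and our filter are more involved), the velocity can be obtained by differentiating consecutive odometry poses and blending the result with an IMU prediction in a complementary filter:

```python
import numpy as np

def fuse_velocity(v_prev, a_imu, dt, p_vo_curr, p_vo_prev, alpha=0.02):
    """Complementary-filter fusion of IMU integration and visual odometry.

    v_prev:    previous fused velocity estimate, shape (3,)
    a_imu:     gravity-compensated IMU acceleration in the world frame, shape (3,)
    p_vo_*:    consecutive camera positions reported by visual odometry, shape (3,)
    alpha:     blend weight for the noisy but drift-free velocity from odometry
    """
    v_imu = v_prev + a_imu * dt          # high-rate prediction from the IMU
    v_vo = (p_vo_curr - p_vo_prev) / dt  # velocity from differentiated VO poses
    return (1.0 - alpha) * v_imu + alpha * v_vo
```

Because only pose *differences* enter the estimate, the cumulative drift of the odometry position does not accumulate in the velocity.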
 
In addition, the RGB-D sensor is used to compute a local map of the obstacles in the surroundings of the robot, on which obstacle avoidance techniques improve the safety and autonomy of the system. The resulting platform relies only on on-board sensors, makes no particular assumptions about the environment, and is suitable both for autonomous navigation and for teleoperation.
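A local obstacle map of this kind can be sketched as a robot-centered occupancy grid filled from the RGB-D point cloud (hypothetical parameters; the actual map on the platform is probabilistic rather than binary):

```python
import numpy as np

def local_occupancy(points, half_size=2.0, resolution=0.1):
    """Build a robot-centered 2D occupancy grid from RGB-D obstacle points.

    points:    Nx3 obstacle points in the robot frame (metres)
    half_size: the grid covers [-half_size, half_size] in x and y
    """
    n = int(2 * half_size / resolution)
    grid = np.zeros((n, n), dtype=bool)
    # keep only points inside the local window around the robot
    mask = np.all(np.abs(points[:, :2]) < half_size, axis=1)
    idx = ((points[mask, :2] + half_size) / resolution).astype(int)
    grid[idx[:, 0], idx[:, 1]] = True
    return grid
```

Keeping the map robot-centered and of fixed extent bounds both memory and the cost of collision checks, regardless of how far the robot travels.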
 
Since our current setup employs an ASUS Xtion Pro Live as the RGB-D sensor, the platform is currently suitable only for indoor environments. However, we plan to move to outdoor applications by integrating a more conventional stereo camera and a GPS.

 

Obstacle Detection, Tracking and Avoidance


The depth map from the RGB-D sensor can be used efficiently to track obstacles in the vicinity of the robot using multi-target tracking techniques. In particular, a robot-centered bin-occupancy filter on a limited domain surrounding the robot allows obstacle avoidance in all directions of motion, not only within the field of view of the sensor. Additionally, this comes at a constant computational cost that does not grow over time, while retaining the benefits of a fully probabilistic approach.
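A minimal sketch of such a filter (illustrative probabilities; the values and motion model used on the platform differ) keeps one occupancy probability per bin of a fixed grid, counter-shifts the grid for the robot's ego-motion in the prediction step, and applies a Bayes update with the per-bin detections from the depth map:

```python
import numpy as np

def predict(occ, shift, p_survive=0.99, birth=0.001):
    """Prediction step of a robot-centered bin-occupancy filter.

    occ:   per-bin occupancy probability on a fixed grid around the robot
    shift: integer bin shift (dx, dy) compensating the robot's own motion
    The grid size is fixed, so each step costs the same no matter how
    long the filter has been running.
    """
    occ = np.roll(occ, shift, axis=(0, 1))   # counter-shift for ego-motion
    return p_survive * occ + birth           # target survival plus a birth term

def update(occ, detected, p_d=0.9, p_fa=0.05):
    """Bayes update with boolean per-bin detections from the depth map."""
    lik_occ = np.where(detected, p_d, 1.0 - p_d)      # likelihood if occupied
    lik_emp = np.where(detected, p_fa, 1.0 - p_fa)    # likelihood if empty
    return lik_occ * occ / (lik_occ * occ + lik_emp * (1.0 - occ))
```

Bins behind the robot keep their predicted occupancy between detections, which is what allows avoidance outside the sensor's field of view.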

References


[1] M. Odelga, P. Stegagno, and H. H. Bülthoff, Obstacle Detection, Tracking and Avoidance for a Teleoperated UAV, 2016 IEEE Int. Conf. on Robotics and Automation, Stockholm, Sweden, May 2016.

[2] M. Odelga, P. Stegagno, H. H. Bülthoff, and A. Ahmad, A Setup for Multi-UAV Hardware-in-the-Loop Simulations, 3rd RED-UAS 2015: Workshop on Research, Education and Development of Unmanned Aerial Systems, Cancun, Mexico, Nov. 2015.

[3] P. Stegagno, M. Basile, H. H. Bülthoff, and A. Franchi, A Semi-autonomous UAV Platform for Indoor Remote Operation with Visual and Haptic Feedback, 2014 IEEE Int. Conf. on Robotics and Automation, pp. 3862-3869, Hong Kong, China, June 2014.

[4] P. Stegagno, M. Basile, H. H. Bülthoff, and A. Franchi, Vision-based Autonomous Control of a Quadrotor UAV using an Onboard RGB-D Camera and its Application to Haptic Teleoperation, 2nd RED-UAS 2013: Workshop on Research, Education and Development of Unmanned Aerial Systems, Compiegne, France, Nov. 2013.

However, existing visual systems mostly either use prebuilt maps of known environments or build a map as the robot moves, following simultaneous localization and mapping (SLAM) approaches. These methods usually fail once tracking is lost, even for short periods of time.

To avoid this problem we propose a robust velocity control, which requires only an estimate of the velocity instead of the full 3D configuration. Depending on the onboard sensors and the underlying assumptions, different strategies can be applied to estimate the velocity.
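To illustrate why velocity alone suffices (a hypothetical sketch with an illustrative gain, not the controller used on the platform), a quadrotor can be commanded from a velocity error directly, with no reference to absolute position:

```python
import numpy as np

G = np.array([0.0, 0.0, 9.81])  # gravity in the world frame (m/s^2)

def velocity_controller(v_des, v_est, mass, kv=2.0):
    """Map a velocity error to a quadrotor thrust command.

    Closes the loop on velocity only, so it needs no absolute position
    and is unaffected by the cumulative drift of visual odometry.
    """
    a_cmd = kv * (v_des - v_est)     # desired acceleration from velocity error
    f_cmd = mass * (a_cmd + G)       # total force incl. gravity compensation
    thrust = np.linalg.norm(f_cmd)   # collective thrust magnitude
    z_body = f_cmd / thrust          # desired body z-axis (attitude reference)
    return thrust, z_body
```

At hover (zero desired and estimated velocity) this reduces to gravity compensation with the body z-axis upright, and any velocity error tilts the thrust direction toward the desired motion.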

