Deep reinforcement learning — a training technique that teaches agents to achieve goals by rewarding desired behavior — has shown great promise in the vision-based navigation domain. Researchers at the University of Colorado recently demonstrated a system that helps robots infer the direction of hiking trails from camera footage, and scientists at ETH Zurich described in a January paper a machine learning framework that helps four-legged robots get back up when they trip and fall.
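To make the reward-driven idea concrete, here is a minimal sketch of reinforcement learning in its simplest tabular form (Q-learning on a toy one-dimensional world, not the deep, vision-based variant the article discusses). All names and parameters are illustrative assumptions, not drawn from any of the cited papers.

```python
import random

# Toy illustration of reward-driven learning (tabular Q-learning):
# an agent on a 1-D track learns to walk right toward a goal that pays +1.
N_STATES = 5          # positions 0..4; reaching position 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if rng.random() < EPS:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0   # reward only at the goal
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy steps right (+1) from every state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

Deep reinforcement learning replaces the lookup table `Q` with a neural network, which is what lets it scale from toy state spaces like this one to raw camera images.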
But can such AI perform as well on a drone as on machines planted firmly on the ground? A team at the University of California, Berkeley set out to find out.
In a newly published paper on the preprint server arXiv (“Generalization through Simulation: Integrating Simulated and Real Data into Deep Reinforcement Learning for Vision-Based Autonomous Flight”), the team proposes a “hybrid” deep reinforcement