
Deep active learning for autonomous navigation.

Hussein, Ahmed; Gaber, Mohamed Medhat; Elyan, Eyad

Authors

Ahmed Hussein

Mohamed Medhat Gaber

Eyad Elyan

Contributors

Chrisina Jayne
Editor

Lazaros Iliadis
Editor

Abstract

Imitation learning refers to an agent's ability to mimic a desired behavior by learning from observations. A major challenge facing learning from demonstrations is to represent the demonstrations in a manner that is adequate for learning and efficient for real-time decisions. Creating feature representations is especially challenging when they are extracted from high-dimensional visual data. In this paper, we present a method for imitation learning from raw visual data. The proposed method is applied to a popular imitation learning domain that is relevant to a variety of real-life applications; namely, navigation. To create a training set, a teacher uses an optimal policy to perform a navigation task, and the actions taken are recorded along with visual footage from the first-person perspective. Features are automatically extracted and used to learn a policy that mimics the teacher via a deep convolutional neural network. A trained agent can then predict an action to perform based on the scene it finds itself in. This method is generic, and the network is trained without knowledge of the task, targets or environment in which it is acting. Another common challenge in imitation learning is generalizing a policy to situations not seen in the training data. To address this challenge, the learned policy is subsequently improved by employing active learning. While the agent is executing a task, it can query the teacher for the correct action to take in situations where it has low confidence. The active samples are added to the training set and used to update the initial policy. The proposed approach is demonstrated on four different tasks in a 3D simulated environment. The experiments show that an agent can effectively perform imitation learning from raw visual data for navigation tasks and that active learning can significantly improve the initial policy using a small number of samples. The simulated test bed facilitates reproduction of these results and comparison with other approaches.
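The abstract describes an active learning loop in which the agent queries the teacher whenever its CNN policy is uncertain about the current scene. The sketch below illustrates that loop in Python; it is only a minimal illustration under assumed interfaces, not the authors' implementation. The names `policy`, `teacher`, `env`, and the confidence threshold value are all hypothetical, and the confidence measure (maximum softmax probability) is one plausible choice rather than one stated in the record.

```python
import numpy as np

# Hypothetical threshold; the record does not specify the paper's actual value.
CONFIDENCE_THRESHOLD = 0.8

def active_learning_episode(policy, teacher, env, dataset,
                            threshold=CONFIDENCE_THRESHOLD):
    """Run one navigation episode, querying the teacher when confidence is low.

    Assumed interfaces (hypothetical):
      policy.predict(frame) -> vector of action probabilities (CNN softmax output)
      teacher(frame)        -> correct action for the current scene
      env.reset()/env.step(action) -> first-person frames and a done flag
    """
    frame = env.reset()
    done = False
    while not done:
        probs = policy.predict(frame)           # CNN action probabilities for this frame
        confidence = float(np.max(probs))
        if confidence < threshold:
            action = teacher(frame)             # low confidence: query the teacher
            dataset.append((frame, action))     # keep the active sample for retraining
        else:
            action = int(np.argmax(probs))      # high confidence: act on own prediction
        frame, done = env.step(action)
    return dataset
```

After one or more such episodes, the collected active samples would be appended to the original demonstration set and the CNN policy retrained (e.g. `policy.fit(dataset)`), which is how the abstract describes updating the initial policy with a small number of queried samples.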

Citation

HUSSEIN, A., GABER, M.M. and ELYAN, E. 2016. Deep active learning for autonomous navigation. In Jayne, C. and Iliadis, L. (eds.) Engineering applications of neural networks: proceedings of the 17th International engineering applications of neural networks conference (EANN 2016), 2-5 September 2016, Aberdeen, UK. Communications in computer and information science, 629. Cham: Springer [online], pages 3-17. Available from: https://doi.org/10.1007/978-3-319-44188-7_1

Conference Name 17th International engineering applications of neural networks conference (EANN 2016)
Conference Location Aberdeen, UK
Start Date Sep 2, 2016
End Date Sep 5, 2016
Acceptance Date Jun 5, 2016
Online Publication Date Aug 19, 2016
Publication Date Sep 30, 2016
Deposit Date Jun 6, 2017
Publicly Available Date Jun 6, 2017
Print ISSN 1865-0929
Publisher Springer
Volume 629
Pages 3-17
Series Title Communications in computer and information science
Series Number 629
Series ISSN 1865-0929
ISBN 9783319441870
DOI https://doi.org/10.1007/978-3-319-44188-7_1
Keywords Imitation learning; Robots; Optimal policy; Visual data
Public URL http://hdl.handle.net/10059/2361
