Learning to navigate through abstraction and adaptation
Artificial Intelligence (AI) has seen tremendous successes in the past few years. These breakthroughs have, however, mainly been situated in the areas of Computer Vision (CV) and Natural Language Processing (NLP), where progress has been fuelled by the abundance of large internet datasets containing huge amounts of cleanly labelled examples. Unfortunately, this dataset-driven approach is poorly suited to other types of tasks, such as navigation tasks (e.g., navigating towards a set of coordinates, or searching for an object in an unknown environment). Navigation systems need to interact with noisy environments that are hard to model in terms of clean input/output labels.

Reinforcement Learning (RL) offers an alternative learning paradigm, in which an AI system obtains its own dataset through direct interaction with the environment. Unfortunately, RL is plagued with its own set of problems. One major limitation is its sample inefficiency: current RL approaches need large amounts of interaction with the environment in order to learn satisfactory behaviour. This makes RL impractical to apply in real-world environments and most often requires practitioners to train agents in simulated versions of the environment. It is, however, in most cases not straightforward to deploy agents trained in simulation in the real world. In this thesis, we propose a number of novel approaches that increase the sample efficiency of RL. This is done by working at multiple levels of abstraction and by adapting prior related behaviours.
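The interaction loop described above, in which an RL agent generates its own training data by acting in an environment, can be sketched with a minimal example. The following is an illustration only (it is not taken from the thesis): tabular Q-learning with an epsilon-greedy policy on a hypothetical toy corridor environment, where every state, action, and parameter name is an assumption made for the sketch.

```python
import random

# Toy environment (illustrative assumption, not from the thesis):
# a 1-D corridor with states 0..4; reaching state 4 yields reward 1.
N_STATES = 5
ACTIONS = [-1, +1]  # move left or right

def step(state, action):
    """Environment dynamics: deterministic move, reward only at the goal."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: the agent's own interactions are its dataset."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy action selection: mostly exploit, sometimes explore.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            # One-step Q-learning update from this single interaction.
            target = reward + (0.0 if done else
                               gamma * max(q[(next_state, a)] for a in ACTIONS))
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = next_state
    return q

q = train()
# Greedy policy after training: move right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

Even this tiny example hints at the sample-inefficiency problem the abstract raises: hundreds of environment interactions are spent to solve a five-state task, and the cost grows rapidly with the size and noisiness of the environment.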
Antwerp: University of Antwerp, Faculty of Science, Department of Computer Science, 2023
225 p.
Supervisor: Latré, Steven
Supervisor: Mets, Kevin
Supervisor: De Schepper, Tom