Publication
Title
Directed real-world learned exploration
Author
Abstract
Automated Guided Vehicles (AGVs) are omnipresent and able to carry out various kinds of preprogrammed tasks. Unfortunately, considerable manual configuration is still required to make these systems operational, and the configuration must be redone whenever the environment or task changes. As an alternative to current inflexible methods, we employ a learning-based method to perform directed exploration of a previously unseen environment. Instead of relying on handcrafted heuristic representations, the agent learns its own environmental representation through its embodiment. Our method offers loose coupling between the Reinforcement Learning (RL) agent, which is trained in simulation, and a separate task module trained on real-world images. The uncertainty of the task module is used to direct the exploration behavior. As an example, we use a warehouse inventory task and show how directed exploration can improve task performance through active data collection. We also propose a novel environment representation to efficiently tackle the sim2real gap in both sensing and actuation. We empirically evaluate the approach in both simulated environments and a real-world warehouse.
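A minimal sketch of the uncertainty-directed exploration idea described in the abstract, assuming an ensemble-based uncertainty estimate; the class and function names, the disagreement measure, and the bonus weighting are illustrative assumptions and not taken from the paper:

    import numpy as np

    class TaskModuleEnsemble:
        # Stand-in for a task module trained on real-world images; `members`
        # is a list of callables mapping an observation to a prediction vector
        # (e.g. per-rack inventory estimates).
        def __init__(self, members):
            self.members = members

        def uncertainty(self, observation):
            # Disagreement between ensemble members serves as an uncertainty proxy.
            preds = np.stack([m(observation) for m in self.members])
            return float(preds.std(axis=0).mean())

    def directed_exploration_reward(ensemble, observation, base_reward,
                                    bonus_weight=0.1):
        # Bias the simulated RL agent toward observations the task module is
        # uncertain about, on top of its ordinary navigation reward.
        return base_reward + bonus_weight * ensemble.uncertainty(observation)

In such a scheme the exploration bonus only depends on the task module's outputs, which is one way to realise the loose coupling between the simulation-trained agent and the real-world-trained task module mentioned above.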
Language
English
Source (book)
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1-5 October 2023, Detroit, USA
Publication
IEEE, 2023
ISBN
978-1-6654-9190-7
DOI
10.1109/IROS55552.2023.10341504
Volume/pages
p. 5227-5234
UAntwerpen
Faculty/Department
Research group
Publication type
Subject
Affiliation
Publications with a UAntwerp address