Title
Visual intention classification by deep learning for gaze-based human-robot interaction
Author
Abstract
In this work, we propose a deep learning model to classify a human's visual intention in gaze-based human-robot interaction (HRI). We consider a scenario in which a human wearing eye-tracking glasses selects an object by gaze and a robotic manipulator picks up that object. A neural network is trained as a binary classifier to decide whether the human is looking at an object. The network architecture is based on a Fully Convolutional Network (FCN), Convolutional Block Attention Modules (CBAM), and residual blocks. We evaluate the model in two experiments: one in a scenario with a single object and one with multiple objects. The results show that the proposed network is accurate and generalizes well, with an F1 score of 0.971 in the single-object scenario and 0.962 in the multiple-object scenario.
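The abstract names CBAM and residual blocks but the record contains no code. As a rough illustration of the attention idea, the following is a minimal NumPy sketch of CBAM-style channel attention wrapped in a residual connection; all shapes, weights, and function names here are illustrative assumptions, not the authors' actual model (which also includes FCN layers and CBAM's spatial attention, omitted here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention on a (C, H, W) feature map.

    Average- and max-pooled channel descriptors pass through a shared
    two-layer MLP (weights w1, w2); their sum gates each channel
    through a sigmoid. Spatial attention is omitted in this sketch.
    """
    avg = feat.mean(axis=(1, 2))                   # (C,) avg-pooled descriptor
    mx = feat.max(axis=(1, 2))                     # (C,) max-pooled descriptor
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # shared MLP, ReLU hidden layer
    gate = sigmoid(mlp(avg) + mlp(mx))             # (C,) per-channel weights in (0, 1)
    return feat * gate[:, None, None]

def residual_block(feat, w1, w2):
    """Identity shortcut around the attention module."""
    return feat + channel_attention(feat, w1, w2)

# Illustrative shapes: 8 channels, 4x4 spatial map, reduction ratio 2.
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1        # channel-reduction layer
w2 = rng.standard_normal((C, C // r)) * 0.1        # channel-expansion layer
out = residual_block(feat, w1, w2)
print(out.shape)  # (8, 4, 4): attention preserves the feature-map shape
```

In the paper's setting, such attention-weighted features would feed a binary head deciding "looking at the object" versus not; the shortcut keeps the block easy to train by letting gradients bypass the attention path.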
Language
English
Source (journal)
IFAC-PapersOnLine
Source (book)
3rd IFAC Workshop on Cyber-Physical & Human Systems CPHS 2020
Publication year
2020
ISSN
2405-8963
DOI
10.1016/j.ifacol.2021.04.168
Volume/pages
53:5 (2020), p. 750-755
ISI
000656589700133
Creation 22.06.2021
Last edited 30.10.2024