Publication
Title
Allocentric Pose Estimation
Author
Abstract
The task of object pose estimation has been a challenge since the early days of computer vision. To estimate the pose (or viewpoint) of an object, most approaches have looked at object-intrinsic features, such as shape or appearance. Surprisingly, informative features provided by other, external elements in the scene have so far mostly been ignored. At the same time, contextual cues have been shown to be of great benefit for related tasks such as object detection or action recognition. In this paper, we explore how information from other objects in the scene can be exploited for pose estimation. In particular, we look at object configurations. We show that, starting from noisy object detections and pose estimates, exploiting the estimated pose and location of other objects in the scene can help to estimate the objects' poses more accurately. We explore both a camera-centered and an object-centered representation for relations. Experiments on the challenging KITTI dataset show that object configurations can indeed be used as a complementary cue to appearance-based pose estimation. In addition, object-centered relational representations can also assist object detection.
Language
English
Source (journal)
Proceedings. - [Los Alamitos, Calif.]
Source (book)
IEEE International Conference on Computer Vision (ICCV), December 1-8, 2013, Sydney, Australia
Publication
New York : IEEE, 2013
ISBN
978-1-4799-2839-2
DOI
10.1109/ICCV.2013.43
Volume/pages
(2013) , p. 289-296
ISI
000351830500037
Creation 23.10.2019
Last edited 28.10.2024