Papers by Paschalis Panteleris
The analysis and the understanding of object manipulation scenarios based on computer vision techniques can be greatly facilitated if we can gain access to the full articulation of the manipulating hands and the 3D pose of the manipulated objects. Currently, there exist methods for tracking hands in interaction with objects whose 3D models are known. There are also methods that can reconstruct 3D models of objects that are partially observable in each frame of a sequence. However, to the best of our knowledge, no method can track hands in interaction with unknown objects. In this paper we propose such a method. Experimental results show that hand tracking can be achieved with an accuracy that is comparable to the one obtained by methods that assume knowledge of the object models. Additionally, as a by-product, the proposed method delivers accurate 3D models of the manipulated objects.
Markerless 3D tracking of hands in action or in interaction with objects provides rich information that can be used to interpret a number of human activities. In this paper, we review a number of relevant methods we have proposed. All of them focus on hands, objects and their interaction and follow a generative approach. The major strengths of such an approach are the straightforward fashion in which arbitrarily complex priors can be incorporated into the tracking problem and its capability to generalize to broader and/or different domains. The proposed generative approach is implemented in a single, unified computational framework.
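As a rough illustration of the generative (hypothesize-and-test) formulation shared by these methods, the Python sketch below shows one frame of model-based tracking: pose hypotheses are sampled around the previous solution, each hypothesis is rendered and scored against the observation, and the search is refined around the best-scoring candidates. The render and score functions, the simple sampling loop and all parameter values are illustrative assumptions, not components of the papers; the actual systems use model-specific rendering and stochastic optimizers, so this is a conceptual sketch rather than the authors' implementation.

import numpy as np

def generative_tracking_step(observation, prev_pose, render, score,
                             n_hypotheses=64, n_iterations=30, sigma=0.05):
    # Hypothesize-and-test tracking for a single frame.
    # render(pose): synthesizes the appearance (e.g., a depth map) of the
    #   hand/object model for a candidate pose vector -- placeholder.
    # score(rendering, observation): similarity between the rendering and the
    #   observed frame; higher is better -- placeholder.
    best_pose = np.asarray(prev_pose, dtype=float)
    best_score = -np.inf
    mean, std = best_pose.copy(), sigma
    for _ in range(n_iterations):
        # Sample candidate poses around the current search distribution.
        hypotheses = mean + std * np.random.randn(n_hypotheses, mean.size)
        scores = np.array([score(render(h), observation) for h in hypotheses])
        # Keep the best candidates and tighten the search around them
        # (a simple evolutionary loop standing in for a stochastic optimizer).
        elite = hypotheses[np.argsort(scores)[-n_hypotheses // 4:]]
        mean = elite.mean(axis=0)
        std = float(elite.std(axis=0).mean()) + 1e-6
        if scores.max() > best_score:
            best_score = float(scores.max())
            best_pose = hypotheses[int(scores.argmax())]
    return best_pose, best_score

Arbitrarily complex priors (e.g., joint-limit or hand-object non-penetration constraints) can be folded directly into the score function, which is the flexibility of the generative approach that the abstract refers to.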
The problems of vision-based detection and tracking of independently moving objects, localization and map construction are highly interrelated, in the sense that the solution of any of them provides valuable information to the solution of the others. In this paper, rather than trying to solve each of them in isolation, we propose a method that treats all of them simultaneously. More specifically, given visual input acquired by a moving RGBD camera, the method detects independently moving objects and tracks them in time. Additionally, the method estimates the camera (ego)motion and the motion of the tracked objects in a coordinate system that is attached to the static environment, a map of which is progressively built from scratch. The loose assumptions that the method adopts with respect to the problem parameters make it a valuable component for any robotic platform that moves in a dynamic environment and requires simultaneous tracking of moving objects, egomotion estimation and map construction. The usability of the method is further enhanced by its robustness and its low computational requirements, which permit real-time execution even on low-end CPUs.
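To make the coupling between egomotion estimation, moving-object detection and map construction concrete, here is a minimal per-frame sketch under simplifying assumptions: the static map is a plain point cloud, the registration routine (estimate_egomotion, e.g., an ICP-style solver) is supplied externally, and a fixed distance threshold separates static points from independently moving ones. These names and the brute-force nearest-neighbour test are illustrative assumptions, not the paper's actual components.

import numpy as np

def process_rgbd_frame(points_cam, static_map, estimate_egomotion, motion_threshold=0.05):
    # points_cam        : (N, 3) points of the current RGBD frame, camera coordinates.
    # static_map        : (M, 3) accumulated static-scene points, world coordinates
    #                     (empty array for the first frame).
    # estimate_egomotion: placeholder registration routine returning a 4x4
    #                     camera-to-world transform (e.g., an ICP-style solver).
    if len(static_map) == 0:
        # Bootstrap: the first frame defines the world frame and the initial map.
        return np.empty((0, 3)), points_cam.copy(), np.eye(4)

    # 1. Estimate camera (ego)motion by aligning the frame against the static map.
    T_world_cam = estimate_egomotion(points_cam, static_map)

    # 2. Express the observed points in the world frame attached to the static environment.
    pts_h = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    points_world = (T_world_cam @ pts_h.T).T[:, :3]

    # 3. Points that stay far from the static map after alignment are candidate
    #    independently moving points; the rest extend the map (brute-force
    #    nearest-neighbour test, adequate only for a sketch).
    dists = np.linalg.norm(points_world[:, None, :] - static_map[None, :, :], axis=2).min(axis=1)
    moving = dists > motion_threshold

    updated_map = np.vstack([static_map, points_world[~moving]])
    return points_world[moving], updated_map, T_world_cam

Clustering the returned moving points into per-object tracks and filtering them over time, as the paper's method does when tracking objects, would sit on top of this per-frame step.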
This paper presents a computer vision system that supports non-instrumented, location-based interaction of multiple users with digital representations of large-scale artifacts. The proposed system is based on a camera network that observes multiple humans in front of a very large display. The acquired views are used to volumetrically reconstruct and track the humans robustly and in real time, even in crowded scenes and challenging human configurations.
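The volumetric reconstruction step can be pictured as a visual-hull style occupancy test over a voxel grid: a voxel is kept if it projects onto foreground in (nearly) all calibrated camera views. The sketch below assumes that calibrated 3x4 projection matrices and per-camera foreground masks (e.g., from background subtraction) are already available; the function and parameter names are illustrative, not the system's API.

import numpy as np

def voxel_occupancy(voxels_world, projections, silhouettes, min_views=None):
    # voxels_world : (V, 3) voxel centres in world coordinates.
    # projections  : list of 3x4 camera projection matrices (calibration assumed known).
    # silhouettes  : list of boolean (H, W) foreground masks, one per camera.
    # A voxel is marked occupied if it falls inside the foreground silhouette
    # in at least min_views views (all views by default).
    if min_views is None:
        min_views = len(projections)
    votes = np.zeros(len(voxels_world), dtype=int)
    vox_h = np.hstack([voxels_world, np.ones((len(voxels_world), 1))])
    for P, mask in zip(projections, silhouettes):
        proj = (P @ vox_h.T).T                      # project voxel centres into the image
        uv = proj[:, :2] / proj[:, 2:3]             # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels_world), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        votes += hit
    return votes >= min_views

Connected components of the occupied voxels can then be grouped into per-person blobs and tracked over time to drive the location-based interaction described in the abstract.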