2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
Temporal localization of actions in videos has attracted increasing interest in recent years. However, most existing approaches rely on complex architectures that are either expensive to train, inefficient at inference time, or require thorough and careful architecture engineering. Classical action recognition on pre-segmented clips, on the other hand, benefits from sophisticated deep architectures that paved the way for highly reliable video clip classifiers. In this paper, we propose to use transfer learning to leverage the strong results from action recognition for temporal localization. We apply a network inspired by the classical bag-of-words model for transfer learning and show that the resulting framewise class posteriors already provide good results without explicit temporal modeling. Further, we show that combining these features with a deep but simple convolutional network achieves state-of-the-art results on two challenging action localization datasets.
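The pipeline described above can be sketched in a minimal form: per-frame class posteriors from a pre-trained clip classifier are refined by a simple temporal convolution, and localization falls out as a per-frame argmax. This is an illustrative sketch, not the paper's exact model; the function names, kernel, and shapes are assumptions for demonstration.

```python
import numpy as np

# Illustrative sketch (not the paper's exact architecture): framewise class
# posteriors of shape (T, C) from a pre-trained clip classifier are smoothed
# by a 1D temporal filter applied independently per class.
def temporal_conv(posteriors, kernel):
    # posteriors: (T, C) per-frame class probabilities
    # kernel: (K,) temporal filter, applied per class over the time axis
    T, C = posteriors.shape
    return np.stack(
        [np.convolve(posteriors[:, c], kernel, mode="same") for c in range(C)],
        axis=1,
    )

def framewise_labels(posteriors):
    # temporal localization as a per-frame argmax over class scores
    return posteriors.argmax(axis=1)

rng = np.random.default_rng(0)
post = rng.random((100, 5))               # 100 frames, 5 classes (dummy data)
smooth = temporal_conv(post, np.ones(5) / 5)
labels = framewise_labels(smooth)
print(smooth.shape, labels.shape)         # (100, 5) (100,)
```

In the paper this refinement is a deep (but simple) convolutional network rather than a single fixed smoothing kernel; the sketch only shows where the framewise posteriors enter and what the localization output looks like.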
Action recognition is a fundamental problem in computer vision with many potential applications such as video surveillance, human-computer interaction, and robot learning. Given pre-segmented videos, the task is to recognize the actions happening within them. Historically, hand-crafted video features were used to address this task. With the success of deep ConvNets as an image analysis method, many extensions of standard ConvNets were proposed to process variable-length video data. In this work, we propose a novel recurrent ConvNet architecture, called recurrent residual networks, to address the task of action recognition. The approach extends ResNet, a state-of-the-art model for image classification. While the original formulation of ResNet aims at learning spatial residuals in its layers, we extend the approach by introducing recurrent connections that allow the network to learn a spatio-temporal residual. In contrast to fully recurrent networks, our temporal connections only allow a limited range of preceding frames to contribute to the output for the current frame, enabling efficient training and inference as well as limiting the temporal context to a reasonable local range around each frame. On a large-scale action recognition dataset, we show that our model improves over both the standard ResNet architecture and a ResNet extended by a fully recurrent layer.
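The core idea of a spatio-temporal residual with a bounded temporal window can be sketched as follows. This is a toy illustration under assumed shapes and linear maps, not the paper's trained layers: the residual for frame t combines the current frame with at most `window` preceding frames, on top of the usual identity shortcut.

```python
import numpy as np

# Toy sketch of the idea (weights and window size are illustrative): a
# residual unit whose residual branch also receives contributions from a
# limited window of preceding frames, yielding a spatio-temporal residual.
def recurrent_residual_step(frames, t, W_spatial, W_temporal, window=2):
    # frames: (T, D) per-frame features
    residual = frames[t] @ W_spatial          # spatial residual, as in ResNet
    for k in range(1, window + 1):            # bounded temporal context:
        if t - k >= 0:                        # only `window` preceding frames
            residual += frames[t - k] @ W_temporal
    return frames[t] + residual               # identity shortcut

rng = np.random.default_rng(0)
T, D = 8, 4
frames = rng.standard_normal((T, D))
Ws = rng.standard_normal((D, D)) * 0.1
Wt = rng.standard_normal((D, D)) * 0.1
out = np.stack([recurrent_residual_step(frames, t, Ws, Wt) for t in range(T)])
print(out.shape)  # (8, 4)
```

Because the window is fixed, each output frame depends on only a constant number of predecessors, which is what makes training and inference efficient compared to a fully recurrent layer whose state depends on the entire history.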
2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019
As visual categories become more and more fine-grained, maintaining high accuracy is a challenge. Since the visual world can be organized in a semantic hierarchy, usually in the form of a directed acyclic graph with many levels of abstraction, a classifier should be able to select an appropriate level, trading off specificity for accuracy in case of uncertainty. In this work, we study the problem of finding accuracy vs. specificity trade-offs. To this end, we propose a Level Selector network, which selects the class granularity for the prediction for an image or video, and a self-supervision-based training strategy to train it. As part of the empirical evaluation, we show that our approach achieves superior results compared to the current state of the art on large-scale image and video datasets.
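The specificity-for-accuracy trade-off can be made concrete with a simple confidence-based back-off over a class hierarchy. Note this greedy threshold rule is a hypothetical stand-in for intuition only; the paper's Level Selector is a learned network, and the hierarchy, threshold, and class names below are invented for the example.

```python
# Hypothetical sketch (NOT the paper's Level Selector network): walk up the
# class hierarchy from the predicted leaf, backing off to a coarser ancestor
# until the aggregated confidence clears a threshold.
def is_descendant(node, ancestor, hierarchy):
    while node is not None:
        if node == ancestor:
            return True
        node = hierarchy[node]
    return False

def select_level(probs, hierarchy, leaf, threshold=0.6):
    # probs: leaf class -> posterior probability
    # hierarchy: child -> parent (roots map to None)
    node, conf = leaf, probs[leaf]
    while conf < threshold:
        parent = hierarchy[node]
        if parent is None:
            break
        # an ancestor's confidence is the mass of its leaf descendants
        conf = sum(p for c, p in probs.items()
                   if is_descendant(c, parent, hierarchy))
        node = parent
    return node, conf

hierarchy = {"husky": "dog", "beagle": "dog", "tabby": "cat",
             "dog": "animal", "cat": "animal", "animal": None}
probs = {"husky": 0.4, "beagle": 0.35, "tabby": 0.25}
node, conf = select_level(probs, hierarchy, "husky")
print(node, round(conf, 2))  # dog 0.75
```

Here no single leaf is confident enough, but their common parent "dog" is, so the classifier answers at the coarser level rather than risking a wrong specific label.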
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018
Since annotating and curating large datasets is very expensive, there is a need to transfer knowledge from existing annotated datasets to unlabelled data. Data that is relevant for a specific application, however, usually differs from publicly available datasets since it is sampled from a different domain. While domain adaptation methods compensate for such a domain shift, they assume that all categories in the target domain are known and match the categories in the source domain. Since this assumption is violated under real-world conditions, we propose an approach for open set domain adaptation, where the target domain contains instances of categories that are not present in the source domain. The proposed approach achieves state-of-the-art results on various datasets for image classification and action recognition. Since the approach can be used for open set and closed set domain adaptation, as well as unsupervised and semi-supervised domain adaptation, it is a versatile tool for many applications.
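The defining difficulty of the open set setting is that a target sample may belong to none of the source categories. A minimal sketch of that rejection idea, assuming class centres in some feature space and an illustrative distance threshold (the paper solves a joint assignment problem rather than this greedy per-sample rule):

```python
import numpy as np

# Hypothetical sketch of the open-set assignment idea: a target sample is
# assigned to the nearest source class centre, or rejected as "unknown"
# when it is too far from every centre. Centres, features, and threshold
# are invented for illustration.
def assign_open_set(target, centres, threshold):
    # target: (N, D) target-domain features
    # centres: dict label -> (D,) source class centre
    labels = []
    for x in target:
        dists = {c: np.linalg.norm(x - mu) for c, mu in centres.items()}
        best = min(dists, key=dists.get)
        labels.append(best if dists[best] <= threshold else "unknown")
    return labels

centres = {"walk": np.array([0.0, 0.0]), "run": np.array([5.0, 5.0])}
target = np.array([[0.2, -0.1], [4.8, 5.1], [20.0, 20.0]])
assigned = assign_open_set(target, centres, threshold=2.0)
print(assigned)  # ['walk', 'run', 'unknown']
```

A closed set method would be forced to label the third sample as "walk" or "run"; the explicit "unknown" option is what lets the adaptation step ignore target instances from categories the source domain never saw.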
Papers by Ahsan Iqbal