Medhini Narasimhan¹,², Licheng Yu², Sean Bell², Ning Zhang², Trevor Darrell¹
We introduce VideoTaskformer, a transformer model for learning representations of steps in instructional videos. Prior works learn step representations from single short video clips, independent of the task, and thus lack knowledge of task structure. VideoTaskformer instead learns a representation for each masked video step from the global context of all surrounding steps in the video, making the learned representations aware of task semantics and structure. We use these representations to detect mistakes in the execution and ordering of steps in new instructional videos.
Given the enormous number of instructional videos available online, learning a diverse array of multi-step task models from videos is an appealing goal. We introduce a new pre-trained video model, VideoTaskformer, focused on representing the semantics and structure of instructional videos. We pre-train VideoTaskformer with a simple and effective objective: predicting weakly supervised textual labels for steps that are randomly masked out from an instructional video (masked step modeling). Compared to prior work, which learns step representations locally from individual clips, our approach learns them globally, leveraging video of the entire surrounding task as context. From these learned representations, we can verify whether an unseen video correctly executes a given task, as well as forecast which steps are likely to follow a given step. We introduce two new benchmarks for detecting mistakes in instructional videos, verifying whether a step is anomalous and whether steps are executed in the right order. We also introduce a long-term forecasting benchmark, where the goal is to predict long-range future steps from a given step. Our method outperforms previous baselines on these tasks, and we believe the tasks will be a valuable way for the community to measure the quality of step representations. Additionally, we evaluate VideoTaskformer on three existing benchmarks (procedural activity recognition, step classification, and step forecasting) and demonstrate on each that our method outperforms existing baselines and achieves new state-of-the-art performance.
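For concreteness, below is a minimal sketch of the masked step modeling objective described above. It is not the authors' released code: the feature dimension, vocabulary size, masking rate, and names such as MaskedStepModeling are illustrative assumptions, and per-clip features are assumed to be precomputed.

```python
# Minimal sketch of masked step modeling (illustrative, not the authors' code).
# Assumption: each step clip is already encoded into a feature vector, and step
# labels come from a weakly supervised vocabulary of size `num_step_classes`.
import torch
import torch.nn as nn


class MaskedStepModeling(nn.Module):
    def __init__(self, feat_dim=768, num_step_classes=10000, num_layers=6, mask_prob=0.15):
        super().__init__()
        self.mask_prob = mask_prob
        self.mask_token = nn.Parameter(torch.zeros(feat_dim))
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.step_transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(feat_dim, num_step_classes)  # predicts the step label

    def forward(self, clip_feats, step_labels):
        # clip_feats: (B, S, D), one feature per step clip; step_labels: (B, S) long tensor
        mask = torch.rand(clip_feats.shape[:2], device=clip_feats.device) < self.mask_prob
        masked_feats = torch.where(mask.unsqueeze(-1), self.mask_token, clip_feats)
        ctx = self.step_transformer(masked_feats)   # global context over all steps
        logits = self.head(ctx)                     # (B, S, num_step_classes)
        # The loss is computed only at the masked positions.
        return nn.functional.cross_entropy(logits[mask], step_labels[mask])
```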
VideoTaskformer Pre-training (Left). VideoTaskformer f_VT learns step representations for the masked-out video clip v_i while attending to the other clips in the video. It consists of a video encoder f_vid, a step transformer f_trans, and a linear layer f_head, and is trained using weakly supervised step labels. Downstream Tasks (Right). We evaluate step representations learned from VideoTaskformer on 6 downstream tasks.
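The three components named in the figure could be composed roughly as follows. The clip encoder standing in for f_vid is a generic placeholder, and the layer count and dimensions are assumptions rather than the paper's exact configuration.

```python
# Rough composition of f_vid, f_trans, and f_head (illustrative sketch).
import torch
import torch.nn as nn


class VideoTaskformer(nn.Module):
    def __init__(self, clip_encoder: nn.Module, feat_dim=768, num_step_classes=10000):
        super().__init__()
        self.f_vid = clip_encoder                    # encodes each step clip independently
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.f_trans = nn.TransformerEncoder(layer, num_layers=6)  # attends across all steps
        self.f_head = nn.Linear(feat_dim, num_step_classes)        # weakly supervised step labels

    def step_representations(self, clips):
        # clips: (B, S, T, C, H, W) -- S step clips of T frames each
        B, S = clips.shape[:2]
        feats = self.f_vid(clips.flatten(0, 1))      # placeholder encoder -> (B*S, D)
        feats = feats.view(B, S, -1)
        # Each step representation is contextualized by every other step in the video.
        return self.f_trans(feats)                   # (B, S, D)

    def forward(self, clips):
        return self.f_head(self.step_representations(clips))  # (B, S, num_step_classes)
```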
We show qualitative results of our method on four tasks. The step labels are not used during training and are shown here only for illustration. (A) shows a result on mistake step detection: the model's input is the sequence of video clips on the left, and it correctly predicts the index of the mistake step, "2", as the output. In (B), the order of the first two steps is swapped, and our model classifies the sequence as incorrectly ordered. In (C), for the long-term forecasting task, the next 5 steps predicted by our model match the ground truth, and in (D), for the short-term forecasting task, the model correctly predicts the next step given the past 2 steps.
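As one possible way to wire the downstream tasks in (A) and (B), the frozen step representations can feed small task-specific heads. The head designs below (a per-step mistake score and a video-level ordering classifier), and the reuse of step_representations from the sketch above, are illustrative assumptions, not the paper's exact evaluation setup.

```python
# Sketch of downstream heads on top of frozen step representations (assumed design).
import torch
import torch.nn as nn


class MistakeDetectionHeads(nn.Module):
    def __init__(self, pretrained_model, feat_dim=768):
        super().__init__()
        self.backbone = pretrained_model.eval()      # frozen VideoTaskformer
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.mistake_head = nn.Linear(feat_dim, 1)   # (A) per-step mistake score
        self.order_head = nn.Linear(feat_dim, 2)     # (B) video-level: correctly ordered or not

    def forward(self, clips):
        with torch.no_grad():
            reps = self.backbone.step_representations(clips)  # (B, S, D)
        mistake_logits = self.mistake_head(reps).squeeze(-1)  # (B, S); argmax gives mistake index
        order_logits = self.order_head(reps.mean(dim=1))      # (B, 2)
        return mistake_logits, order_logits
```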
M. Narasimhan, L. Yu, S. Bell, N. Zhang, and T. Darrell. Learning and Verification of Task Structure in Instructional Videos. arXiv, 2023.
|
Acknowledgements |
|