CLIP-It! Language-Guided Video Summarization
Medhini Narasimhan    Anna Rohrbach    Trevor Darrell   
UC Berkeley
[Paper]
[Bibtex]
[GitHub]

We introduce CLIP-It, a language-guided multimodal transformer for generic and query-focused video summarization. The figure shows results from our method. Given a day-long video of a national park tour, the generic summary (top) is a video with relevant and diverse keyframes. When using the query “All the scenes containing restaurants and shopping centers”, the generated query-focused summary includes all the matching scenes. Similarly, the query “All water bodies such as lakes, rivers, and waterfalls” yields a short summary containing all the water bodies present in the video.

Abstract

A generic video summary is an abridged version of a video that conveys the whole story and features the most important scenes. Yet the importance of scenes in a video is often subjective, and users should have the option of customizing the summary by using natural language to specify what is important to them. Further, existing models for fully automatic generic summarization have not exploited available language models, which can serve as an effective prior for saliency. This work introduces CLIP-It, a single framework for addressing both generic and query-focused video summarization, typically approached separately in the literature. We propose a language-guided multimodal transformer that learns to score frames in a video based on their importance relative to one another and their correlation with a user-defined query (for query-focused summarization) or an automatically generated dense video caption (for generic video summarization). Our model can be extended to the unsupervised setting by training without ground-truth supervision. We outperform baselines and prior work by a significant margin on both standard video summarization datasets (TVSum and SumMe) and a query-focused video summarization dataset (QFVS). In particular, we achieve large improvements in the transfer setting, attesting to our method's strong generalization capabilities.


Method

Overview of CLIP-It. Given an input video, CLIP-It generates a summary conditioned on either a user-defined natural language query or an automatically generated dense video caption. The Language-Guided Attention head fuses the image and language embeddings, and the Frame-Scoring Transformer jointly attends to all frames to predict their relevance scores. During inference, the video summary is constructed by converting frame scores to shot scores and using the Knapsack algorithm to select high-scoring shots, as sketched below.
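To make the pipeline above concrete, here is a minimal PyTorch sketch of the two stages it describes: a language-guided attention step that fuses frame and text embeddings followed by a frame-scoring transformer, and a 0/1 Knapsack selection of high-scoring shots at inference. All module names, dimensions, and hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only; layer sizes and structure are assumptions.
import torch
import torch.nn as nn


class LanguageGuidedFrameScorer(nn.Module):
    """Fuses frame and language embeddings, then scores frames jointly."""

    def __init__(self, dim=512, heads=8, layers=2):
        super().__init__()
        # Language-Guided Attention: frames attend to query/caption tokens.
        self.lang_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Frame-Scoring Transformer: frames attend to one another.
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.frame_transformer = nn.TransformerEncoder(enc_layer, layers)
        self.score_head = nn.Linear(dim, 1)

    def forward(self, frame_emb, text_emb):
        # frame_emb: (B, num_frames, dim); text_emb: (B, num_tokens, dim)
        fused, _ = self.lang_attn(frame_emb, text_emb, text_emb)
        ctx = self.frame_transformer(fused)
        return self.score_head(ctx).squeeze(-1)  # (B, num_frames) scores


def knapsack_select(shot_scores, shot_lengths, budget):
    """0/1 Knapsack over shots: maximize total score within a length budget."""
    n = len(shot_scores)
    dp = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, v = shot_lengths[i - 1], shot_scores[i - 1]
        for c in range(budget + 1):
            dp[i][c] = dp[i - 1][c]
            if w <= c:
                dp[i][c] = max(dp[i][c], dp[i - 1][c - w] + v)
    # Backtrack to recover the indices of the selected shots.
    selected, c = [], budget
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            selected.append(i - 1)
            c -= shot_lengths[i - 1]
    return sorted(selected)
```

In this sketch, per-shot scores would be obtained by averaging the predicted frame scores within each shot, and the budget corresponds to the target summary length (e.g., 15% of the video duration).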


Qualitative Results



Paper

M. Narasimhan, A. Rohrbach, T. Darrell.
CLIP-It! Language-Guided Video Summarization

NeurIPS 2021.

[Paper] | [Bibtex]



Acknowledgements

We thank Arun Mallya for very helpful discussions and feedback. We'd also like to thank Huijuan Xu for feedback on the draft. This work was supported in part by DoD including DARPA’s XAI, LwLL, and SemaFor programs, as well as BAIR’s industrial alliance programs.

Template cloned from here!