Prompt Learning for Action Recognition

Xijun Wang*, Ruiqi Xian*, Tianrui Guan, Dinesh Manocha

Published on arXiv, 2023

Abstract

We present a new general learning approach for action recognition, Prompt Learning for Action Recognition (PLAR), which leverages the strengths of prompt learning to guide the learning process. Our approach is designed to predict the action label by helping the model focus on the descriptions or instructions associated with actions in the input videos. Our formulation uses various prompts, including optical flow, large vision models, and learnable prompts, to improve recognition performance. Moreover, we propose a learnable prompt method that learns to dynamically generate prompts from a pool of prompt experts under different inputs. By sharing the same objective, our proposed PLAR can optimize prompts that guide the model's predictions while explicitly learning input-invariant (prompt experts pool) and input-specific (data-dependent) prompt knowledge. We evaluate our approach on datasets consisting of both ground-camera and aerial videos, with scenes containing single-agent and multi-agent actions. In practice, we observe a 3.17-10.2% accuracy improvement on the aerial multi-agent dataset Okutama and a 0.8-2.6% improvement on the ground-camera single-agent dataset Something-Something V2.
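To make the dynamic prompt-generation idea concrete, here is a minimal illustrative sketch (not the authors' implementation; all dimensions, names, and the softmax-weighted mixing scheme are assumptions for exposition): a shared pool of prompt experts carries input-invariant knowledge, and a per-input selection produces an input-specific prompt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not from the paper: feature dim D, pool of K experts.
D, K = 16, 4

# Input-invariant knowledge: a pool of K learnable prompt experts.
prompt_pool = rng.standard_normal((K, D))

# Data-dependent projection mapping an input feature to pool weights.
W_select = rng.standard_normal((D, K))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def generate_prompt(video_feature):
    """Mix prompt experts with weights computed from the input feature."""
    weights = softmax(video_feature @ W_select)  # (K,) soft expert selection
    return weights @ prompt_pool                 # (D,) input-specific prompt

x = rng.standard_normal(D)   # stand-in for a pooled video feature
prompt = generate_prompt(x)
print(prompt.shape)          # -> (16,)
```

In training, both the expert pool and the selection weights would be optimized jointly with the recognition objective, so the pool captures knowledge shared across inputs while the mixing adapts to each video.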


The paper is available here. Please cite our work if you find it useful:

@misc{wang2023prompt,
      title={Prompt Learning for Action Recognition}, 
      author={Xijun Wang and Ruiqi Xian and Tianrui Guan and Dinesh Manocha},
      year={2023},
      eprint={2305.12437},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}