
DVANet: Disentangling View and Action Features for Multi-View Action Recognition [AAAI 2024]

Nyle Siddiqui, Praveen Tirupattur, Mubarak Shah

Abstract: In this work, we present a novel approach to multi-view action recognition in which we guide learned action representations to be separated from view-relevant information in a video. Classifying action instances captured from multiple viewpoints is more difficult due to differences in background, occlusion, and visibility of the captured action across camera angles. To tackle the various problems introduced in multi-view action recognition, we propose a novel configuration of learnable transformer decoder queries, in conjunction with two supervised contrastive losses, to enforce the learning of action features that are robust to shifts in viewpoint. Our disentangled feature learning occurs in two stages: the transformer decoder uses separate queries to learn action and view information separately, which are then further disentangled using our two contrastive losses. We show that our model and method of training significantly outperform all other uni-modal models on four multi-view action recognition datasets: NTU RGB+D, NTU RGB+D 120, PKU-MMD, and N-UCLA. Compared to previous RGB works, we see maximal improvements of 1.5%, 4.8%, 2.2%, and 4.8% on each dataset, respectively.
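The two-stage design described above can be illustrated with a small sketch. The snippet below shows only the second stage in a generic form: a supervised contrastive loss (in the style of Khosla et al., 2020) applied once with action labels to the action-query features and once with view/camera labels to the view-query features. Everything here (the function supcon_loss, the names action_feats and view_feats, the temperature, and the label ranges) is an assumption for illustration; it is not the repository's code, and the exact form of the paper's two losses may differ.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss over one batch (illustrative sketch).

    features: (B, D) embeddings from one group of decoder queries.
    labels:   (B,)   integer labels (action IDs or camera/view IDs).
    """
    features = F.normalize(features, dim=1)
    logits = features @ features.t() / temperature             # (B, B) cosine-similarity logits
    self_mask = torch.eye(logits.size(0), dtype=torch.bool, device=logits.device)
    logits = logits.masked_fill(self_mask, -1e9)                # drop each sample from its own denominator
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_per_anchor = pos_mask.sum(dim=1).clamp(min=1)           # guard anchors with no positives
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_per_anchor
    return loss.mean()

# Illustrative usage: action_feats and view_feats stand in for the outputs of the
# separate learnable decoder queries; the labels come from dataset annotations.
B, D = 8, 256
action_feats, view_feats = torch.randn(B, D), torch.randn(B, D)
action_labels = torch.randint(0, 60, (B,))   # e.g., action class IDs
view_labels = torch.randint(0, 3, (B,))      # e.g., camera index
total = supcon_loss(action_feats, action_labels) + supcon_loss(view_feats, view_labels)
```

The intuition is that embeddings sharing an action label are pulled together regardless of viewpoint, while embeddings sharing a camera label are pulled together only in the separate view branch, keeping the two factors of variation apart.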

Usage Instructions

  1. Edit configuration.py and choose the correct CSV for training/testing (CS = Cross-Subject, CV = Cross-View); an illustrative sketch of this step follows the example below.
  2. Run the main script, setting the --dataset flag to train/test on the different datasets referenced in the paper.

Example:

sbatch script.slurm
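
For concreteness, step 1 might look roughly like the following; the variable names, dataset identifier, and CSV path layout are assumptions for this sketch and are not taken from configuration.py.

```python
# Illustrative only: names and paths are assumed, not the actual contents of configuration.py.
protocol = "CS"                                   # "CS" = Cross-Subject, "CV" = Cross-View
train_csv = f"splits/ntu60_{protocol}_train.csv"  # hypothetical CSV path
test_csv = f"splits/ntu60_{protocol}_test.csv"
```

The SLURM script is then expected to wrap a call along the lines of python main.py --dataset <dataset>; the main script name and the accepted dataset identifiers should be checked against the repository itself.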
