Questions about the video annotations

#1
by XDrunker - opened

Hi, I have read your paper on arXiv and really appreciate your work! I have a few questions.

  1. After checking the compressed package metadata.zip, I found only five JSON annotation files per video, each corresponding to a single frame. Do you annotate only certain keyframes rather than the entire video (as mentioned in the dataset card)? What determines the choice of keyframes?
  2. Will you open-source the code and weights of the TAVLO model?
Machine Intelligence and Pattern Analysis Lab org

Hi, thank you so much for your interest in our work ☺️

  1. We do not annotate every frame in the video, as doing so would be inefficient and redundant. Instead, we select a few keyframes per video based on audio-based event detection. Specifically, we identify candidate segments where sound events occur and then sample representative frames from those segments. To avoid temporal redundancy, we maintain a minimum interval between frames and ensure that the sampled frames are distributed across the video for diverse event coverage (a rough sketch of this logic is shown after this reply). For more details, please refer to Section 3.1.2 "Automatic Clip and Frame Sampling", particularly the Frame Sampling part of our paper.

  2. Currently, we do not have concrete plans to release the TAVLO model code or pretrained weights, but we appreciate your interest and will keep the community posted if that changes!
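To make the frame-sampling logic in point 1 concrete, here is a minimal sketch. The segment format, the number of frames per video, and the 2-second minimum interval are illustrative assumptions, not the dataset's actual pipeline; see Section 3.1.2 of the paper for the real procedure.

```python
def sample_keyframes(event_segments, num_frames=5, min_interval_s=2.0):
    """Pick representative frame timestamps from audio event segments.

    event_segments : list of (start_s, end_s) tuples where sound events occur,
                     e.g. produced by an off-the-shelf audio event detector.
    Returns up to `num_frames` timestamps spaced at least `min_interval_s` apart,
    so the selected frames cover events across the whole video.
    """
    # Candidate timestamps: midpoints of detected event segments, in temporal order.
    candidates = sorted((start + end) / 2.0 for start, end in event_segments)

    selected = []
    for t in candidates:
        # Enforce a minimum interval to avoid temporally redundant frames.
        if all(abs(t - prev) >= min_interval_s for prev in selected):
            selected.append(t)
        if len(selected) == num_frames:
            break
    return selected

# Example: four detected sound events in a 30-second clip.
print(sample_keyframes([(1.0, 3.0), (2.5, 4.0), (10.0, 12.0), (25.0, 28.0)]))
# -> [2.0, 11.0, 26.5]  (the 3.25 s midpoint is dropped: too close to 2.0 s)
```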

hahyeon610 changed discussion status to closed
