Questions about the video annotations
Hi, I have read your paper on arXiv and really appreciate your work! I have a few questions.
- After inspecting the compressed package metadata.zip, I found only 5 JSON annotation files per video, each corresponding to a single frame. Do you annotate only certain keyframes rather than the entire video (as mentioned in the dataset card)? If so, what determines the choice of keyframes?
- Will you open-source the code and weights of the TAVLO model?
Hi, thank you so much for your interest in our work ☺️
We do not annotate every frame in the video, as doing so would be inefficient and redundant. Instead, we select a few keyframes per video based on audio-based event detection. Specifically, we identify candidate segments where sound events occur and then sample representative frames from those segments. To avoid temporal redundancy, we maintain a minimum interval between frames and ensure that the sampled frames are distributed across the video for diverse event coverage. For more details, please refer to Section 3.1.2 "Automatic Clip and Frame Sampling", particularly the Frame Sampling part of our paper.
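To illustrate the idea, here is a minimal Python sketch of this kind of keyframe selection. The function name, the minimum-interval threshold, the frame budget, and the segment-midpoint heuristic are all assumptions made for the example, not our actual implementation; please treat Section 3.1.2 of the paper as the authoritative description.

```python
import numpy as np

def sample_keyframes(event_segments, video_duration, fps,
                     max_frames=5, min_interval_sec=2.0):
    """Pick representative frames from audio-event segments (illustrative only).

    event_segments: list of (start_sec, end_sec) tuples produced by some
    audio event detector -- a hypothetical input for this sketch.
    Returns at most `max_frames` frame indices, spaced at least
    `min_interval_sec` apart.
    """
    # Use the midpoint of each candidate segment as its representative time.
    candidates = sorted((s + e) / 2.0 for s, e in event_segments)

    selected = []
    for t in candidates:
        # Enforce a minimum temporal gap to avoid near-duplicate frames.
        if all(abs(t - u) >= min_interval_sec for u in selected):
            selected.append(t)

    # If too many candidates survive, keep an evenly spread subset so the
    # chosen frames cover the whole video rather than a single cluster.
    if len(selected) > max_frames:
        idx = np.linspace(0, len(selected) - 1, max_frames).round().astype(int)
        selected = [selected[i] for i in idx]

    # Convert timestamps to frame indices, clipped to the video length.
    last_frame = int(video_duration * fps) - 1
    return [min(int(t * fps), last_frame) for t in selected]


# Example: five detected audio events in a 30-second, 25 fps clip.
segments = [(1.0, 2.5), (4.0, 4.8), (5.0, 6.0), (12.0, 14.0), (25.0, 27.5)]
print(sample_keyframes(segments, video_duration=30.0, fps=25))
```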
Currently, we do not have concrete plans to release the TAVLO model code or pretrained weights, but we appreciate your interest and will keep the community posted if that changes!