YouMakeup VQA Challenge: Towards Fine-grained Action Understanding in Domain-Specific Videos

Authors Qin Jin, Weiying Wang, Shizhe Chen, Ludan Ruan, Linli Yao
Journal/Conference Name arXiv preprint
Paper Abstract The goal of the YouMakeup VQA Challenge 2020 is to provide a common benchmark for fine-grained action understanding in domain-specific videos, e.g., makeup instructional videos. We propose two novel question-answering tasks to evaluate models' fine-grained action understanding abilities. The first task is Facial Image Ordering, which aims to understand the visual effects of different actions, expressed in natural language, on the facial object. The second task is Step Ordering, which aims to measure cross-modal semantic alignment between untrimmed videos and multi-sentence texts. In this paper, we present the challenge guidelines, the dataset used, and the performance of baseline models on the two proposed tasks. The baseline code and models are released at https://github.com/AIM3-RUC/YouMakeup_Baseline.
Date of publication 2020
Code Programming Language Python
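
To make the two ordering tasks concrete, here is a minimal Python sketch of what a question instance and an exact-match scoring routine might look like. This is a hypothetical illustration only: the field names, file names, and the exact-match metric are assumptions chosen for clarity, not the official dataset schema or evaluation protocol (see the linked baseline repository for those).

# Hypothetical illustration of the two YouMakeup VQA ordering tasks.
# All field names and values below are assumptions for illustration,
# not the official dataset schema.

# Step Ordering: given an untrimmed video and shuffled step
# descriptions, predict the order in which the steps occur.
step_ordering_question = {
    "video_id": "example_video",
    "candidate_steps": [
        "Apply foundation evenly across the face",
        "Blend eyeshadow on the eyelids",
        "Apply lipstick to the lips",
    ],
    "answer_order": [0, 1, 2],  # ground-truth temporal order (indices)
}

# Facial Image Ordering: given step descriptions in natural language
# and shuffled face images, predict the order of the visual states
# produced by the described actions.
image_ordering_question = {
    "step_description": "Apply foundation, then blush, then lipstick",
    "candidate_images": ["face_0.jpg", "face_1.jpg", "face_2.jpg"],
    "answer_order": [2, 0, 1],
}

def ordering_accuracy(predictions, ground_truths):
    """Exact-match accuracy over predicted orderings (a common metric
    for ordering tasks; the challenge's official metric may differ)."""
    correct = sum(p == g for p, g in zip(predictions, ground_truths))
    return correct / len(ground_truths)

if __name__ == "__main__":
    preds = [[0, 1, 2], [2, 0, 1]]
    golds = [step_ordering_question["answer_order"],
             image_ordering_question["answer_order"]]
    print(f"ordering accuracy: {ordering_accuracy(preds, golds):.2f}")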