We present the “Visual Inductive Priors for Data-Efficient Computer Vision” challenges. We offer four challenges, in which models are to be trained from scratch in a data-deficient setting. The four challenges are:

  1. Image classification;
  2. Object detection;
  3. Instance segmentation, in collaboration with Synergy Sports;
  4. Action recognition.

These tasks were chosen to encourage researchers of any background to participate: no giant GPU clusters are needed, and training for a long time will not yield much improvement over the baseline results.

In addition to an award for the top submission by performance, this year we introduce an additional jury prize for the most interesting submission.

Synergy Sports

Our VIPriors challenges are co-organized by Synergy Sports. The instance segmentation task is powered by their data, and they are lending their support in organizing the challenges.

Synergy Sports is changing how sport is organized, played, coached, commercialized, and experienced around the world. We help federations, leagues, clubs, coaches, referees, and players increase their performance on and off the field. Synergy Sports operates in 41 countries. In the US, Synergy serves every single NBA, G League, WNBA, and NCAA Division I basketball team, as well as every MLB team and over 270 NCAA Division I baseball teams. Visit their website for more information.

Final results

The final results of the challenge are listed below. Only challenge entries accompanied by a technical report qualified for the final rankings. All first- through third-placed competitors, as well as the winners of the jury prizes, will receive a digitally signed certificate. Congratulations to all winners!

Image Classification

  1. Tianzhi Ma, Zihan Gao, Wenxin He, Licheng Jiao. School of Artificial Intelligence, Xidian University, Xi’an, China
  2. Xiaoqiang Lu, Zhongjian Huang, Yuting Yang, Chenghui Li, Chao Li. School of Artificial Intelligence, Xidian University, Xi’an, China
  3. Yi Zuo, Zitao Wang, Xiaowen Zhang, Licheng Jiao. School of Artificial Intelligence, Xidian University, Xi’an, China
  4. Jiahao Wang, Hao Wang, Hua Yang, Fang Liu, Licheng Jiao. School of Artificial Intelligence, Xidian University, Xi’an, China
  5. Wenxuan She, Mengjia Wang, Zixiao Zhang, Fang Liu, Licheng Jiao. School of Artificial Intelligence, Xidian University, Xi’an, China

Jury prize: Xiaoqiang Lu, Chao Li, Chenghui Li, Xiao Tan, Zhongjian Huang, Yuting Yang. School of Artificial Intelligence, Xidian University, Xi’an, China

Object Detection

  1. Xiaoqiang Lu, Yuting Yang, Zhongjian Huang, Xiao Tan, Chenghui Li. School of Artificial Intelligence, Xidian University, Xi’an, China
  2. Bocheng Xu, Rui Zhang, and Yanyi Feng. Department of AI R&D, Terminus Technologies, China
  3. Jiawei Zhao, Zhaolin Cui, Xuede Li, Xingyue Chen, Junfeng Luo, Xiaolin Wei. Vision Intelligence Department (VID), Meituan
  4. Ping Zhao, Xinyan Zhang, Weijian Sun, and Xin Zhang. Huawei Technologies Co., Ltd., China & Tongji University, Shanghai, China
  5. Shijie Xiao, Wenyuan Qiu, Zhongyang Huang. OmniVision Technologies Singapore Pte Ltd
  6. Jiwoo Lee, Seungbum Hong, Jihyun Lee, Hyeongyu Chi, and SeulGi Hong. VisionAI, hutom, Seoul, Republic of Korea
  7. Yiqing Xu, Yu Liu, Min Gao

Jury prize: Ping Zhao, Xinyan Zhang, Weijian Sun, Xin Zhang. Huawei Technologies Co., Ltd., China & Tongji University, Shanghai, China

Instance Segmentation

  1. Bo Yan, Xingran Zhao, Yadong Li, Hongbin Wang. Ant Group, China
  2. Shared second place:
    • Fuxing Leng, Jinghua Yan, Peibin Chen, Chenglong Yi. ByteDance, Huazhong University of Science and Technology
    • Xiaoqiang Lu, Yuting Yang, Zhongjian Huang. School of Artificial Intelligence, Xidian University, Xi’an, China
  3. Junpei Zhang, Kexin Zhang, Rui Peng, Yanbiao Ma, Licheng Jiao, Fang Liu. Team Yanbiao_Ma
  4. Yi Cheng, ShuHan Wang, Yifei Chen, Zhongjian Huang. School of Artificial Intelligence, Xidian University, Xi’an, China
  5. Tianheng Cheng, Xinggang Wang, Shaoyu Chen, Qian Zhang, Chang Huang, Zhaoxiang Zhang, Wenqiang Zhang, Wenyu Liu. School of EIC, Huazhong University of Science & Technology; Horizon Robotics; Institute of Automation, Chinese Academy of Sciences (CASIA)

Jury prize: Tianheng Cheng, Xinggang Wang, Shaoyu Chen, Qian Zhang, Chang Huang, Zhaoxiang Zhang, Wenqiang Zhang, Wenyu Liu. School of EIC, Huazhong University of Science & Technology; Horizon Robotics; Institute of Automation, Chinese Academy of Sciences (CASIA)

Action Recognition

  1. Xinran Song, Chengyuan Yang, Chang Liu, Yang Liu, Fang Liu, Licheng Jiao. School of Artificial Intelligence, Xidian University, Xi’an, China
  2. Wenxin He, Zihan Gao, Tianzhi Ma, Licheng Jiao. School of Artificial Intelligence, Xidian University, Xi’an, China
  3. Bo Tan, Yang Xiao, Wenzheng Zeng, Xingyu Tong, Zhiguo Cao, Joey Tianyi Zhou. Huazhong University of Science and Technology (China) and CFAR (Singapore)

Jury prize: Bo Tan, Yang Xiao, Wenzheng Zeng, Xingyu Tong, Zhiguo Cao, Joey Tianyi Zhou. Huazhong University of Science and Technology (China) and CFAR (Singapore)

Important dates

  • Challenges open: June 1st, 2022;
  • Challenges close: September 1st, 2022;
  • Technical reports due: September 9th, 2022;
  • Winners announced: Live session @ ECCV, October 24th, 2022.

Rules

  • We prohibit the use of any data other than the provided training data, i.e., no pre-training and no transfer learning (see the sketch after this list).
  • For a CodaLab submission to qualify for the challenge, the authors must submit either a technical report or a full paper about their final submission. See the details below under “Report”. Submissions without an associated report or paper do not qualify for the competition.
  • Top contenders in the challenge may be required to submit their entries for review to verify reproducibility and compliance with the challenge rules. The organizers will contact contenders for this, when necessary, after the challenges close.
  • Organizers retain the right to disqualify any submissions that violate these rules.
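To make the “train from scratch” rule concrete, below is a minimal sketch of the kind of model initialization the rules allow, assuming a PyTorch/torchvision setup. The architecture and number of classes are illustrative only; any framework is fine as long as no pre-trained weights or external data are used.

    import torch
    import torchvision

    # Allowed: build the model with randomly initialized weights (no pre-training).
    # num_classes is illustrative, not the actual number of classes in the challenge data.
    model = torchvision.models.resnet18(weights=None, num_classes=100)

    # NOT allowed under the challenge rules: loading externally pre-trained weights, e.g.
    #   torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.IMAGENET1K_V1)

    # Training then proceeds only on the provided challenge training data.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)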

Report

For a submission on CodaLab to qualify for the competition, we require the authors to submit a technical report of at least three pages about the submission. The deadline for these reports is September 9th. Authors should upload their report to arXiv and send the link to vipriors-ewi AT tudelft DOT nl. Those unable to submit to arXiv can email their report directly to vipriors-ewi AT tudelft DOT nl. After the conference we will publish the links to the technical reports on the workshop website.

Submission

Each of the four challenges is hosted on CodaLab, a public platform for AI challenges. Submissions must be made by uploading files containing predictions according to the format defined in the toolkit (see Resources for details) to the challenge pages listed below.
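As an illustration of the submission mechanics only, the sketch below writes a set of predictions to a JSON file and packages it into a zip archive for upload. The file name, prediction fields, and archive layout here are assumptions for illustration; the authoritative format is the one defined in the toolkit.

    import json
    import zipfile

    # Hypothetical predictions; the real field names and structure are defined
    # by the challenge toolkit, not by this sketch.
    predictions = [
        {"image_id": 1, "category_id": 3, "bbox": [10.0, 20.0, 50.0, 80.0], "score": 0.91},
        {"image_id": 2, "category_id": 1, "bbox": [5.0, 5.0, 30.0, 40.0], "score": 0.78},
    ]

    # Write the predictions to a JSON file...
    with open("predictions.json", "w") as f:
        json.dump(predictions, f)

    # ...and package it into a zip archive for upload to the CodaLab challenge page.
    with zipfile.ZipFile("submission.zip", "w") as zf:
        zf.write("predictions.json")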

Please find the challenges here:

Resources

To accommodate submissions to the challenges, we provide a toolkit that contains:

  • Python tools for generating the appropriate training and validation data;
  • documentation of the required submission format for the challenges;
  • implementations of the baseline models for each challenge.

See the GitHub repository of the toolkit here.

Questions

If you have any questions, please first check the Frequently Asked Questions in the toolkit repository. If your question persists, you can ask it on the forums of the specific challenge on the CodaLab website. If you need to ask us a question in private, you can email us at vipriors-ewi AT tudelft DOT nl.