We present the “Visual Inductive Priors for Data-Efficient Computer Vision” challenges. We offer five challenges, where models are to be trained from scratch in a data-deficient setting. The five challenges are:

  1. Image classification;
  2. Object detection;
  3. Instance segmentation, in collaboration with Synergy Sports;
  4. Action recognition;
  5. Re-identification, in collaboration with Synergy Sports.

These tasks were chosen to encourage researchers from any background to participate: no giant GPU clusters are needed, nor does training for a long time yield much improvement over the baseline results.

In addition to an award for the top submission by performance, this year will see the introduction of an additional jury-based prize for the most interesting submission.

Please note that the deadline for competing has passed. We will announce the winners of the challenges as well as the jury prizes on October 8th.

Synergy Sports

This challenge is co-organized by Synergy Sports. The instance segmentation and re-identification tasks are powered by their data, and they support the organization of the challenges.

Synergy Sports is changing how sport is organized, played, coached, commercialized, and experienced around the world. We help federations, leagues, clubs, coaches, referees, and players increase their performance on and off the field. Synergy Sports operates in 41 countries. In the US, Synergy serves every NBA, G League, WNBA, and NCAA Division I basketball team, as well as every MLB team and over 270 NCAA Division I baseball teams. Visit their website for more information.

Final results

The final results of the challenge are listed below. Only challenge entries accompanied by a technical report qualified for the final rankings. In response to concerns raised about the extension of the submission deadline, we have decided to merge the leaderboards from the original deadline and the extended deadline: each competitor is placed at the higher of their two ranks, which results in shared ranks. All first- through third-placed competitors, as well as the winners of the jury prizes, will receive a signed digital certificate.

In the interest of the competition, we make available the reports corresponding to the final leaderboards; they can be downloaded as a single ZIP file here.

Image classification

  • (1) Pengfei Sun, Xuan Jin, Xin He, Huiming Zhang, Yuan He, Hui Xue. Alibaba Group.
  • (2) Jiahao Wang, Hao Wang, Yifei Chen, Yanbiao Ma, Fang Liu, Licheng Jiao. School of Artificial Intelligence, Xidian University.
  • (2) Yilu Guo, Shicai Yang, Weijie Chen, Liang Ma, Di Xie, Shiliang Pu. Hikvision Research Institute.
  • (3) Tan Wang, Wanqi Yin, Jiaxin Qi, Jin Liu, Jayashree Karlekar, Hanwang Zhang. Nanyang Technological University & Panasonic R&D Center Singapore.
  • (4) Björn Barz, Lorenzo Brigato, Luca Iocchi, Joachim Denzler. Friedrich Schiller University Jena & Sapienza University of Rome.
  • (5) Xinran Song, Chang Liu, Wenxin He. Xidian University.

Jury prize: Tan Wang, Wanqi Yin, Jiaxin Qi, Jin Liu, Jayashree Karlekar, Hanwang Zhang. Nanyang Technological University & Panasonic R&D Center Singapore.

Object detection

  • (1) Xiaoqiang Lu, Guojin Cao, Xinyu Liu, Zixiao Zhang, Yuting Yang. School of Artificial Intelligence, Xidian University.
  • (1) Huiming Zhang, Xuan Jin, Pengfei Sun, Yuan He, Hui Xue. Alibaba Group.
  • (2) Junhao Niu, Yu Gu, Luyao Nie, Chao You. Xidian University.
  • (2) Linfeng Luo, Yanhong Liu, Fengming Cao. Pingan International Smart City.

Jury prize: Zhang Yuqi. Pingan International Smart City.

Instance segmentation

  • (1) Jahongir Yunusov, Shohruh Rakhmatov, Abdulaziz Namozov, Abdulaziz Gaybulayev, Tae-Hyong Kim. Department of Computer Engineering, Kumoh National Institute of Technology.
  • (2) Bo Yan, Fengliang Qi, Leilei Cao, Hongbin Wang. Ant Group.
  • (3) Pengyu Chen, Wanhua Li, Jiwen Lu. Department of Automation, Tsinghua University & Beijing University of Posts and Telecommunications.
  • (4) Zhenhong Chen, Ximin Zheng.

Jury prize: Jahongir Yunusov, Shohruh Rakhmatov, Abdulaziz Namozov, Abdulaziz Gaybulayev, Tae-Hyong Kim. Department of Computer Engineering, Kumoh National Institute of Technology.

Action recognition

  • (1) Jie Wu, Yuxi Ren, Xuefeng Xiao. ByteDance Inc.
  • (1) Ishan Dave, Naman Biyani, Brandon Clark, Rohit Gupta, Yogesh Rawat, Mubarak Shah. Center for Research in Computer Vision (CRCV), University of Central Florida & Indian Institute of Technology.
  • (2) Zihan Gao, Tianzhi Ma, Jiaxuan Zhao, Licheng Jiao, Fang Liu. Xidian University.

Jury prize: Ishan Dave, Naman Biyani, Brandon Clark, Rohit Gupta, Yogesh Rawat, Mubarak Shah. Center for Research in Computer Vision (CRCV), University of Central Florida & Indian Institute of Technology.

Re-identification

  • (1) Cen Liu, Yunbo Peng, Yue Lin. NetEase Games AI Lab.
  • (2) Siyu Chen, Dengjie Li, Lishuai Gao, Fan Liang, Wei Zhang, Lin Ma. Fudan University & Meituan.
  • (3) Fengliang Qi, Bo Yan, Leilei Cao, Hongbin Wang. Ant Group.
  • (4) XiMin Zheng, JiaQi Yang.

Jury prize: Siyu Chen, Dengjie Li, Lishuai Gao, Fan Liang, Wei Zhang, Lin Ma. Fudan University & Meituan.

Important dates

  • Challenges open: July 5, 2021;
  • Challenges close: September 24, 2021;
  • Technical reports due: October 1, 2021;
  • Winners announced: October 8, 2021.

Rules

  • We prohibit the use of any data other than the provided training data, i.e., no pre-training and no transfer learning (a minimal sketch of what training from scratch looks like follows this list).
  • For submissions on CodaLab to qualify for the challenge, we require that the authors submit either a technical report or a full paper about their final submission. See details below under “Report”. Submissions without an associated report or paper do not qualify for the competition.
  • Top contenders in the challenge may be required to have their submissions peer-reviewed to ensure reproducibility and compliance with the challenge rules. The organizers will contact contenders about this after the challenges close, if necessary.
  • Organizers retain the right to disqualify any submissions that violate these rules.
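
To make the first rule concrete, below is a minimal sketch of what training from scratch looks like in PyTorch. The ResNet-50 architecture, the dataset path, and the hyperparameters are illustrative assumptions only; they are not part of the official baselines, which are provided in the toolkit (see Resources).

```python
# Minimal sketch of rule 1: the model starts from random initialization and is
# trained only on the provided challenge data. The architecture, dataset path,
# and hyperparameters below are illustrative placeholders.
import torch
import torchvision

transform = torchvision.transforms.Compose([
    torchvision.transforms.Resize((224, 224)),
    torchvision.transforms.ToTensor(),
])
# Use only the training data provided with the challenge.
train_set = torchvision.datasets.ImageFolder("path/to/provided/train", transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# weights=None (pretrained=False on older torchvision) means no ImageNet
# checkpoint is loaded: no pre-training, no transfer learning.
model = torchvision.models.resnet50(weights=None, num_classes=len(train_set.classes))

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```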

Report

For a submission on CodaLab to qualify for the competition, we require the authors to submit a technical report of at least three pages about their submission. The deadline for these reports is October 1st. Authors are to upload their report to arXiv and send the link to vipriors-ewi AT tudelft DOT nl. Those unable to submit to arXiv can email their report directly to the same address. After the conference, we will publish the links to the technical reports on the workshop website.

Submission

Each of the five challenges is hosted on CodaLab, a public platform for AI challenges. Submissions must be made by uploading files containing predictions, in the format defined in the toolkit (see Resources for details), to the challenge pages listed below.
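
As a rough illustration only (the authoritative format and file names are defined in the toolkit; see Resources), a detection-style submission might be assembled as a JSON file of predictions packaged into a ZIP archive for upload. The COCO-style field names and file names below are assumptions made for the sake of example.

```python
# Rough illustration only: consult the toolkit documentation for the actual
# submission format. Field names and file names here are assumptions.
import json
import zipfile

# Example predictions for an object detection submission: one dict per detected
# object, with a bounding box in [x, y, width, height] format and a confidence score.
predictions = [
    {"image_id": 1, "category_id": 3, "bbox": [10.0, 20.0, 50.0, 80.0], "score": 0.91},
    {"image_id": 2, "category_id": 1, "bbox": [33.0, 40.0, 120.0, 60.0], "score": 0.78},
]

with open("predictions.json", "w") as f:
    json.dump(predictions, f)

# Package the predictions for upload to the CodaLab challenge page.
with zipfile.ZipFile("submission.zip", "w") as zf:
    zf.write("predictions.json")
```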

Please find the challenges here:

Resources

To facilitate submissions to the challenges, we provide a toolkit that contains:

  • Python tools for generating the appropriate training and validation data;
  • documentation of the required submission format for the challenges;
  • implementations of the baseline models for each challenge.

See the GitHub repository of the toolkit here.

Questions

If you have any questions, please first check the Frequently Asked Questions in the toolkit repository. If your question is not answered there, you can ask it on the forum of the specific challenge on the CodaLab website. If you need to ask us a question in private, you can email us at vipriors-ewi AT tudelft DOT nl.