We present the “Visual Inductive Priors for Data-Efficient Computer Vision” challenges. We offer four challenges, where models are to be trained from scratch, and we reduce the number of training samples to a fraction of the full set. The winners of each challenge are invited to present their winning method at the VIPriors workshop presentation at ECCV 2020. The four data-deficient challenges are:

  1. Image classification on ImageNet
  2. Semantic segmentation on Cityscapes
  3. Object detection on MS COCO
  4. Action recognition on UCF-101

These tasks were chosen to encourage researchers of any background to participate: no giant GPU clusters are needed, and training for longer will not yield much improvement over the baseline results.

Important dates

  • Challenges open: March 11, 2020
  • Challenges close: July 10, 2020
  • Technical reports due: July 17, 2020
  • Winners announced: July 24, 2020

The challenge has been completed! Please see the final rankings below.


As training data for these challenges we use subsets of publicly available datasets. We do not directly provide the data but instead expose tooling to generate the subsets from the canonical versions of the publicly available full datasets through our toolkit. Please refer to Resources for details.


  • We prohibit the use of any data other than the provided training data, i.e., no pre-training and no transfer learning.
  • For submissions on CodaLab to qualify for the challenge, we require the authors to submit either a technical report or a full paper about their final submission. See details below under “Report”. Submissions without an associated report or paper do not qualify for the competition.
  • Top contenders in the challenge may be required to submit their methods to peer review, to ensure reproducibility and that the rules of the challenge were followed. The organizers will contact contenders for this when necessary after the challenges close.
  • Organizers retain the right to disqualify any submissions that violate these rules.
  • The winners of each of the four challenges will get an opportunity to present their method at the VIPriors workshop at ECCV 2020. The organizers will contact contenders that are eligible for this opportunity after the challenges close.


For a submission on CodaLab to qualify for the competition, we require the authors to submit a technical report of at least three pages about the submission. The deadline for these reports is July 17th, the same date as the workshop paper deadline. Authors are to submit their report to arXiv and send us the link using the form linked below. Those unable to submit to arXiv can email their report to the email address listed under “Organizers”. Please use the same format as for the paper track. After the conference we will publish links to the technical reports on the workshop website.

Authors who are already submitting a paper about their method to the workshop paper track are not required to submit a technical report. Instead, they are to use the same submission form to refer the challenge organizers to their submitted paper.

Link to submission form: Google Form.


Each of the four challenges is hosted on CodaLab, a public platform for AI challenges. Submissions must be made by uploading files containing predictions, in the format defined in the toolkit (see Resources for details), to the challenge pages listed below.

Please find the challenges here:


To accommodate submissions to the challenges we provide a toolkit that contains

  • Python tools for generating the appropriate training and validation data;
  • documentation of the required submission format for the challenges;
  • implementations of the baseline models for each challenge.

See the GitHub repository of the toolkit here.
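To give a rough sense of how such data-deficient subsets can be built, the sketch below subsamples a fixed fraction of each class with a fixed seed. This is only an illustration: the function name, the per-class fraction, and the sampling scheme are assumptions for this example, not the actual VIPriors toolkit API — please use the toolkit itself to generate the official challenge data.

```python
# Hypothetical illustration of per-class subsampling with a fixed seed.
# NOT the VIPriors toolkit API: names and the 50% fraction are assumed
# here purely for demonstration.
import random
from collections import defaultdict

def stratified_subset(samples, labels, fraction, seed=0):
    """Keep `fraction` of the samples of each class, deterministically."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for sample, label in zip(samples, labels):
        by_class[label].append(sample)
    subset = []
    for label in sorted(by_class):
        items = sorted(by_class[label])
        rng.shuffle(items)            # deterministic given the seed
        keep = max(1, int(len(items) * fraction))
        subset.extend((s, label) for s in items[:keep])
    return subset

# Toy example: two classes of 10 images each, keep half of each class.
samples = [f"img_{c}_{i}.jpg" for c in "ab" for i in range(10)]
labels = [s.split("_")[1] for s in samples]
sub = stratified_subset(samples, labels, fraction=0.5, seed=42)
```

Fixing the seed matters for a challenge setting: every participant must train on exactly the same subset for results to be comparable.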


If you have any questions, please first check the Frequently Asked Questions in the toolkit repository. If your question persists, you can ask it on the forums of the specific challenge on the CodaLab website. If you need to ask us a question in private, you can email us at vipriors-ewi AT tudelft DOT nl.

Final rankings

Listed below are the final rankings for each challenge, after reviewing all submissions for which a technical report or paper was received. We congratulate the winners! More details, including presentations of the winning methods and the full results of the challenges, will follow at the workshop.

Image Classification

  1. sunpengfei
  2. Tennant
  3. ByeongjoKim
  4. Samsung-SLSI-MSL-SS aka Ben1365

Semantic Segmentation

  1. xmj
  2. Samsung-SLSI-MSL-SS
  3. jesse1029
  4. MrGranddy
  5. rpytel

Object Detection

  1. feishen
  2. Guyz
  3. DeepBlueAI

Action Recognition

  1. ishan.dave
  2. Samjang_masiso
  3. Singularity213
  4. taeohkim