Saving data by adding visual knowledge priors to Deep Learning

Call for papers

We invite researchers to submit their recent work on data-efficient computer vision.

Camera-ready revisions: September 10th.

Present a poster

We invite researchers to present their recent papers on data-efficient computer vision.

Deadline has passed.

VIPriors challenges

Including data-efficient action recognition, classification, detection and segmentation.

Final rankings are out now.

Workshop now fully online

Update August 3rd: we have moved forward the deadline for the camera-ready versions of accepted papers to September 10th. Submission instructions have been sent to the authors through OpenReview.

Update July 29th:

  • Notifications for the paper track have gone out to authors. The final papers and their pre-recorded talks will be published through the ECCV conference platform on August 16th.
  • The deadline for submitting posters has passed. We are in the process of reviewing these submissions. The accepted posters will be made available through this website.
  • The revised program has been published. The workshop live sessions will be hosted at 8:00 and 18:00 UTC+1, with identical programming. Papers (oral & poster) will be presented by playing the pre-recorded talk available on the conference platform, followed by live Q&A.
  • The keynote talks will be made available either through the conference platform or through this website (TBD). Attendees of the workshop are invited to prepare their attendance after August 16th by watching the keynote talks and/or checking out the accepted papers.

Update June 26th: Following the main conference's decision to move fully online, our workshop will also be fully online. Keynotes, orals and posters will go through as planned, though in online form. Deadlines will not change, except for the poster submission deadline, which has been pulled forward by two weeks to July 29th to accommodate uploading the required materials to the conference website in time.

About the workshop

This workshop focuses on how to pre-wire deep networks with generic visual inductive priors: innate knowledge structures that allow incorporating hard-won existing generic knowledge from physics, such as light reflection or geometry. Visual inductive priors are data efficient: what is built in no longer has to be learned, saving valuable training data.

Data is fueling deep learning, yet data is costly to gather and expensive to annotate. Training on massive datasets consumes huge amounts of energy, adding to our carbon footprint. In addition, only a select few deep learning behemoths have billions of data points and thousands of expensive GPUs at their disposal. This workshop aims beyond these few very large companies, at the long tail of smaller companies and universities with smaller datasets and smaller hardware clusters. We focus on data efficiency through visual inductive priors.

Excellent recent research investigates data efficiency in deep networks by exploiting other data sources, such as unsupervised learning, re-using existing datasets, or synthesizing artificial training data. Not enough attention is given to how to overcome the data dependency by adding prior knowledge to deep nets. As a consequence, all knowledge has to be (re-)learned implicitly from data, making deep networks hard-to-understand black boxes that are susceptible to dataset bias and require huge data and compute resources. This workshop aims to remedy this gap by investigating how to flexibly pre-wire deep networks with generic visual innate knowledge structures, which allow incorporating hard-won existing knowledge from physics, such as light reflection or geometry.
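
As a concrete illustration of such a physics-based prior, consider a fixed color-invariant input layer. The sketch below is our own illustrative example, not a method endorsed by the workshop: under a simple Lambertian reflection model, normalized rgb cancels the intensity of the light source, so a network fed these channels no longer has to learn intensity invariance from data (the NormalizedRGB module name is hypothetical).

```python
import torch
import torch.nn as nn

class NormalizedRGB(nn.Module):
    """Fixed, parameter-free prior: divide each channel by overall intensity."""
    def forward(self, x):                  # x: (B, 3, H, W), non-negative RGB
        intensity = x.sum(dim=1, keepdim=True).clamp(min=1e-6)
        return x / intensity

layer = NormalizedRGB()
img = torch.rand(1, 3, 8, 8)
brighter = 2.5 * img                       # same scene under a stronger light
print(torch.allclose(layer(img), layer(brighter)))  # True: intensity cancels
```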

The great power of deep neural networks is their incredible flexibility to learn. The direct consequence of such power is that small datasets can simply be memorized, and the network will likely not generalize to unseen data. Regularization aims to prevent such over-fitting by adding constraints to the learning process. Much work has been done on regularization of internal network properties and architectures. In this workshop, we focus on regularization methods based on innate priors. There is strong evidence that an innate prior benefits deep nets: adding convolution to deep networks yields a convolutional neural network (CNN), which is hugely successful and has permeated the entire field. While convolution was initially applied to images, it has since been generalized to graph networks, speech, language, 3D data, video, etc. Convolution models translation invariance in images: an object may occur anywhere in the image. Instead of learning separate parameters at each image location, convolution considers only local relations and shares parameters over all locations, saving a huge number of parameters to learn and thus strongly reducing the number of examples needed to learn from. This workshop aims to build on the great success of convolution by exploiting innate regularizing structures that yield a significant reduction in training data.
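
To make the parameter saving concrete, here is a back-of-the-envelope comparison (our illustration, with arbitrarily chosen layer sizes): a fully connected layer mapping a 32x32 RGB image to 16 feature maps learns a weight for every pair of locations, while a 3x3 convolution shares one small kernel across all locations.

```python
import torch.nn as nn

h, w, c_in, c_out, k = 32, 32, 3, 16, 3                  # illustrative sizes

fc = nn.Linear(h * w * c_in, h * w * c_out)              # no spatial prior
conv = nn.Conv2d(c_in, c_out, kernel_size=k, padding=1)  # translation prior

n_params = lambda m: sum(p.numel() for p in m.parameters())
print(f"fully connected: {n_params(fc):,} parameters")   # 50,348,032
print(f"convolution:     {n_params(conv):,} parameters") # 448
```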

Workshop program

Our program will include a panel discussion with keynote speakers dr. Matthias Bethge and prof. Charles Leek, oral presentations by challenge winners and selected papers accepted to the workshop, as well as poster presentations for accepted submissions and other recent relevant works. The keynote talks will be made available before the workshop (TBA).

Time (UTC+1)   Session        Program item
8:00 / 18:00   Keynotes       Panel discussion + Q&A
8:40 / 18:40   Break
8:45 / 18:45   Oral session   Oral presentations
9:10 / 19:10                  Q&A
9:25 / 19:25   Challenges     Introduction & awards
9:30 / 19:30                  Challenge winners presentations
9:35 / 19:35   Posters        Poster presentations
9:45 / 19:45                  Q&A
9:50 / 19:50                  External poster presentations
9:55 / 19:55   Closing

Call for papers

We solicit submissions that, in the broad sense, focus on data efficiency through visual inductive priors, covering but not limited to the following topics:

  • Improving data efficiency of Deep Computer Vision methods using prior knowledge about the task domain
  • Analysis on the properties of Deep Learning representations as they relate to visual inductive priors
  • Transformation-equivariant image representations, e.g. scale-equivariance, rotation-equivariance, etc. (see the sketch after this list)
  • Color invariants/constants in Deep Learning
  • Object persistence between video frames
  • Shape-based representations for Deep Learning
  • Texture/shape bias in Convolutional Neural Networks
  • Alternative compact filter bases for Deep Learning
  • Capsule Networks
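
As a taste of the transformation-equivariance topic above, the sketch below (our own simplified example in the spirit of group-equivariant CNNs, not code from any submission) lifts a plain convolution to the group of 90-degree rotations and checks the equivariance numerically: rotating the input rotates the output spatially and cyclically shifts its orientation channels.

```python
import torch
import torch.nn.functional as F

def p4_lifting_conv(x, weight):
    """Convolve x with four rotated copies of `weight`.

    x:      (B, C_in, H, W) input images
    weight: (C_out, C_in, k, k) base kernel
    returns (B, C_out, 4, H', W'): feature maps with an orientation axis
    """
    rotated = [torch.rot90(weight, r, dims=(2, 3)) for r in range(4)]
    return torch.stack([F.conv2d(x, w) for w in rotated], dim=2)

x = torch.randn(1, 3, 9, 9)
w = torch.randn(8, 3, 3, 3)

y = p4_lifting_conv(x, w)
y_of_rotated = p4_lifting_conv(torch.rot90(x, 1, dims=(2, 3)), w)

# Rotating the input = rotating the output spatially + shifting orientations.
expected = torch.rot90(y, 1, dims=(3, 4)).roll(1, dims=2)
print(torch.allclose(y_of_rotated, expected, atol=1e-5))  # True
```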

Important dates

  • Submissions open: March 1, 2020
  • Submission deadline: July 17, 2020
  • Notification of acceptance: July 31, 2020
  • Presentation materials deadline: August 14, 2020
  • Camera-ready deadline: September 10, 2020
  • Workshop: August 23, 2020

Submission guidelines

  • Submissions must be entered in OpenReview: link.
  • Submissions must follow the ECCV 2020 submission format.
  • Optional supplementary material can be submitted through OpenReview (single .zip file, maximum of 50MB). The deadline for supplementary material is the same as the paper submission deadline, i.e. July 17, 2020. Reviewers are strongly encouraged but not required to review the supplementary material.
  • Reviewing will be double-blind.
  • Accepted papers will be published in ECCV 2020 Workshop proceedings.
  • Authors of accepted papers will be invited to present their work as a poster presentation at the workshop. Authors of a selection of papers will be invited to present their work orally at the workshop.

Posters

Authors of recent and relevant works (including works published in the main ECCV 2020 conference paper track) are invited to present a poster of their work at our workshop. Please contact the organizers if you would like to present.

Important dates

  • Deadline: July 28th, 2020

Note that this deadline has been pulled forward to July 28th to accommodate uploading the required materials to the conference website in time. The organizers will contact the authors of accepted posters about hosting their posters.

VIPriors Challenges

We present the “Visual Inductive Priors for Data-Efficient Computer Vision” challenges: four challenges in which models are to be trained from scratch, with the number of training samples reduced to a fraction of the full set.

Please see the challenges page for submission instructions and deadlines.
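
For readers unfamiliar with the setting, the sketch below shows what training from scratch on a reduced subset typically looks like. The dataset (CIFAR-10) and the 10% fraction are illustrative assumptions, not the official challenge splits, which are defined on the challenges page.

```python
import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

# Illustrative only: the official challenges provide their own fixed splits.
full = datasets.CIFAR10("data", train=True, download=True,
                        transform=transforms.ToTensor())

fraction = 0.10                                       # assumed fraction
g = torch.Generator().manual_seed(0)                  # reproducible subset
keep = torch.randperm(len(full), generator=g)[: int(fraction * len(full))]
loader = DataLoader(Subset(full, keep.tolist()), batch_size=64, shuffle=True)

# "Trained from scratch" = random initialization, no pretrained weights.
```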

Invited speakers

dr. Matthias Bethge

Bethge Lab

Website

prof. Charles Leek

University of Liverpool

Website

Organizers

dr. Jan van Gemert

Delft University of Technology

Website

dr. Anton van den Hengel

University of Adelaide

Website

Attila Lengyel

Delft University of Technology

Robert-Jan Bruintjes

Delft University of Technology

Website

Osman Semih Kayhan

Delft University of Technology

Marcos Baptista Ríos

University of Alcala

Contact

Email us at vipriors-ewi AT tudelft DOT nl