Saving data by adding visual knowledge priors to Deep Learning


Call for papers

We invite researchers to submit their recent work on data-efficient computer vision.

Deadline: July 17th


Present a poster

We invite researchers to present their ECCV 2020 papers on data-efficient computer vision.

Deadline: August 2nd


VIPriors challenges

Including data-efficient action recognition, classification, detection and segmentation.

Deadline: July 10th


Workshop now fully online

Following the main conference's decision to move fully online, our workshop will also be fully online. We will announce a revised program soon. Keynotes, orals and posters will go through as planned, though in an online form. Deadlines will not change, except for the poster submission deadline, which has been pulled forward by two weeks to August 2nd to accommodate uploading the required materials to the conference website in time.

Please stay tuned for a full update on the new form of our workshop.

About the workshop

This workshop focuses on how to pre-wire deep networks with generic visual inductive priors: innate knowledge structures that allow incorporating hard-won, generic knowledge from physics, such as light reflection or geometry. Visual inductive priors are data efficient: what is built in no longer has to be learned, saving valuable training data.

Data is fueling deep learning, yet data is costly to gather and expensive to annotate. Training on massive datasets also consumes huge amounts of energy, adding to our carbon footprint. In addition, only a select few deep learning behemoths have billions of data points and thousands of expensive GPUs at their disposal. This workshop aims beyond those few very large companies, at the long tail of smaller companies and universities with smaller datasets and smaller hardware clusters. We focus on data efficiency through visual inductive priors.

Excellent recent research investigates data efficiency in deep networks by exploiting other data sources: unsupervised learning, re-using existing datasets, or synthesizing artificial training data. Not enough attention is given, however, to overcoming the data dependency by adding prior knowledge to deep nets. As a consequence, all knowledge has to be (re-)learned implicitly from data, making deep networks hard-to-understand black boxes that are susceptible to dataset bias and require huge data and compute resources. This workshop aims to remedy this gap by investigating how to flexibly pre-wire deep networks with generic visual innate knowledge structures, which allows incorporating hard-won existing knowledge from physics, such as light reflection or geometry.

The great power of deep neural networks is their incredible flexibility to learn. The direct consequence of such power is that small datasets can simply be memorized, and the network will likely not generalize to unseen data. Regularization aims to prevent such over-fitting by adding constraints to the learning process. Much work is done on regularization of internal network properties and architectures; in this workshop we focus on regularization methods based on innate priors. There is strong evidence that an innate prior benefits deep nets: adding convolution to deep networks yields the convolutional neural network (CNN), which is hugely successful and has permeated the entire field. While convolution was initially applied to images, it has since been generalized to graph networks, speech, language, 3D data, video, and more. Convolution models translation invariance in images: an object may occur anywhere in the image, so instead of learning separate parameters at each image location, convolution considers only local relations yet shares its parameters over all locations. This saves a huge number of parameters to learn, allowing a strong reduction in the number of examples to learn from. This workshop aims to further the great success of convolution by exploiting innate regularizing structures that yield a significant reduction of training data.
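
To make the parameter savings concrete, the following minimal sketch (in PyTorch; the layer sizes are illustrative, not from any particular architecture) compares a fully-connected layer with a convolution mapping the same small image to the same feature map:

```python
# Minimal sketch (assuming PyTorch; layer sizes are illustrative): the
# parameter count of a dense layer versus a convolution producing the
# same 16-channel feature map from a small RGB image.
import torch.nn as nn

H, W, C = 32, 32, 3  # height, width, channels of a small RGB image

fc = nn.Linear(H * W * C, H * W * 16)              # one weight per location pair
conv = nn.Conv2d(C, 16, kernel_size=3, padding=1)  # one 3x3 filter bank, shared

count = lambda m: sum(p.numel() for p in m.parameters())
print(f"fully connected: {count(fc):,} parameters")   # 50,348,032
print(f"convolution:     {count(conv):,} parameters")  # 448
```

This roughly 100,000-fold reduction in learnable parameters, and hence in the training examples needed to fit them, is exactly the kind of saving this workshop seeks from other innate priors.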

Workshop program

Our program will include keynotes by dr. Matthias Bethge, prof. Charles Leek and others (TBA); oral presentations by challenge winners and authors of selected papers accepted to the workshop; and poster presentations of accepted submissions and other recent, relevant work.

The revised online workshop program will be made available as soon as possible.

Call for papers

We solicit submissions that, in the broad sense, focus on data efficiency through visual inductive priors, covering but not limited to the following topics:

  • Improving data efficiency of Deep Computer Vision methods using prior knowledge about the task domain
  • Analysis on the properties of Deep Learning representations as they relate to visual inductive priors
  • Transformation-equivariant image representations, e.g. scale-equivariance, rotation-equivariance, etc. (see the equivariance sketch after this list)
  • Color invariants/constants in Deep Learning
  • Object persistence between video frames
  • Shape-based representations for Deep Learning
  • Texture/shape bias in Convolutional Neural Networks
  • Alternative compact filter bases for Deep Learning
  • Capsule Networks
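
As a pointer for the equivariance topic above, the following minimal sketch (in PyTorch) demonstrates the defining property f(T(x)) = T(f(x)) for the canonical case: a convolution with circular padding commutes with circular translation. Scale- and rotation-equivariant designs extend this same property to other transformation groups.

```python
# Minimal sketch (assuming PyTorch): convolution is translation-equivariant,
# i.e. conv(shift(x)) == shift(conv(x)). Circular padding and torch.roll are
# used so the property holds exactly, including at the image borders.
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 4, kernel_size=3, padding=1,
                 padding_mode="circular", bias=False)
x = torch.randn(1, 1, 16, 16)

shift = lambda t: torch.roll(t, shifts=2, dims=-1)  # translate 2 pixels right

print(torch.allclose(conv(shift(x)), shift(conv(x)), atol=1e-6))  # True
```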

Important dates

  • Submissions open: March 1, 2020
  • Submission deadline: July 17, 2020
  • Notification of acceptance: July 31, 2020
  • Camera-ready deadline: September 15, 2020
  • Workshop: August 23, 2020

Submission guidelines

  • Submissions must be entered in OpenReview: link.
  • Submissions must follow the ECCV 2020 submission format.
  • Optional supplementary material can be submitted through OpenReview (single .zip file, maximum of 50MB). The deadline for supplementary material is the same as the paper submission deadline, i.e. July 17, 2020. Reviewers are strongly encouraged but not required to review the supplementary material.
  • Reviewing will be double-blind.
  • Accepted papers will be published in ECCV 2020 Workshop proceedings.
  • Authors of accepted papers will be invited to present their work as a poster presentation at the workshop. Authors of a selection of papers will be invited to present their work orally at the workshop.

Posters

Authors of recent and relevant works (including papers published in the main ECCV 2020 conference track) are invited to present a poster at our workshop. Please contact the organizers if you would like to do so.

Important dates

  • Deadline: August 2, 2020

Note that this deadline has been pulled forward by two weeks to August 2nd to accommodate uploading the required materials to the conference website in time. The organizers will contact the authors of accepted posters about hosting their posters.

VIPriors Challenges

We present the “Visual Inductive Priors for Data-Efficient Computer Vision” challenges. We offer four challenges in which models are to be trained from scratch and the number of training samples is reduced to a fraction of the full set (see the sketch below).
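
As an illustration of this setup, here is a minimal sketch (assuming PyTorch and torchvision; the dataset, fraction and architecture are placeholders, not the official challenge configuration) of training from scratch on a fraction of a training set:

```python
# Minimal sketch (assuming PyTorch/torchvision): train from scratch, i.e.
# without pretrained weights, on a random fraction of the training set.
# CIFAR-10, the 10% fraction and ResNet-18 are illustrative placeholders.
import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, models, transforms

full = datasets.CIFAR10("data", train=True, download=True,
                        transform=transforms.ToTensor())
fraction = 0.1  # keep only 10% of the training samples
keep = torch.randperm(len(full))[: int(fraction * len(full))].tolist()
loader = DataLoader(Subset(full, keep), batch_size=64, shuffle=True)

model = models.resnet18(pretrained=False, num_classes=10)  # no pretraining
```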

Please see the challenges page for submission instructions and deadlines.

Invited speakers

dr. Matthias Bethge

Bethge Lab

prof. Charles Leek

University of Liverpool

Organizers

dr. Jan van Gemert

Delft University of Technology

dr. Anton van den Hengel

University of Adelaide

Attila Lengyel

Delft University of Technology

Robert-Jan Bruintjes

Delft University of Technology

Osman Semih Kayhan

Delft University of Technology

Marcos Baptista Ríos

University of Alcala

Contact

Email us at vipriors-ewi AT tudelft DOT nl