Saving data by adding visual knowledge priors to Deep Learning

Call for papers

We invite researchers to submit their recent work on data-efficient computer vision.

Deadline: July 10th

Present a poster

We invite researchers to present their ECCV 2020 papers on data-efficient computer vision.

Deadline: August 16th

VIPriors challenges

Including data-efficient action recognition, classification, detection and segmentation.

Deadline: July 3rd

About the workshop

This workshop focuses on how to pre-wire deep networks with generic visual inductive knowledge structures, which allows incorporating hard-won existing knowledge from physics, such as light reflection or geometry. Visual inductive priors are data efficient: what is built in no longer has to be learned, saving valuable training data.

Data is fueling deep learning, yet data is costly to gather and expensive to annotate. Training on massive datasets also consumes huge amounts of energy, adding to our carbon footprint. Moreover, only a select few deep learning behemoths have billions of data points and thousands of expensive GPUs at their disposal. This workshop looks beyond these very large companies to the long tail of smaller companies and universities with smaller datasets and smaller hardware clusters. We focus on data efficiency through visual inductive priors.

Excellent recent research investigates data efficiency in deep networks by exploiting other data sources, such as unsupervised learning, re-using existing datasets, or synthesizing artificial training data. However, not enough attention is given to overcoming the data dependency by adding prior knowledge to deep nets. As a consequence, all knowledge has to be (re-)learned implicitly from data, making deep networks hard-to-understand black boxes that are susceptible to dataset bias and require huge data and compute resources. This workshop aims to remedy this gap by investigating how to flexibly pre-wire deep networks with generic visual innate knowledge structures, which allows incorporating hard-won existing knowledge from physics, such as light reflection or geometry.

The great power of deep neural networks is their incredible flexibility to learn. The direct consequence of such power is that small datasets can simply be memorized, and the network will likely not generalize to unseen data. Regularization aims to prevent such over-fitting by adding constraints to the learning process. Much work has been done on regularization of internal network properties and architectures; in this workshop we focus on regularization methods based on innate priors. There is strong evidence that an innate prior benefits deep nets: adding convolution to deep networks yields a convolutional neural network (CNN), which is hugely successful and has permeated the entire field. While convolution was initially applied to images, it has since been generalized to graph networks, speech, language, 3D data, video, etc. Convolution models translation invariance in images: an object may occur anywhere in the image, so instead of learning separate parameters at each image location, convolution considers only local relations and shares its parameters over all locations. This saves a huge number of parameters to learn, allowing a strong reduction in the number of examples to learn from, as illustrated in the sketch below. This workshop aims to build on the great success of convolution by exploiting other innate regularizing structures that yield a significant reduction of training data.
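To make the parameter-saving argument concrete, here is a minimal sketch (assuming PyTorch; the layer sizes are illustrative, not taken from any particular model) comparing a fully connected layer to a 3x3 convolution, where both map a 32x32 RGB input to 64 output channels per spatial location:

```python
# Minimal sketch (PyTorch assumed): the convolutional prior as a parameter saver.
# Both layers map a 32x32 RGB input to 64 output channels at every location,
# but the fully connected layer learns a separate weight for every
# (input pixel, output unit) pair, while the convolution shares one small
# 3x3 filter bank across all image locations.
import torch.nn as nn

in_ch, out_ch, h, w = 3, 64, 32, 32

fc = nn.Linear(in_ch * h * w, out_ch * h * w)               # no spatial prior
conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)   # translation prior

def n_params(module):
    return sum(p.numel() for p in module.parameters())

print(f"fully connected: {n_params(fc):,} parameters")   # roughly 201 million
print(f"3x3 convolution: {n_params(conv):,} parameters")  # 1,792
```

Fewer free parameters need fewer examples to pin them down, which is exactly the data saving that the convolutional prior provides.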

Workshop program

Time            Event                           Details
9:00 - 9:10     Introduction                    Speaker: dr. Jan van Gemert
9:10 - 9:55     Keynote 1 (title TBD)           Speaker: dr. Matthias Bethge
9:55 - 11:15    Oral presentations              -
11:15 - 12:00   Coffee break & poster session   -
12:00 - 12:45   Keynote 2                       Speaker: TBD
12:45 - 13:00   Awards ceremony & closing       -

Call for papers

We solicit submissions that, in a broad sense, focus on data efficiency through visual inductive priors, covering but not limited to the following topics:

  • Improving data efficiency of Deep Computer Vision methods using prior knowledge about the task domain
  • Analysis on the properties of Deep Learning representations as they relate to visual inductive priors
  • Transformation-equivariant image representations, e.g. scale-equivariance, rotation-equivariance, etc.
  • Color invariants/constants in Deep Learning
  • Object persistence between video frames
  • Shape-based representations for Deep Learning
  • Texture/shape bias in Convolutional Neural Networks
  • Alternative compact filter bases for Deep Learning
  • Capsule Networks

Important dates

  • Submissions open: March 1, 2020
  • Submission deadline: July 10, 2020
  • Notification of acceptance: July 24, 2020
  • Camera-ready deadline: September 15, 2020
  • Workshop: August 23, 2020

Submission guidelines

  • Submissions must be made through OpenReview: link.
  • Submissions must follow the ECCV 2020 submission format.
  • Reviewing will be double-blind.
  • Accepted papers will be published in ECCV 2020 Workshop proceedings.
  • Authors of accepted papers will be invited to present their work as a poster at the workshop. Authors of a selection of accepted papers will additionally be invited to present their work orally.

Posters

Authors publishing work in the main ECCV 2020 conference paper track are invited to present a poster of their work at our workshop. Please contact the organizers if you would like to present your ECCV 2020 paper at our workshop.

Important dates

  • Deadline: August 16, 2020

VIPriors Challenges

We present the “Visual Inductive Priors for Data-Efficient Computer Vision” challenges. We offer four challenges in which models are to be trained from scratch, with the number of training samples reduced to a fraction of the full set.

Please see the challenges page for submission instructions and deadlines.

Invited speakers

dr. Matthias Bethge

Bethge Lab

Website

More speakers are to be announced.

Organizers

dr. Jan van Gemert

Delft University of Technology

Website

dr. Anton van den Hengel

University of Adelaide

Website

Attila Lengyel

Delft University of Technology

Robert-Jan Bruintjes

Delft University of Technology

Website

Osman Semih Kayhan

Delft University of Technology

Marcos Baptista Rios

University of Alcala