We solicit submissions that broadly focus on achieving data efficiency by incorporating prior knowledge of the visual domain into network design. This includes the following topics:
- Pre-wired invariance/equivariance to symmetries, such as translation, rotation, scaling, etc.
- Parameter sharing for data efficiency
- (Meta-)learning symmetries from data
- Unsupervised learning through visual priors, e.g. contrastive learning
- Color invariants/constants in Deep Learning
- Data augmentation
- Human vision as an inspiration for data-efficient vision algorithms, e.g. reducing texture bias in Convolutional Neural Networks, modeling network operations after human vision, etc.
- Alternative data-efficient operators for Deep Learning inspired by visual inductive priors, e.g. alternative compact filter bases, Capsule Networks, etc.
Please note that this list is not exhaustive! We strongly encourage novel approaches to data-efficient methods using visual prior knowledge.
All works presented orally will also be presented as a poster.
1. LSD-C: Linearly Separable Deep Clusters [Poster]
Sylvestre-Alvise Rebuffi, Sebastien Ehrhardt, Kai Han, Andrea Vedaldi, Andrew Zisserman
2. Multimodal Continuous Visual Attention Mechanisms [Poster]
António Farinhas, Andre Martins, Pedro M. Q. Aguiar
3. Self-supervised Visual Attribute Learning for Fashion Compatibility [Poster]
Donghyun Kim, Kuniaki Saito, Samarth Mishra, Stan Sclaroff, Kate Saenko, Bryan A. Plummer
4. Few-shot Learning with Online Self-Distillation [Poster]
Yue Wang, Sihan Liu
5. Tune It or Don’t Use It: Benchmarking Data-Efficient Image Classification [Poster]
Lorenzo Brigato, Björn Barz, Luca Iocchi, Joachim Denzler
6. Relational Prior for Multi-Object Tracking [Poster]
Artem Moskalev, Ivan Sosnovik, Arnold W.M. Smeulders
7. Predictive Coding with Topographic Variational Autoencoders [Poster]
T. Anderson Keller, Max Welling
8. How to Transform Kernels for Scale-Convolutions [Poster]
Ivan Sosnovik, Artem Moskalev, Arnold W.M. Smeulders
9. ScatSimCLR: self-supervised contrastive learning with pretext task regularization for small-scale datasets [Poster]
Vitaliy Kinakh, Slava Voloshynovskiy, Olga Taran
10. Deep Manifold Prior [Poster]
Matheus Gadelha, Rui Wang, Subhransu Maji
External poster track
11. Background Invariant Classification by Reducing Texture Bias in CNNs [Poster]
Maliha Arif, Calvin Yong, Abhijit Mahalanobis
12. Data augmentation in Bayesian neural networks and the cold posterior effect [Poster]
Seth Nabarro, Stoil Ganev, Adrià Garriga-Alonso, Vincent Fortuin, Mark van der Wilk, Laurence Aitchison
Submissions are solicited through OpenReview. Reviewing will be double-blind. There is no discussion/rebuttal period. The top accepted papers will be invited to present their work orally during the workshop.
Accepted papers will be published in ICCV 2021 Workshop proceedings. Submissions must follow the ICCV 2021 submission format. Optional supplementary material can be submitted through OpenReview (single .zip file, maximum of 50MB). The deadline for supplementary material is the same as the paper submission deadline.
(ICCV paper notifications: July 22nd, 2021)
Submission deadline: July 25th, 2021, 23:59 GMT
Author notifications: August 7th, 2021
CMT submission deadline: August 10th, 2021, 23:59 GMT
Camera-ready deadline: August 17th, 2021, 23:59 GMT
Please note: we will also require authors of accepted papers to submit their paper on CMT. We learned too late that ICCV requires us to use CMT instead of OpenReview due to proceedings copyright requirements. Please be prepared to submit your accepted paper (not the camera-ready version) to CMT as soon as possible after notification. You will then receive instructions on preparing the camera-ready version through CMT. Please find our CMT site here: https://cmt3.research.microsoft.com/VIPriors2021.
Call for posters
Authors of recent, relevant works (including works accepted to the main ICCV 2021 conference paper track) are invited to present a poster of their work at our workshop. Please apply by sending an email to email@example.com before August 14th, 2021.