Grounding Continuous Representations in Geometry
Equivariant Neural Fields

Summary
We introduce Equivariant Neural Fields (ENFs), a new class of Conditional Neural Fields that ground continuous representations in geometry. A signal $f$ is represented by a localized set of tuples $z:=\{(p_i, \mathbf{c}_i)\}_{i=1}^N$ in a latent space, each consisting of a pose $p_i$ and a context vector $\mathbf{c}_i$. The latent space can be equipped with a group action, making the representation equivariant to a chosen group of transformations, a desirable property in many learning tasks.
We show that ENF representations, trained in a fully self-supervised manner, are geometrically interpretable and editable. We also demonstrate their effectiveness as a representation in downstream tasks, improving image classification over existing methods.
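To make the idea of a geometry-grounded latent concrete, here is a minimal sketch of a pose-and-context latent with an SE(2) group action. This is an illustrative toy, not the paper's actual architecture: the field below is a hypothetical Gaussian-bump readout that depends only on the distances between the query point and the latent poses, which makes it equivariant by construction.

```python
import numpy as np

def act(theta, t, points):
    """Apply an SE(2) element (rotation theta, translation t) to a set of 2D points.
    Acting on the latent means transforming its poses; contexts are left unchanged."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ R.T + t

def field(x, poses, contexts):
    """Toy conditional field: each latent tuple (p_i, c_i) contributes a Gaussian
    bump weighted by its context. The output depends only on ||x - p_i||,
    so transforming x and the poses together leaves it invariant."""
    d2 = np.sum((x[None, :] - poses) ** 2, axis=-1)  # squared distances, shape (N,)
    return np.sum(contexts * np.exp(-d2))

rng = np.random.default_rng(0)
poses = rng.normal(size=(4, 2))    # N poses p_i in R^2
contexts = rng.normal(size=(4,))   # N (scalar, for simplicity) contexts c_i
x = np.array([0.3, -0.7])          # a query coordinate

theta, t = 0.9, np.array([1.0, -2.0])
gx = act(theta, t, x[None, :])[0]  # transform the query point
gz = act(theta, t, poses)          # transform the latent poses

# Equivariance: evaluating the transformed latent at the transformed coordinate
# gives the same value as the original latent at the original coordinate.
assert np.isclose(field(x, poses, contexts), field(gx, gz, contexts))
```

Because the latent carries explicit poses, editing the representation (moving, rotating, or recombining tuples) has a direct geometric meaning, which is what makes ENF latents interpretable and editable.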
Resources
If you would like to learn more about ENFs, we have compiled a list of resources that you might find useful.
Notebook: Implementing ENF from scratch for classification
enf-min-jax: A minimal implementation easily extensible for your own research
Applications of ENFs:
Any other questions, comments, or criticisms? Feel free to reach out to us via email (d.r.wessels@uva.nl, d.m.knigge@uva.nl) or on Twitter (@dafidofff, @davidmknigge). We're always happy to chat about equivariance, neural fields, or anything else that piques your interest.
Citing
If you found this work useful and would like to reference it, please use the following citation:
@article{wessels2024grounding,
  title={Grounding Continuous Representations in Geometry: Equivariant Neural Fields},
  author={Wessels, David R and Knigge, David M and Papa, Samuele and Valperga, Riccardo and Vadgama, Sharvaree and Gavves, Efstratios and Bekkers, Erik J},
  journal={arXiv preprint arXiv:2406.05753},
  year={2024}
}