Equivariance and algebraic priors in neural networks
Date:
Seminar, University of Cambridge
‘Sandwich’ seminar series at CLASH (Cambridge Logic And Semantics Hub)
Abstract:
Neural networks are “black boxes” that progressively transform an input into vector representations that somehow preserve the main semantic information of the data. In this talk we will discuss how equivariance is used in neural network design as a principled way to introduce “inductive biases” (beliefs we have about the problem at hand). Then, we will look at approximate equivariance methods and how they interact with optimisation. Finally, I will present recent work that uses these concepts to study the algebraic structure of neural network representations.
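
For readers unfamiliar with the central notion, a layer f is equivariant to a group action g when f(g·x) = g·f(x): transforming the input and then applying the layer gives the same result as applying the layer and then transforming the output. Below is a minimal sketch (not from the talk; all function names are illustrative) checking this property numerically for the classic example, a circular 1-D convolution and cyclic shifts.

```python
import numpy as np

def conv1d(x, w):
    # Circular 1-D cross-correlation of signal x with kernel w.
    n, k = len(x), len(w)
    return np.array([sum(w[j] * x[(i + j) % n] for j in range(k))
                     for i in range(n)])

def shift(x, s):
    # Cyclic shift by s positions: the action of the group Z_n on the signal.
    return np.roll(x, s)

x = np.random.randn(8)   # random signal
w = np.random.randn(3)   # random convolution kernel
s = 3                    # shift amount

# Equivariance: convolving the shifted input equals shifting the
# convolved output, i.e. conv(shift(x)) == shift(conv(x)).
lhs = conv1d(shift(x, s), w)
rhs = shift(conv1d(x, w), s)
assert np.allclose(lhs, rhs)
```

The same identity, with cyclic shifts replaced by other group actions, is what equivariant architectures build in by construction, and what approximate-equivariance methods enforce only softly.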
