Dino Sejdinovic (University of Oxford, UK)
Kernel embeddings of distributions, and the probability metric they induce, the Maximum Mean Discrepancy (MMD), are useful tools for fully nonparametric hypothesis testing and for learning on distributional inputs, i.e., settings where labels are only observed at an aggregate level. I will give an overview of this framework and describe the use of large-scale approximations to kernel embeddings in two contexts: Bayesian approaches to learning on distributions, and distributional covariate shift, e.g., where measurement noise on the training inputs differs from that on the test inputs.
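As a concrete illustration of the abstract's main ingredients, the following is a minimal NumPy sketch (not the speaker's implementation) of a two-sample MMD estimate computed via random Fourier features, one standard large-scale approximation to kernel embeddings. The RBF kernel, the bandwidth value, and the names rff_features and mmd_rff are illustrative assumptions.

```python
import numpy as np

def rff_features(X, W, b):
    # Map samples to random Fourier features approximating an RBF kernel:
    # phi(x) = sqrt(2/D) * cos(W^T x + b), so phi(x)^T phi(y) ~ k(x, y).
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def mmd_rff(X, Y, sigma=1.0, n_features=500, seed=0):
    """Approximate MMD between samples X and Y (arrays of shape (n, d))
    as the distance between their empirical kernel mean embeddings,
    represented in a random Fourier feature space."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Spectral sampling for the RBF kernel with bandwidth sigma.
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    # Empirical mean embeddings: average feature maps over each sample.
    mu_X = rff_features(X, W, b).mean(axis=0)
    mu_Y = rff_features(Y, W, b).mean(axis=0)
    return np.linalg.norm(mu_X - mu_Y)

# Usage: samples from different distributions should yield a larger MMD
# than two samples drawn from the same distribution.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(1000, 2))
Y = rng.normal(0.5, 1.0, size=(1000, 2))  # shifted mean
Z = rng.normal(0.0, 1.0, size=(1000, 2))  # same distribution as X
print(mmd_rff(X, Y))  # distributions differ -> noticeably larger value
print(mmd_rff(X, Z))  # same distribution -> close to zero
```

Because the embeddings live in a finite-dimensional feature space, this estimate costs O(nD) rather than the O(n^2) of the exact kernel-matrix computation, which is what makes such approximations attractive at scale.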