Jin Xu

I am a second-year Ph.D. student in Statistical Machine Learning at the University of Oxford, supervised by Prof. Yee Whye Teh. Before joining the OxCSML group at Oxford, I received a B.Sc. in Mathematics and Applied Mathematics from Fudan University and an M.Sc. in Artificial Intelligence from the University of Edinburgh. I am currently interested in deep generative models, meta-learning, and representation learning.

GitHub / Google Scholar / Twitter

Publications

MetaFun: Meta-Learning with Iterative Functional Updates
Jin Xu, Jean-Francois Ton, Hyunjik Kim, Adam R. Kosiorek, Yee Whye Teh
International Conference on Machine Learning (ICML) 2020
[arxiv] [bibtex] [code] [slides]

We develop a functional encoder-decoder approach to supervised meta-learning, where labeled data is encoded into an infinite-dimensional functional representation rather than a finite-dimensional one. Rather than producing the representation directly, we learn a neural update rule resembling functional gradient descent that iteratively improves the representation. The final representation conditions the decoder to make predictions on unlabeled data. Our approach is the first to demonstrate that encoder-decoder-style meta-learning methods such as conditional neural processes can succeed on large-scale few-shot classification benchmarks like miniImageNet and tieredImageNet, where it achieves state-of-the-art performance.
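To make the iterative-update idea concrete, here is a minimal NumPy sketch of functional gradient descent on a task's context set, where a squared-error "local update" and a fixed RBF kernel stand in for the learned neural components. All function names, shapes, and hyperparameters below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0):
    # Pairwise RBF kernel between rows of x1 (n1, d) and x2 (n2, d).
    sq_dists = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * lengthscale ** 2))

def metafun_sketch(x_context, y_context, x_target, num_steps=5, lr=0.1):
    # The functional representation r is tracked by its values at the
    # context and target inputs; it starts as the zero function.
    r_context = np.zeros_like(y_context)
    r_target = np.zeros((len(x_target), y_context.shape[1]))
    k_cc = rbf_kernel(x_context, x_context)
    k_tc = rbf_kernel(x_target, x_context)
    for _ in range(num_steps):
        # Local update at each context point. MetaFun learns this as a
        # neural network; the gradient of squared error stands in here.
        u = r_context - y_context
        # Kernel-weighted pooling propagates the local updates to every
        # evaluation point -- one functional gradient step.
        r_context = r_context - lr * k_cc @ u
        r_target = r_target - lr * k_tc @ u
    # In the full model, r_target would condition a decoder that
    # predicts at the target inputs.
    return r_target

# Toy usage: regress y = sin(x) from ten context points.
x_c = np.random.randn(10, 1)
y_c = np.sin(x_c)
x_t = np.linspace(-3.0, 3.0, 50)[:, None]
print(metafun_sketch(x_c, y_c, x_t).shape)  # (50, 1)
```

Each iteration applies local updates at the context points and pools them with the kernel to update the representation wherever it is evaluated; MetaFun replaces both the local update and the pooling with learned components (e.g. attention).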


Preprints

Controllable Semantic Image Inpainting
Jin Xu, Yee Whye Teh
arXiv:1806.05953, 2018
[arxiv] [bibtex] [code]

We develop a method for user-controllable semantic image inpainting: given an arbitrary set of observed pixels, the unobserved pixels can be imputed across a user-controllable range of possibilities, each of which is semantically coherent and locally consistent with the observed pixels. We achieve this with a deep generative model that brings together an encoder that can handle an arbitrary set of observed pixels, latent variables trained to represent disentangled factors of variation, and a bidirectional PixelCNN model. We demonstrate experimentally that our method generates plausible inpainting results that match the user-specified semantics while remaining coherent with the observed pixels, and we justify our choices of architecture and training regime through further experiments.
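The data flow of such a pipeline can be pictured with a short PyTorch sketch. The module below is purely illustrative: the class name, layer sizes, and Gaussian latent parameterisation are assumptions made for the sketch, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class PartialObservationEncoder(nn.Module):
    """Encodes an arbitrary set of observed pixels into latent variables.

    Concatenating the binary observation mask lets the encoder tell
    observed pixels apart from missing ones.
    """
    def __init__(self, channels=3, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)

    def forward(self, image, mask):
        # image: (B, C, H, W); mask: (B, 1, H, W) with 1 = observed.
        h = self.net(torch.cat([image * mask, mask], dim=1))
        return self.to_mu(h), self.to_logvar(h)

# Usage: infer a posterior over latents from the observed pixels, edit
# individual (disentangled) latent dimensions to steer the semantics,
# then condition a pixel-level decoder (a bidirectional PixelCNN in the
# paper) on the sampled z together with the observed pixels.
encoder = PartialObservationEncoder()
image = torch.randn(1, 3, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
mu, logvar = encoder(image, mask)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
```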