Workshop on Machine Learning and Quantum Computation

June 26th, 2019 Marseille

Aix-Marseille University, LIS

Topic

Interest in fast and efficient computation models has grown rapidly in the machine learning community over the past decade. This is of paramount importance from both theoretical and practical points of view, since it greatly improves the cost/benefit trade-off of learning, especially in this era of 'Big Data'. By the same token, several major areas of computational science, such as quantum computing, have enjoyed significant growth thanks to applications arising in machine learning.

The goal of this workshop is to bring together researchers interested in all aspects of quantum computation models and their use in machine learning. It promotes the cross-fertilizing exchange of ideas and experiences among researchers from the communities of machine learning and quantum computing interested in the foundations and applications of quantum computation models and machine learning.

The workshop will consist of four invited talks, intended to cover a range of computational models in machine learning and quantum computing.

Schedule


14:00 - 14:10 Opening remarks Organizers
14:10 - 15:00 Invited Talk: Towards Deeper Understanding of Deep Learning Stéphane Ayache
15:00 - 15:30 Invited Talk: Deep Networks with Adaptive Nyström Approximation Luc Giffon
15:30 - 16:00 Coffee Break
16:00 - 16:50 Invited Talk: Quantum tensor networks: from condensed matter to machine learning Benoît Grémaud
16:50 - 17:40 Invited Talk: Quantum Support Vector Machines Anupam Prakash

Invited Speakers

  • Stéphane Ayache

    Aix-Marseille Université (AMU)

    Towards Deeper Understanding of Deep Learning


    Slides

    In recent years, major advances have been made in machine learning thanks to a revival of neural networks. Recent progress in deep learning has given rise to technological revolutions in many fields, transforming large parts of our society and everyday life. Deep learning models are indeed very powerful, but we still need a better understanding of such models and architectures to face recent societal challenges. After a broad introduction to neural network concepts, this talk digs deeper into the understanding of convolution layers through simple tools from linear algebra. We then introduce weighted convolutions and show a few preliminary experiments on a transfer learning task. Deep learning is not as easy as three lines of code: understanding it well will lead to better architectures and explainable models.
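    The abstract does not spell out which linear-algebra tools the talk uses; as a minimal illustration of viewing convolution layers through linear algebra, the sketch below writes a 1-D 'valid' convolution (cross-correlation) as multiplication by a banded Toeplitz matrix. The signal and kernel values are arbitrary examples.

    ```python
    import numpy as np

    def conv_matrix(kernel, n):
        # Build the (n - k + 1) x n banded matrix whose product with a
        # length-n signal equals a 'valid' 1-D cross-correlation with kernel.
        k = len(kernel)
        M = np.zeros((n - k + 1, n))
        for i in range(n - k + 1):
            M[i, i:i + k] = kernel  # each row is the kernel, shifted by one
        return M

    x = np.array([1.0, 2.0, 0.0, -1.0, 3.0])   # example signal
    w = np.array([1.0, -1.0])                   # example filter
    M = conv_matrix(w, len(x))
    # The matrix form agrees with NumPy's sliding cross-correlation.
    assert np.allclose(M @ x, np.correlate(x, w, mode="valid"))
    ```

    Seeing the layer as an explicit (structured, weight-shared) matrix is what lets standard linear-algebra tools be applied to it.
    
    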

  • Luc Giffon

    Aix-Marseille Université (AMU)

    Deep Networks with Adaptive Nyström Approximation


    Slides

    Recent work has focused on combining kernel methods and deep learning to exploit the best of both approaches. Here, we introduce a new neural network architecture in which we replace the top dense layers of standard convolutional architectures with an approximation of a kernel function based on the Nyström approximation. Our approach is simple and highly flexible: it is compatible with any kernel function and allows exploiting multiple kernels. We show that our architecture achieves the same performance as standard architectures on datasets such as SVHN and CIFAR100. One benefit of the method lies in its limited number of learnable parameters, which makes it particularly suited to small training sets, e.g. from 5 to 20 samples per class.
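    The talk's architecture itself is not reproduced here; the sketch below only shows the Nyström feature map it builds on, assuming an RBF kernel and randomly chosen landmark points (both illustrative choices, not taken from the talk). The map phi satisfies phi(x)·phi(y) ≈ k(x, y), so a linear layer on phi approximates a kernel machine.

    ```python
    import numpy as np

    def rbf_kernel(X, Y, gamma=0.1):
        # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
        sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def nystrom_features(X, landmarks, gamma=0.1):
        # Nystrom feature map: phi(X) = k(X, landmarks) @ W^{-1/2},
        # where W = k(landmarks, landmarks).
        W = rbf_kernel(landmarks, landmarks, gamma)   # m x m
        C = rbf_kernel(X, landmarks, gamma)           # n x m
        vals, vecs = np.linalg.eigh(W)                # W is symmetric PSD
        vals = np.maximum(vals, 1e-12)                # guard tiny eigenvalues
        W_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
        return C @ W_inv_sqrt                         # n x m features

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))
    landmarks = X[rng.choice(100, size=20, replace=False)]
    phi = nystrom_features(X, landmarks)
    err = np.abs(phi @ phi.T - rbf_kernel(X, X)).max()  # approximation error
    ```

    In the architecture described in the abstract, such features would replace the top dense layers, with only the subsequent linear classifier (and possibly the landmarks) being learned.
    
    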

  • Benoît Grémaud

    Centre de Physique Théorique (CPT)

    Quantum tensor networks: from condensed matter to machine learning


    First, I will review some of the main aspects of quantum many-body systems (entanglement and the area law), emphasizing the origin of the complexity of computing the properties of these systems. Then, I will explain how the numerical tools developed within the tensor network framework, such as DMRG (density matrix renormalization group) and MPS (matrix product states), can help overcome these difficulties. Finally, I will briefly review some possible applications of tensor networks in machine learning.
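    As a minimal, self-contained illustration of the MPS idea mentioned above (not code from the talk), the sketch below decomposes a 6-qubit state vector into a matrix product state by repeated SVDs, truncating each bond to a maximum dimension, then contracts it back.

    ```python
    import numpy as np

    def to_mps(psi, n_sites, chi_max=16):
        # Split a 2**n_sites state vector into n_sites rank-3 tensors
        # (left bond, physical index, right bond) via sequential SVDs,
        # keeping at most chi_max singular values per bond.
        tensors = []
        rest = psi.reshape(1, -1)
        for _ in range(n_sites - 1):
            chi_left = rest.shape[0]
            rest = rest.reshape(chi_left * 2, -1)
            U, S, Vh = np.linalg.svd(rest, full_matrices=False)
            chi = min(chi_max, len(S))                      # bond truncation
            tensors.append(U[:, :chi].reshape(chi_left, 2, chi))
            rest = S[:chi, None] * Vh[:chi]
        tensors.append(rest.reshape(rest.shape[0], 2, 1))
        return tensors

    def from_mps(tensors):
        # Contract the MPS chain back into a full state vector.
        out = tensors[0]
        for t in tensors[1:]:
            out = np.tensordot(out, t, axes=([-1], [0]))
        return out.reshape(-1)

    rng = np.random.default_rng(1)
    n = 6
    psi = rng.normal(size=2 ** n)
    psi /= np.linalg.norm(psi)
    # chi_max = 8 = 2**(n/2) suffices for an exact MPS of 6 sites.
    mps = to_mps(psi, n, chi_max=8)
    psi_back = from_mps(mps)
    ```

    The point of the area law is that physically relevant states need only a small chi_max, so the MPS stores exponentially large vectors with linearly many parameters; the same compression is what makes tensor networks attractive in machine learning.
    
    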

  • Anupam Prakash

    Institut de recherche en informatique fondamentale (IRIF)

    Quantum Support Vector Machines


    Slides

    We present a quantum algorithm for support vector machines (SVMs) that can potentially achieve significant speedups. The algorithm is based on a quantum interior point method for second order cone programs (SOCPs), a class of structured convex optimization problems which generalize quadratic programs. I will also introduce techniques from quantum linear algebra and machine learning that are required for this algorithm.
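    The quantum interior point method itself cannot be sketched classically in a few lines; for context only, the code below sets up the classical soft-margin SVM problem that such solvers target, trained here with plain subgradient descent rather than an interior point method. The toy data and hyperparameters are illustrative assumptions, not taken from the talk.

    ```python
    import numpy as np

    # Toy two-class data: two Gaussian blobs, labels in {-1, +1}.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, size=(50, 2)),
                   rng.normal(+2, 1, size=(50, 2))])
    y = np.concatenate([-np.ones(50), np.ones(50)])

    def train_svm(X, y, lam=0.01, eta=0.1, epochs=500):
        # Soft-margin linear SVM: minimize lam/2 ||w||^2 + mean hinge loss,
        # here with subgradient descent (not an interior point method).
        n, d = X.shape
        w, b = np.zeros(d), 0.0
        for _ in range(epochs):
            viol = y * (X @ w + b) < 1             # margin violators
            grad_w = lam * w - (y[viol] @ X[viol]) / n
            grad_b = -y[viol].sum() / n
            w -= eta * grad_w
            b -= eta * grad_b
        return w, b

    w, b = train_svm(X, y)
    acc = np.mean(np.sign(X @ w + b) == y)
    ```

    Recasting this optimization problem as a second order cone program is what allows interior point methods, classical or quantum, to be applied to it.
    
    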

Registration

Registration is free but mandatory.

Venue

The workshop will be hosted in Marseille at FRUMAM.

F.R.U.M.A.M. - Fr 2291 - CNRS
Aix Marseille Université - CS 80249
3, place Victor Hugo - case 39
13331 MARSEILLE Cedex 03

Workshop Organizers

  • Giuseppe Di Molfetta

    University of Aix-Marseille

  • Hachem Kadri

    University of Aix-Marseille