Center for Multimodal Neuroimaging

Yoshua Bengio

University of Montreal

Yoshua Bengio (PhD in computer science, McGill University, 1991; post-doctoral fellowships at MIT and Bell Labs; professor of computer science at Université de Montréal since 1993) has authored three books and over 300 publications (h-index over 100), mostly in deep learning. He holds the Canada Research Chair in Statistical Learning Algorithms, is an Officer of the Order of Canada, and is the recipient of the 2017 Marie-Victorin Québec Prize. He is a CIFAR Senior Fellow and co-directs its Learning in Machines and Brains program, and he heads the Montreal Institute for Learning Algorithms (MILA), currently the largest academic research group on deep learning. He serves on the NIPS Foundation board (previously as program chair and general chair) and co-created the ICLR conference. His goal is to uncover the principles of intelligence centered on learning, as well as to contribute to the development of AI for the benefit of all.


AI and Deep Learning

Deep learning arose around 2006 as a renewal of neural network research that allowed such models to be trained with more layers. It is leading a renewal of AI both inside and outside academia, with billions of dollars being invested and economic impact expected in the trillions by 2030. Theoretical investigations have shown that functions obtained as deep compositions of simpler functions (which includes both deep and recurrent nets) can express highly varying functions (with many ups and downs, and many distinguishable input regions) much more efficiently, i.e., with fewer parameters, than shallow architectures. Empirical work in a variety of applications has demonstrated that, when well trained, such deep architectures can be highly successful, breaking through the previous state of the art in many areas, including speech recognition, object recognition, game playing, language modeling, machine translation and transfer learning. In terms of social impact, the most interesting applications probably lie in the medical domain, starting with medical image analysis but expanding to many other areas. Finally, we summarize some of the recent work aimed at bridging the remaining gap between deep learning and neuroscience, including approaches to implementing functional equivalents of backpropagation in a more biologically plausible way, as well as ongoing work connecting language, cognition, reinforcement learning and the learning of abstract representations.
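
To make the depth-efficiency claim concrete, here is a minimal sketch (an illustration added for this write-up, not material from the talk) of the classic iterated tent-map construction: composing a tiny two-unit piecewise-linear "layer" with itself k times yields a function with 2^k linear pieces using only O(k) parameters, whereas a shallow network would need on the order of 2^k units to match it. The helper names `tent` and `deep_composition` are hypothetical.

```python
import numpy as np

def tent(x):
    # One "layer": a tent map built from two ReLU units,
    # tent(x) = 2*relu(x) - 4*relu(x - 0.5), mapping [0, 1] onto [0, 1].
    return 2 * np.maximum(x, 0) - 4 * np.maximum(x - 0.5, 0)

def deep_composition(x, depth):
    # Compose the tent map `depth` times: O(depth) parameters,
    # yet the result oscillates with 2**depth linear pieces on [0, 1].
    for _ in range(depth):
        x = tent(x)
    return x

xs = np.linspace(0.0, 1.0, 10001)
for depth in (1, 3, 6):
    ys = deep_composition(xs, depth)
    # Estimate the number of linear pieces by counting slope sign changes.
    slopes = np.sign(np.diff(ys))
    pieces = 1 + np.count_nonzero(np.diff(slopes))
    print(f"depth={depth}: ~{pieces} linear pieces (2**depth = {2**depth})")
```

The same parameter budget spent on a single hidden layer can only carve out a number of linear pieces roughly linear in the number of units, which is the gap that depth-separation results of this kind formalize.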