Center for Multimodal Neuroimaging

AI and Deep Learning

January 2018

Deep learning arose around 2006 as a renewal of neural network research that allowed such models to have many more layers. It is now driving a resurgence of AI both inside and outside academia, with billions of dollars being invested and an expected economic impact in the trillions by 2030. Theoretical investigations have shown that functions obtained as deep compositions of simpler functions (a class that includes both deep and recurrent nets) can express highly varying functions (with many ups and downs, and many input regions that can be distinguished) much more efficiently, i.e., with far fewer parameters, than shallow architectures can. Empirical work in a variety of applications has demonstrated that, when well trained, such deep architectures can be highly successful, remarkably breaking through the previous state of the art in many areas, including speech recognition, object recognition, game playing, language modelling, machine translation and transfer learning. In terms of social impact, the most interesting applications are probably in the medical domain, starting with medical image analysis but expanding to many other areas. Finally, we summarize some of the recent work aimed at bridging the remaining gap between deep learning and neuroscience, including approaches for implementing functional equivalents of backpropagation in a more biologically plausible way, as well as ongoing work connecting language, cognition, reinforcement learning and the learning of abstract representations.
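
A minimal sketch of the depth-efficiency intuition mentioned above, assuming a simple piecewise-linear "tent" unit as the building block (the function and helper names below are illustrative, not from the abstract): composing the same O(1)-parameter unit k times yields a function whose number of linear pieces doubles at every layer, so roughly 2^k pieces from only O(k) parameters, whereas a single-layer piecewise-linear model would need on the order of 2^k pieces, and hence parameters, to match it.

```python
import numpy as np

def tent(x):
    # A single simple "unit": piecewise-linear tent map on [0, 1],
    # expressible with a couple of ReLU-style pieces (O(1) parameters).
    return 2.0 * np.minimum(x, 1.0 - x)

def deep_composition(x, depth):
    # Compose the same simple unit `depth` times: the parameter count
    # grows linearly with depth, but the number of linear pieces
    # (ups and downs) doubles at every layer.
    for _ in range(depth):
        x = tent(x)
    return x

def count_linear_pieces(depth, n_grid=2**16):
    # Count slope sign changes on a fine grid as a proxy for the
    # number of linear pieces of the composed function.
    xs = np.linspace(0.0, 1.0, n_grid)
    ys = deep_composition(xs, depth)
    slopes = np.sign(np.diff(ys))
    return int(np.sum(slopes[1:] != slopes[:-1]) + 1)

for k in range(1, 8):
    print(f"depth {k}: ~{count_linear_pieces(k)} linear pieces "
          f"from O({k}) parameters")
```

Running this prints roughly 2, 4, 8, ..., 128 pieces as the depth goes from 1 to 7, illustrating how depth buys exponential expressivity per parameter in this constructed example.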