A Functional Reboot for Deep Learning
In this talk, I want to begin a conversation about what the essence of deep learning is and how we can best support that essence in the form of a programming interface or language. I’ll give you my own impressions, and I hope to provoke an ongoing conversation. Despite the phenomenal success of deep learning, it’s my sense that most of the choices made in the theory and practice of deep learning are nonessential and even harmful (unnecessarily complex and limited). I’ll suggest that a very small addition to a modern typed functional programming language such as Haskell yields an ideal basis for deep learning that is much simpler, more general, and more rigorous than currently popular approaches.
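As one flavor of what "a very small addition" to a typed functional language might look like, here is a minimal sketch of forward-mode automatic differentiation via dual numbers in Haskell. This is my own illustrative example, not code from the talk; the names (`D`, `deriv`) are hypothetical.

```haskell
-- A dual number pairs a value with its derivative at that value.
data D = D Double Double deriving Show

-- Making D a Num instance lets ordinary numeric code differentiate
-- itself: the chain/product rules live in the instance, not the caller.
instance Num D where
  D x x' + D y y' = D (x + y) (x' + y')
  D x x' - D y y' = D (x - y) (x' - y')
  D x x' * D y y' = D (x * y) (x' * y + x * y')
  negate (D x x') = D (negate x) (negate x')
  abs    (D x x') = D (abs x) (x' * signum x)
  signum (D x _)  = D (signum x) 0
  fromInteger n   = D (fromInteger n) 0

-- Differentiate a numeric function at a point by seeding
-- the derivative component with 1.
deriv :: (D -> D) -> Double -> Double
deriv f x = let D _ x' = f (D x 1) in x'
```

For example, `deriv (\x -> x * x + 3 * x) 2` evaluates to `7`, the derivative of x² + 3x at x = 2. Real deep-learning workloads want reverse mode and compositional linear maps rather than this toy, but the point stands: differentiation slots into the language's existing abstraction machinery instead of requiring a separate framework.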
Conal Elliott is a Distinguished Scientist at Target. Conal explores elegant and principled techniques from math and programming language theory for building fast, correct, and beautiful software, now with applications including machine learning and other large-scale optimization problems. Much of his current work is based on category theory, particularly automatic translation of Haskell programs into various categories for enhanced abilities such as automatic differentiation and for massively parallel execution on GPUs or FPGAs. Conal invented the paradigm now known as “functional reactive programming” in the early 1990s, and then pioneered compilation techniques for high-performance, high-level embedded domain-specific languages, with applications including 2D and 3D computer graphics.