
Autodiff Workshop

The future of gradient-based machine learning software and techniques, NIPS 2017

News

Video recordings of last year's workshop are now online; see this playlist or the individual links on the 2016 schedule.

Submissions for talks and posters are now open.

Abstract

Many algorithms in machine learning, computer vision, physical simulation, and other fields require the calculation of gradients and other derivatives. Manual derivation of gradients can be time-consuming and error-prone. Automatic differentiation comprises a set of techniques to calculate the derivative of a numerical computation expressed as a computer program. These techniques are commonly used in atmospheric sciences and computational fluid dynamics, and have more recently also been adopted by machine learning researchers.
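
To make the idea concrete, the sketch below illustrates forward-mode automatic differentiation with dual numbers in Python. It is a minimal illustration only, not part of the workshop material; the `Dual` class and `derivative` helper are hypothetical names rather than any particular library's API.

```python
# Minimal sketch of forward-mode automatic differentiation using dual numbers.
# Each Dual carries a value and its derivative with respect to a chosen input.
import math


class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

    def sin(self):
        # Chain rule through sin: (sin u)' = cos(u) * u'
        return Dual(math.sin(self.value), math.cos(self.value) * self.deriv)


def derivative(f, x):
    """Evaluate df/dx at x by seeding the input's derivative with 1."""
    return f(Dual(x, 1.0)).deriv


# Example: d/dx [x * sin(x) + 3x] at x = 2.0
# Analytically: sin(x) + x*cos(x) + 3
print(derivative(lambda x: x * x.sin() + 3 * x, 2.0))
```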

Practitioners across many fields have built a wide set of automatic differentiation tools, using different programming languages, computational primitives and intermediate compiler representations. Each of these choices comes with positive and negative trade-offs, in terms of their usability, flexibility and performance in specific domains.

This workshop will bring together researchers in the fields of automatic differentiation and machine learning to discuss ways in which advanced automatic differentiation frameworks and techniques can enable more advanced machine learning models, run large-scale machine learning on accelerators with better performance, and increase the usability of machine learning frameworks for practitioners. Topics for discussion will include:

Call for submissions

We are soliciting contributions demonstrating work that helps, or could help, bridge the gap between the AD community and the developers and users of ML software.

Submissions can be:

Submissions should consist of extended abstracts of 2 to 4 pages in NIPS format; they do not need to be anonymized. Please submit your abstracts at https://openreview.net/group?id=NIPS.cc/2017/Workshop/Autodiff.

Up to 4 submissions will be selected as 30-minute contributed talks (40-minute slots including questions). Depending on the number of quality submissions, others will be selected for poster presentation.

Abstracts will be accessible from this website, but no proceedings will be published; the workshop is considered non-archival.

Important dates (updated)

Tentative schedule

The workshop will take place on Saturday, December 9th, 2017.

We have two invited keynote speakers, and plan to offer four other speaking slots to workshop submissions. The day will conclude with a panel discussion, with questions focused on how the automatic differentiation and machine learning fields can collaborate and cross-pollinate ideas and research problems.

Time Activity
9:00am – 9:10am Introduction and opening remarks
9:10am – 9:50am Atılım Güneş Baydin – Beyond backprop: automatic differentiation in machine learning
9:50am – 10:30am Speaker selected from workshop submissions
10:30am – 11:00am Coffee break
11:00am – 11:40am Speaker selected from workshop submissions
11:40am – 1:40pm Poster session and lunch break
1:40pm – 2:20pm Jean Utke – 30 years of work on automatic differentiation: advanced autodiff techniques
2:20pm – 3:00pm Speaker selected from workshop submissions
3:00pm – 3:30pm Coffee break
3:30pm – 4:10pm Speaker selected from workshop submissions
4:10pm – 5:10pm Panel discussion with speakers
5:10pm End

About the speakers

TBD

About us

Alex Wiltschko (@alexbw) is a research scientist at Google Brain, focusing on building more flexible machine learning software systems, as well as on applications of machine learning to biology. Previously, he was a core developer of torch-autograd, an automatic differentiation library used for both research and production at Twitter. He completed his PhD in Neurobiology at Harvard, focusing on quantifying behavior and body language using depth cameras and nonparametric time-series modeling.

Bart van Merriënboer (@bartvm) is a PhD student at MILA (the Montreal Institute for Learning Algorithms) under the supervision of Yoshua Bengio, and a research engineer with Google Brain in Montreal. His work focuses on the application of deep learning to natural language processing and the development of machine learning tools and frameworks. He previously interned at Google Brain, Facebook AI Research, and Twitter, and contributed to Theano, Torch, torch-autograd, and Blocks/Fuel.

Pascal Lamblin (@lamblin) is a software analyst at MILA. After completing an engineering degree at École Centrale Paris, he did research under the supervision of Yoshua Bengio at Université de Montréal, and he now works on the development of Theano.

This workshop follows up on last year's Autodiff workshop. Both stem more generally from prior workshops on tooling in machine learning, such as:

However, our focus shifts from specific infrastructural and engineering challenges towards the programming abstractions that most enable machine learning.