Gradient Boosting Machines: Structural Insights and Improved Algorithms

Duke Computer Science/Mathematics Colloquium
Speaker Name
Haihao Lu
Date and Time
-
Location
LSRC D106
Notes
Lunch served at 11:45 am.
Abstract

The gradient boosting machine (GBM) is one of the most successful supervised learning algorithms, and it has been the dominant method in many data science competitions, including those on Kaggle and the KDD Cup. In spite of this practical success, there has been a large gap between practice and theoretical understanding. In this line of research, we show that GBM can be interpreted as a greedy coordinate descent method in the coefficient space and, alternatively, as a mirror descent method in the “pseudo-residual” space. Armed with this structural insight, we develop two new algorithms for classification in the context of GBM: (i) the Random-then-Greedy Gradient Boosting Machine (RtGBM), which lowers the cost per iteration and achieves improved performance in both theory and practice; and (ii) the Accelerated Gradient Boosting Machine (AGBM), which brings the computational efficiency of acceleration schemes to GBM, again in both theory and practice. Both algorithms are currently being incorporated by Google into its TensorFlow Boosted Trees software.
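As a rough illustration of the random-then-greedy idea mentioned above (a sketch only, not the speaker's implementation): at each boosting iteration, a small random subset of the candidate weak learners is drawn, and the learner that best fits the current pseudo-residuals is then selected greedily from that subset. In the sketch below, the weak learners are single-feature regression stumps, the squared loss supplies the pseudo-residuals, and the function names (fit_stump, rtgbm_sketch) and parameters (subset_size, lr) are hypothetical choices for exposition.

```python
# Minimal sketch of random-then-greedy boosting, assuming squared loss
# and single-feature regression stumps as the weak-learner class.
# All names here are illustrative, not RtGBM's actual API.
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(x, r):
    """Fit a depth-1 regression stump on one feature to residuals r."""
    best = None
    for t in np.quantile(x, np.linspace(0.1, 0.9, 9)):
        left = x <= t
        cl, cr = r[left].mean(), r[~left].mean()
        sse = ((r - np.where(left, cl, cr)) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, cl, cr)
    return best  # (fit error, threshold, left value, right value)

def rtgbm_sketch(X, y, n_rounds=100, subset_size=3, lr=0.1):
    n, d = X.shape
    pred = np.zeros(n)
    ensemble = []
    for _ in range(n_rounds):
        r = y - pred  # pseudo-residuals for the squared loss
        # Random step: draw a subset of candidate weak learners (features).
        feats = rng.choice(d, size=min(subset_size, d), replace=False)
        # Greedy step: pick the stump that best fits the pseudo-residuals.
        fits = [(j, fit_stump(X[:, j], r)) for j in feats]
        j, (sse, t, cl, cr) = min(fits, key=lambda f: f[1][0])
        pred += lr * np.where(X[:, j] <= t, cl, cr)
        ensemble.append((j, t, lr * cl, lr * cr))
    return ensemble, pred

# Tiny usage example on synthetic data
X = rng.normal(size=(200, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)
_, pred = rtgbm_sketch(X, y)
print("training MSE:", ((y - pred) ** 2).mean())
```

Sampling only subset_size of the d candidate learners before the greedy search is what lowers the per-iteration cost relative to scanning every candidate, as the abstract describes.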

Short Biography

Haihao (Sean) Lu is a fifth-year dual Ph.D. student in Operations Research and Applied Mathematics at MIT, advised by Prof. Robert Freund. Prior to joining MIT, he graduated from Shanghai Jiao Tong University with a B.S. in Applied Mathematics. He has worked as a student researcher and summer intern at Google (two summers) and IBM (one summer). His research interests lie at the intersection of optimization and machine learning, with a focus on developing both theoretical analyses and practical computational tools for challenging “big data” problems.

Host
Xiaobai Sun & Jianfeng Lu