Photo of me, with a blurry Space Needle in the background!

I recently obtained a Master's in Computer Science from Georgia Tech. I'm currently seeking full-time opportunities, so check out my resume! My interests include computer graphics, numerical methods, and machine learning. My research advisor was Jacob Abernethy.

Previously, I studied math at the University of Michigan, where I also helped with a machine learning class and reading group. Off campus, I like to bike, cook, and play piano. Maybe one day I'll make more games.

If you'd like to chat, feel free to write me at benrbray@gmail.com.

Recent Updates

Spring 2020 Teaching assistant for ISYE 6740: Computational Data Analysis with Prof. Yao Xie
Fall 2019 Teaching assistant for CS 4540: Advanced Algorithms with Prof. Jacob Abernethy
August 2019 Received MS in Computer Science (High Performance Computing) from Georgia Tech
July 2019 Attended the Discrete Optimization & Machine Learning Workshop at RIKEN AIP
Summer 2019 Internship at RIKEN AIP in Tokyo on the Approximate Bayesian Inference team
Fall 2018 Teaching assistant for CS 4540: Advanced Algorithms
May 2018 Visited the Data Science & Online Markets group at Northwestern

Projects

Selected Writing

Algorithms for Random Discrete Structures

Many applications require the random sampling of matrices with prescribed structure for modeling, statistical, or aesthetic purposes. What does it mean for a random variable to be matrix-valued? What can we say about the eigenvalues of a random matrix? How can we design algorithms to sample from a target distribution on a group or manifold? More generally, what can we say about deterministic algorithms with random inputs? Our study of random matrices will lead us to the subgroup algorithm (Diaconis 1987), which subsumes many familiar random sampling procedures.
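
The subgroup algorithm has a particularly concrete instance on the orthogonal group, and a minimal sketch is easy to write down: a Haar sample on O(k) can be built from a Haar sample on O(k-1) plus a Householder reflection sending e_1 to a uniform point on the sphere. The snippet below is only an illustration of that idea (the function name haar_orthogonal and the NumPy details are not from the notes, and sign/determinant conventions are glossed over).

```python
import numpy as np

def haar_orthogonal(n, rng=None):
    """Sketch: sample a Haar-distributed orthogonal matrix by recursively
    extending a sample on O(k-1) with a Householder reflection that maps
    e_1 to a uniform point on the unit sphere in R^k."""
    rng = np.random.default_rng() if rng is None else rng
    # base case: Haar measure on O(1) = {+1, -1}
    Q = np.array([[1.0 if rng.random() < 0.5 else -1.0]])
    for k in range(2, n + 1):
        # uniform point on the unit sphere in R^k
        v = rng.standard_normal(k)
        v /= np.linalg.norm(v)
        # Householder reflection H with H @ e_1 = v
        e1 = np.zeros(k)
        e1[0] = 1.0
        u = e1 - v
        if np.linalg.norm(u) < 1e-12:
            H = np.eye(k)  # v is (numerically) e_1 already
        else:
            u /= np.linalg.norm(u)
            H = np.eye(k) - 2.0 * np.outer(u, u)
        # embed the O(k-1) sample and multiply by the coset representative
        embedded = np.eye(k)
        embedded[1:, 1:] = Q
        Q = H @ embedded
    return Q

# quick sanity check: the result should be orthogonal
Q = haar_orthogonal(4)
assert np.allclose(Q @ Q.T, np.eye(4))
```

Each pass through the loop multiplies by a uniformly chosen representative of a coset of O(k-1) in O(k), which is precisely the structure the subgroup algorithm exploits.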

Expectation Maximization

These notes provide a theoretical treatment of Expectation-Maximization, an iterative parameter estimation algorithm used to find local maxima of the likelihood function in the presence of hidden variables. Introductory textbooks (MLAPP, PRML) typically state the algorithm without explanation and expect students to work blindly through derivations. We find this approach unsatisfying and instead tackle the theory head-on, followed by plenty of examples. Following Neal & Hinton (1998), we view expectation-maximization as coordinate ascent on the Evidence Lower Bound. This perspective takes much of the mystery out of the algorithm and lets us easily derive variants like Hard EM and Variational Inference.
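
For a taste of that perspective, the whole argument rests on one decomposition of the log-likelihood (generic notation: x observed, z hidden, q any distribution over z):

```latex
\log p(x;\theta)
  \;=\; \underbrace{\mathbb{E}_{q(z)}\!\left[\log \tfrac{p(x,z;\theta)}{q(z)}\right]}_{\mathcal{L}(q,\theta)\ \text{(ELBO)}}
  \;+\; \mathrm{KL}\big(q(z)\,\big\|\,p(z\mid x;\theta)\big)

\text{E-step:}\quad q^{(t+1)} \;=\; \operatorname*{arg\,max}_{q}\; \mathcal{L}(q,\theta^{(t)}) \;=\; p(z \mid x;\theta^{(t)}),
\qquad
\text{M-step:}\quad \theta^{(t+1)} \;=\; \operatorname*{arg\,max}_{\theta}\; \mathcal{L}(q^{(t+1)},\theta).
```

Because the KL term is nonnegative and vanishes exactly when q equals the posterior, the E-step closes the gap and the M-step can only increase the bound, so the log-likelihood never decreases.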