I am a third-year CS PhD student in the Theory Group at Yale University, where I am extremely fortunate to be advised by Quanquan C. Liu. Currently, I am also a student researcher at Google Research in the NYC Algorithms & Optimization Group, hosted by Vincent Cohen-Addad and Alessandro Epasto.
I am grateful to be supported by an NSERC Postgraduate Scholarship.
Research Interests: I am broadly interested in the theory and practice of algorithms for large-scale data and in various notions of algorithmic stability. Examples include parallel graph algorithms, differentially private (DP) graph algorithms, DP learning algorithms, replicable learning algorithms, and learning under structured biases.
Email: felix [dot] zhou [at] yale [dot] edu
My amazing collaborators (in no particular order): Mike Dinitz, George Z. Li, Samson Zhou, Vincent Cohen-Addad, Manolis Zampetakis, Grigoris Velegkas, Yuichi Yoshida, Tamalika Mukherjee, Alessandro Epasto, Alkis Kalavasis, Anay Mehrotra, Kasper Green Larsen, Amin Karbasi, Lin F. Yang, Vahab Mirrokni, Chaitanya Swamy, Jochen Koenemann, W. Justin Toth
Personal: My other half, Jane Shi, studies number theory at MIT.
Previously, I was an undergraduate student at the University of Waterloo, where I was fortunate to be advised by Jochen Koenemann and Chaitanya Swamy. I worked on combinatorial optimization, approximation algorithms, and algorithmic game theory.
I interned at Hudson River Trading as an algorithm developer. Before that, I interned at HomeX, where I worked on an online stochastic reservation problem. Even earlier, I interned at Google's Mountain View office, where I worked on distributed graph algorithms.
The Power of Graph Sparsification in the Continual Release Model with Alessandro Epasto, Quanquan C. Liu, Tamalika Mukherjee [preprint] [poster]
Pointwise Lipschitz Continuous Graph Algorithms via Proximal Gradient Analysis with Quanquan C. Liu, Grigoris Velegkas, Yuichi Yoshida [preprint] [slides]
On the Computational Landscape of Replicable Learning with Alkis Kalavasis, Amin Karbasi, Grigoris Velegkas to appear in NeurIPS, 2024. [preprint]
Replicable Learning of Large-Margin Halfspaces with Alkis Kalavasis, Amin Karbasi, Kasper Green Larsen, Grigoris Velegkas to appear in ICML, 2024. Spotlight (top 3.5% of accepted papers) [preprint] [slides]
Replicability in Reinforcement Learning with Amin Karbasi, Grigoris Velegkas, Lin F. Yang NeurIPS, 2023. [preprint] [video]
Replicable Clustering with Hossein Esfandiari, Amin Karbasi, Vahab Mirrokni, Grigoris Velegkas NeurIPS, 2023. [preprint] [slides] [video]
On the Complexity of Nucleolus Computation for Bipartite b-Matching Games with Jochen Koenemann, Justin Toth SAGT, 2021. Special Issue [preprint] [slides] [video]
Notes typeset for courses and from self-study. Errors are abundant; please use them at your own discretion.