Felix Zhou 周长风

I am a third-year CS PhD student in the Theory Group at Yale University, where I am fortunate to be advised by Quanquan C. Liu. Currently, I am also a student researcher at Google Research in the Algorithms and Optimization Group, hosted by Vincent Cohen-Addad and Alessandro Epasto.

I am grateful to be supported by an NSERC Postgraduate Scholarship.

Research Interests: I am broadly interested in the theory and practice of algorithms for large-scale data and in different notions of algorithmic stability. Examples include graph algorithms beyond the static setting and learning algorithms beyond the i.i.d. assumption.

I am currently thinking about problems on the following topics:

  1. Online, streaming, and dynamic graph algorithms, possibly under privacy/Lipschitz constraints.
  2. Learning algorithms for coarsened/truncated data, possibly under privacy/replicability constraints.
  3. Statistical/computational connections between different notions of stability.
  4. Fair allocation in cooperative combinatorial optimization games.

Email: felix [dot] zhou [at] yale [dot] edu

My amazing collaborators (in no particular order): Samson Zhou, Vincent Cohen-Addad, Grigoris Velegkas, Yuichi Yoshida, Tamalika Mukherjee, Alessandro Epasto, Alkis Kalavasis, Anay Mehrotra, Kasper Green Larsen, Amin Karbasi, Lin F. Yang, Vahab Mirrokni, Chaitanya Swamy, Jochen Koenemann, W. Justin Toth

Personal: My other half, Jane Shi, studies number theory at MIT.

About Me

Previously, I was an undergraduate student at the University of Waterloo, where I was fortunate to be advised by Jochen Koenemann and Chaitanya Swamy. I worked on combinatorial optimization, approximation algorithms, and algorithmic game theory.

I interned at Hudson River Trading as an algorithm developer. Before that, I interned at HomeX, where I worked on an online stochastic reservation problem. Earlier still, I interned at Google's Mountain View office, where I worked on distributed graph algorithms.

Preprints

The Power of Graph Sparsification in the Continual Release Model with Alessandro Epasto, Quanquan C. Liu, Tamalika Mukherjee [preprint] [poster]

Pointwise Lipschitz Continuous Graph Algorithms via Proximal Gradient Analysis with Quanquan C. Liu, Grigoris Velegkas, Yuichi Yoshida [preprint] [slides]

On the Computational Landscape of Replicable Learning with Alkis Kalavasis, Amin Karbasi, Grigoris Velegkas to appear in NeurIPS, 2024. [preprint]

Replicable Learning of Large-Margin Halfspaces with Alkis Kalavasis, Amin Karbasi, Kasper Green Larsen, Grigoris Velegkas to appear in ICML, 2024. Spotlight (top 3.5% of accepted papers) [preprint] [slides]

Publications

Replicability in Reinforcement Learning with Amin Karbasi, Grigoris Velegkas, Lin F. Yang NeurIPS, 2023. [preprint] [video]

Replicable Clustering with Hossein Esfandiari, Amin Karbasi, Vahab Mirrokni, Grigoris Velegkas NeurIPS, 2023. [preprint] [slides] [video]

On the Complexity of Nucleolus Computation for Bipartite b-Matching Games with Jochen Koenemann, W. Justin Toth SAGT, 2021. Special Issue [preprint] [slides] [video]

Notes

Notes typeset for courses and from self-study. Errors abound; please use them at your own discretion.

Yale

UWaterloo