Optimization Theory
Developing efficient algorithms for convex and non-convex optimization problems with applications in machine learning, operations research, and control systems.
Research Domain
Optimization, learning theory, and graph flows that compress complexity into elegance. We design algorithms that solve hard problems with mathematical precision.
Novel approaches to network flow problems, shortest paths, and graph partitioning that scale to massive real-world datasets.
Theoretical foundations of machine learning, including generalization bounds, sample complexity, and convergence analysis of learning algorithms; a representative generalization bound is sketched below.
Designing algorithms with provable performance guarantees for computationally hard problems, balancing accuracy with efficiency.
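As a concrete illustration of the generalization bounds mentioned above (a textbook result, not one of this group's contributions), the classical bound for a finite hypothesis class $\mathcal{H}$ follows from Hoeffding's inequality and a union bound: with probability at least $1-\delta$ over $m$ i.i.d. training samples, every $h \in \mathcal{H}$ satisfies

$$
R(h) \;\le\; \widehat{R}(h) \;+\; \sqrt{\frac{\ln|\mathcal{H}| + \ln(1/\delta)}{2m}},
$$

where $R(h)$ is the true risk and $\widehat{R}(h)$ the empirical risk, so the generalization gap shrinks at rate $O\!\left(\sqrt{\ln|\mathcal{H}|/m}\right)$ as the sample size grows.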
A novel optimization algorithm that adapts to problem structure, achieving faster convergence on non-convex landscapes.
Scalable framework for processing billion-edge graphs with near-linear time complexity.
Algorithms that maintain performance under distribution shift and adversarial perturbations.
We partner with leading universities and research institutions worldwide on fundamental algorithmic research.