This position is open to undergraduate, MS, and Ph.D. students. Select “Neural Combinatorial and Constrained Optimization” for the question “Which of the following match your research interests?” in the Google Form. Each team will be led by a Ph.D. student.


Neural Combinatorial and Constrained Optimization

Project Overview

We invite motivated undergraduate, MS, and Ph.D. researchers to join a project on Neural Combinatorial and Constrained Optimization, focusing on learning-based approaches for discrete and NP-hard problems.

The project aims to build neural solvers that learn search strategies, constraint handling, and decision heuristics, complementing or surpassing classical optimization techniques in scalability and generalization.


Research Focus Areas

1. Neural Architectures for Structured Optimization

  • Developing scalable Graph Neural Networks (GNNs) and attention mechanisms for combinatorial reasoning.
  • Designing architectures that generalize to large-scale instances and varying graph topologies.
  • Benchmarking neural solvers against classical heuristics on routing, scheduling, and coverage tasks.
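To make the first focus area concrete: one common pattern in this line of work is to score graph nodes with a few rounds of message passing and then decode a solution greedily. The sketch below is a toy, untrained stand-in for a GNN layer written in plain Python (random weights instead of learned ones, a 5-cycle instead of a benchmark instance); all names are illustrative, not part of any project codebase.

```python
import random

def message_passing_scores(adj, num_rounds=2, seed=0):
    """Toy message passing: each node's score mixes its own state with
    the mean of its neighbors' states. The mixing weights here are
    random, standing in for a trained GNN layer."""
    rng = random.Random(seed)
    h = {v: rng.random() for v in adj}            # initial node states
    w_self, w_neigh = rng.random(), rng.random()  # untrained "weights"
    for _ in range(num_rounds):
        h = {
            v: w_self * h[v]
               + w_neigh * (sum(h[u] for u in adj[v]) / len(adj[v]) if adj[v] else 0.0)
            for v in adj
        }
    return h

def greedy_independent_set(adj, scores):
    """Greedy decoding: take nodes in descending score order,
    skipping neighbors of already-selected nodes."""
    chosen, blocked = set(), set()
    for v in sorted(adj, key=lambda v: -scores[v]):
        if v not in blocked:
            chosen.add(v)
            blocked.add(v)
            blocked.update(adj[v])
    return chosen

# A 5-cycle: any greedy decode yields an independent set of size 2,
# which is optimal for this graph.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
scores = message_passing_scores(adj)
print(greedy_independent_set(adj, scores))
```

In the actual research, the scoring network would be trained (e.g., by imitation or reinforcement learning) and evaluated against classical heuristics on the routing, scheduling, and coverage tasks listed above.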

2. Constrained Learning & Hybrid Solvers

  • Integrating hard constraints via differentiable optimization layers and implicit differentiation.
  • Creating hybrid algorithms that combine deep learning with classical methods (e.g., Branch-and-Bound).
  • Learning to satisfy complex logic and discrete constraints while optimizing objectives.
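As a heavily simplified illustration of the hybrid idea, the sketch below runs a classical depth-first branch-and-bound for 0/1 knapsack in which the branching order comes from a pluggable `score` function. In a learned hybrid solver that score would be produced by a trained model; here a hand-written value/weight heuristic stands in. The function name and interface are illustrative assumptions, not an existing API.

```python
def knapsack_branch_and_bound(values, weights, capacity, score):
    """Depth-first branch-and-bound for 0/1 knapsack. `score` decides
    the branching order of items; a learned policy could supply it."""
    order = sorted(range(len(values)), key=score, reverse=True)
    best = 0

    def bound(i, value, cap):
        # Fractional (LP-relaxation) upper bound over the remaining
        # items, filled greedily in value/weight-ratio order.
        rem = sorted(order[i:], key=lambda j: values[j] / weights[j], reverse=True)
        for j in rem:
            if weights[j] <= cap:
                cap -= weights[j]
                value += values[j]
            else:
                return value + values[j] * cap / weights[j]
        return value

    def dfs(i, value, cap):
        nonlocal best
        if i == len(order):
            best = max(best, value)
            return
        if bound(i, value, cap) <= best:
            return  # prune: the relaxation cannot beat the incumbent
        j = order[i]
        if weights[j] <= cap:                    # branch: take item j
            dfs(i + 1, value + values[j], cap - weights[j])
        dfs(i + 1, value, cap)                   # branch: skip item j

    dfs(0, 0, capacity)
    return best

values, weights = [60, 100, 120], [10, 20, 30]
print(knapsack_branch_and_bound(values, weights, 50,
                                score=lambda j: values[j] / weights[j]))  # 220
```

The design point is the separation of concerns: the search skeleton and the pruning bound stay classical and exact, while only the branching decision is learned, so optimality guarantees are preserved.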

What You’ll Work On

  • Design neural architectures for optimization problems
  • Develop training algorithms that respect problem constraints
  • Benchmark against classical solvers and analyze trade-offs
  • Apply methods to real-world optimization problems (scheduling, routing, planning)
  • Build scalable implementations that handle large problem instances

What We’re Looking For

Essential:

  • Strong background in algorithms and optimization
  • Proficiency in Python and deep learning frameworks (PyTorch, JAX)
  • Understanding of graph algorithms, dynamic programming, and complexity theory
  • Experience with reproducible research practices
  • Experience with classical optimization (integer programming, constraint programming)

Nice-to-have:

  • Familiarity with graph neural networks or attention mechanisms
  • Background in operations research or algorithmic game theory
  • Publications in ML/AI or optimization venues

What You’ll Gain

  • Deep expertise in neural optimization algorithms
  • Experience with state-of-the-art deep learning and optimization techniques
  • Understanding of both theoretical and practical aspects of optimization
  • Opportunities to publish at top-tier venues (NeurIPS, ICML, ICLR, AAAI, etc.)

How To Apply

Please submit your details using the Google Form.

Note: Choose “Neural Combinatorial and Constrained Optimization” for the question “Which of the following match your research interests?” in the Google Form.

Selected students may be invited for a brief meeting to discuss fit and potential directions.


For lab resources, university information, and application details, see the main hiring page.