Differentiation of Blackbox Combinatorial Solvers
Inherently combinatorial problems remain a hindrance for classical deep learning methods. Traditional approaches to propagating gradients through combinatorial solvers rely on sample-based estimates or on relaxations of the solver. We show that for a specific class of solvers, we can efficiently compute gradients of an implicit piecewise-linear interpolation of the objective. This allows us to achieve unprecedented generalization performance on representation learning tasks with a combinatorial flavor.
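The gradient of the implicit piecewise-linear interpolation admits a simple perturb-and-resolve backward pass. Below is a minimal sketch, assuming the solver minimizes a linear cost ⟨w, y⟩ over a discrete set; the toy `solver` (one-hot argmin selection) and the names `blackbox_grad` and `lam` are illustrative, not from the original text.

```python
import numpy as np

def solver(w):
    # Toy blackbox combinatorial solver: returns the one-hot indicator
    # of the minimum-cost element, i.e. argmin_y <w, y> over unit vectors.
    y = np.zeros_like(w)
    y[np.argmin(w)] = 1.0
    return y

def blackbox_grad(w, grad_y, lam=10.0):
    # Sketch of the interpolation-based backward pass: perturb the costs
    # with the incoming loss gradient, call the solver a second time, and
    # return the scaled difference of solver outputs as the gradient in w.
    y = solver(w)
    y_lam = solver(w + lam * grad_y)
    return -(y - y_lam) / lam

# Illustrative usage: steer the solver toward a target selection.
w = np.array([0.3, 0.1, 0.5])
target = np.array([0.0, 0.0, 1.0])   # we want the solver to pick index 2
y = solver(w)                        # currently picks index 1
grad_y = 2.0 * (y - target)          # gradient of squared loss w.r.t. y
grad_w = blackbox_grad(w, grad_y)    # informative despite piecewise-constant f
w_updated = w - 5.0 * grad_w         # one gradient-descent step on the costs
```

Note that the exact gradient of the composed objective is zero almost everywhere, since the solver output is piecewise constant in w; the two-call scheme instead returns the gradient of a continuous interpolation, whose locality is controlled by the hyperparameter `lam`.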