Spring 2016

**Abstract**: We introduce the sparsified Cholesky and sparsified multigrid
algorithms for solving systems of linear equations. These algorithms
accelerate Gaussian elimination by sparsifying the nonzero matrix
entries created by the elimination process.

We use these new algorithms to derive the first nearly linear time
algorithms for solving systems of equations in connection Laplacians,
a generalization of Laplacian matrices that arises in many problems in
image and signal processing.

We also prove that every connection Laplacian has a linear-sized
approximate inverse. This is an LU factorization with a linear number
of nonzero entries that is a strong approximation of the original
matrix. Using such a factorization one can solve systems of equations
in a connection Laplacian in linear time. Such a factorization was
unknown even for ordinary graph Laplacians.
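
As background for how such a factorization is used (a generic dense NumPy sketch, not the sparsified algorithm of the talk): once A = L L^T is computed, each right-hand side reduces to two triangular solves, and with a sparse factor each solve costs time linear in its number of nonzeros.

```python
import numpy as np

# A small symmetric positive definite matrix; a generic stand-in for
# illustration, not a connection Laplacian.
A = np.array([[ 3., -1., -1.],
              [-1.,  3., -1.],
              [-1., -1.,  3.]])
b = np.array([1., 0., -1.])

# Cholesky: A = L @ L.T with L lower triangular. Given the factor,
# solving A x = b is forward substitution followed by back substitution.
L = np.linalg.cholesky(A)
y = np.linalg.solve(L, b)      # forward substitution: L y = b
x = np.linalg.solve(L.T, y)    # back substitution:  L.T x = y

print(np.allclose(A @ x, b))   # → True
```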

Joint work with Rasmus Kyng, Yin Tat Lee, Sushant Sachdeva, and Daniel
Spielman. Manuscript at http://arxiv.org/abs/1512.01892.

**Abstract:** We consider the Widom-Rowlinson model of two types of interacting particles on

**Abstract:** We generalize the existing reduction mechanism due to Braun, Pokutta and Zink (2014)
for linear programming problems and semidefinite programming problems in two ways:

1) relaxing the requirement of affineness, and

2) extending to fractional optimization problems.

As applications we prove several new LP-hardness and SDP-hardness
results, e.g., for the (non-uniform) Sparsest Cut problem with bounded treewidth
on the supply graph, the Balanced Separator problem with bounded treewidth on
the demand graph, the Max Cut problem and the Matching problem on 3-regular graphs.
We also provide a new, very strong Lasserre integrality gap
for the Independent Set problem, which is strictly greater than the
best known LP approximation, showing that the Lasserre hierarchy
does not always provide the tightest SDP relaxation.

Joint work with Gabor Braun and Sebastian Pokutta.

**Abstract:** I will give a tour of high-dimensional sampling
algorithms, both from a theoretical and applied perspective, for
generating random samples from a convex body. There are many
well-studied random walks to choose from, many of which have rigorous
mixing bounds that say when the walk has converged. We
then show that the techniques from theory yield state-of-the-art
algorithms in practice, where we analyze various organisms by randomly
sampling their metabolic networks.
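
One well-studied such walk is hit-and-run: draw a uniformly random direction through the current point and jump to a uniform point on the chord of the body in that direction. A minimal sketch on an axis-aligned box, chosen so the chord is available in closed form (metabolic polytopes are general polytopes defined by linear constraints):

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_and_run_box(x, lo, hi, steps=1000):
    """Hit-and-run on the box {lo <= x <= hi}: draw a uniformly random
    direction, find the chord through x, jump to a uniform point on it."""
    x = x.astype(float).copy()
    for _ in range(steps):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        # chord extent: lo <= x + t*d <= hi must hold in every coordinate
        t1, t2 = (lo - x) / d, (hi - x) / d
        t_min = np.max(np.minimum(t1, t2))
        t_max = np.min(np.maximum(t1, t2))
        x = x + rng.uniform(t_min, t_max) * d
    return x

lo, hi = np.zeros(5), np.ones(5)
sample = hit_and_run_box(np.full(5, 0.5), lo, hi)
print(np.all((lo <= sample) & (sample <= hi)))  # → True: stays in the body
```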

This is joint work with Ronan Fleming, Hulda Haraldsdottir, and Santosh Vempala.

**Abstract:** Stochastic programming is a powerful approach
for decision-making under uncertainty. Unfortunately, the solution may
be misleading if the underlying distribution of the involved random
parameters is not known exactly. In this talk, we study distributionally
robust stochastic programming (DRSP), in which the decision hedges
against the worst possible distribution that belongs to an ambiguity
set. More specifically, we consider the DRSP with the ambiguity set
comprising all distributions that are close to some reference
distribution in terms of Wasserstein distance. We derive a tractable
reformulation of the DRSP problem by constructing the worst-case
distribution explicitly via the first-order optimality condition of the
dual problem. Our approach has several theoretical and computational
implications. First, using the precise characterization of the
worst-case distribution, we show that the DRSP can be approximated by
robust programs to arbitrary accuracy, and thus many DRSP problems
become tractable with tools from robust optimization. Second, when the
objective is concave in the uncertainty, the robust-program
approximation is exact and equivalent to a saddle-point problem, which
can be solved by a Mirror-Prox algorithm. Third, our framework can also
be applied to problems other than stochastic programming, such as a
class of distributionally robust transportation problems. Furthermore,
we perform sensitivity analysis with respect to the radius of the
Wasserstein ball, and apply our results to the newsvendor problem, a
two-stage linear program with uncertainty-affected recourse, and
worst-case Value-at-Risk analysis.
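
The hedging can be illustrated at a toy scale. The sketch below brute-forces the worst case of a newsvendor loss over a one-parameter family of shifted copies of an empirical demand sample lying inside a Wasserstein ball; the cost and price values are hypothetical, and this is not the tractable reformulation from the talk:

```python
import numpy as np

# Between equal-size 1-D samples, the Wasserstein-1 distance is the mean
# absolute difference of the sorted values.
def w1(a, b):
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

rng = np.random.default_rng(0)
ref = rng.normal(5.0, 1.0, size=1000)     # reference (empirical) demand
x = 5.0                                   # a fixed order quantity
c, p = 1.0, 3.0                           # illustrative cost and price

def loss(xi):                             # newsvendor loss in demand xi
    return c * x - p * np.minimum(x, xi)

# Search a tiny family inside the ambiguity set: shifted copies of the
# reference sample whose Wasserstein distance to it is at most rho.
rho = 0.5
shifts = np.linspace(-2.0, 2.0, 81)
feasible = [s for s in shifts if w1(ref, ref + s) <= rho + 1e-9]
worst = max(np.mean(loss(ref + s)) for s in feasible)

print(worst >= np.mean(loss(ref)))  # → True: robust value dominates nominal
```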

**Abstract:** This talk aims to give a glimpse into the theory
of divisors on graphs in tropical geometry, and its recent application
in bijective combinatorics. I will start by introducing basic notions
and results of the subject. Then I will mention some of its connections
with other fields in math. Finally I will talk about my own work on how
tropical geometry leads to an unexpectedly simple class of bijections
between spanning trees of a graph and its sandpile group.
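
Behind such bijections is a size identity that can be checked numerically: the order of the sandpile group is the determinant of the graph Laplacian with the sink's row and column removed, which by the matrix-tree theorem also counts spanning trees. An illustrative NumPy check on K4 (not an example from the talk):

```python
import numpy as np

# Laplacian of the complete graph K4: L = D - A.
n = 4
A = np.ones((n, n)) - np.eye(n)
L = np.diag(A.sum(axis=1)) - A

# Delete the sink's row and column; the determinant of the reduced
# Laplacian is both the order of the sandpile group and, by the
# matrix-tree theorem, the number of spanning trees (Cayley: n^(n-2)).
reduced = L[1:, 1:]
order = round(np.linalg.det(reduced))
print(order)  # → 16, matching 4^(4-2)
```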

**Abstract:** A Ruzsa-Szemeredi graph is a graph on n
vertices whose edge set can be partitioned into induced matchings of
size cn. The study of these graphs goes back more than 35 years and has
connections with number theory, combinatorics, complexity theory and
information theory. In this talk we will discuss the history and some
recent developments in this area. In particular, we show that when
c > 1/4, there can be only a constant number of matchings. On the other
hand, for c = 1/4, the maximum number of induced matchings is logarithmic
in n. This is joint work with Jacob Fox and Benny Sudakov.
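
For readers new to the definition: a matching M in G is induced if the subgraph induced by the endpoints of M contains no edges besides M itself. A small checker (illustrative code, not from the talk):

```python
from itertools import combinations

def is_induced_matching(graph_edges, M):
    """True if M is a matching in the graph and no other graph edge joins
    two vertices covered by M (i.e., M is an induced matching)."""
    sets = [frozenset(e) for e in M]
    if any(a & b for a, b in combinations(sets, 2)):
        return False                      # two edges share a vertex
    covered = set().union(*sets) if sets else set()
    # every graph edge inside the covered vertex set must belong to M
    return all(frozenset(e) in sets
               for e in graph_edges if set(e) <= covered)

# 6-cycle: (0,1) and (3,4) form an induced matching; (0,1) and (2,3) do
# not, because the edge (1,2) joins their endpoints.
C6 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
print(is_induced_matching(C6, [(0, 1), (3, 4)]),   # → True
      is_induced_matching(C6, [(0, 1), (2, 3)]))   # → False
```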

**Abstract:** Motivated by neurally feasible computation, we
study Boolean functions of an arbitrary number of input variables that
can be realized by recursively applying a small set of functions with a
constant number of inputs each. This restricted type of construction is
neurally feasible since it uses little global coordination or control.
Valiant's construction of a majority function can be realized in this
manner and, as we show, can be generalized to any uniform threshold
function. We study the rate of convergence, finding that while linear
convergence to the correct function can be achieved for any threshold
using a fixed set of primitives, for quadratic convergence, the size of
the primitives must grow as the threshold approaches 0 or 1. We also
study finite realizations of this process, and show that the
constructions realized are accurate outside a small interval near the
target threshold, where the size of the construction at each level grows
as the inverse square of the interval width. This phenomenon, that
errors are higher closer to thresholds (and thresholds closer to the
boundary are harder to represent), is also a well-known cognitive
finding. Finally, we give a neurally feasible algorithm that uses
recursive constructions to learn threshold functions. This is joint work
with Christos Papadimitriou and Santosh Vempala.
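
The amplification driving these constructions is visible in the simplest case: iterating 3-input majority on independent copies pushes the probability of a 1 toward 0 or 1, realizing the threshold-1/2 (majority) function. Other thresholds and faster convergence need the different primitives discussed above; this sketch covers only the majority case:

```python
# If each input is 1 independently with probability p, a 3-input majority
# outputs 1 with probability m(p) = 3p^2 - 2p^3. Iterating drives p to 0
# below the threshold 1/2 and to 1 above it.
def maj3(p):
    return 3 * p**2 - 2 * p**3

def amplify(p, levels):
    for _ in range(levels):
        p = maj3(p)
    return p

print(round(amplify(0.45, 20), 6), round(amplify(0.55, 20), 6))  # → 0.0 1.0
```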

**Abstract:** We examine a variant of the knapsack problem in
which item sizes are random according to an arbitrary but known
distribution. In each iteration, the decision maker chooses an item and
attempts to insert it; the item's size is realized only upon insertion. With the aim of
maximizing the expected profit, the process ends when either all items
are successfully inserted or a failed insertion occurs. We investigate
the strength of a particular dynamic programming based LP bound by
examining its gap with the optimal adaptive policy. Our new relaxation
is based on a quadratic value function approximation which introduces
the notion of diminishing returns by encoding interactions between
remaining items. We compare the bound to previous bounds in the literature,
including the best known pseudopolynomial bound, and contrast their
corresponding policies with two natural greedy policies. Our main
conclusion is that the quadratic bound is theoretically more efficient
than the pseudopolynomial bound yet empirically comparable to it in both
value and running time.
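
The insertion process itself is easy to simulate. The sketch below uses a greedy-by-expected-density order with exponential sizes; the item values and the policy are hypothetical choices for illustration, not the paper's LP-based bound or policy:

```python
import random

random.seed(0)
capacity = 10.0
items = [(6.0, 4.0), (5.0, 3.0), (4.0, 2.0)]   # (profit, mean size)

def simulate(order):
    """Insert items in the given order; each size is realized on insertion
    and the first failed insertion ends the process."""
    remaining, profit = capacity, 0.0
    for p, mean in order:
        size = random.expovariate(1.0 / mean)  # known size distribution
        if size > remaining:
            break                              # failed insertion: stop
        remaining -= size
        profit += p
    return profit

# Greedy policy: highest expected profit density first (illustrative only).
greedy = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
avg = sum(simulate(greedy) for _ in range(20000)) / 20000
print(0.0 <= avg <= sum(p for p, _ in items))  # → True
```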

Joint work with Alejandro Toriello.

**Abstract:** We initiate the study of dynamic algorithms for
graph sparsification problems and obtain fully dynamic algorithms,
allowing both edge insertions and edge deletions, that take
polylogarithmic time after each update in the graph. Our three main
results are as follows. First, we give a fully dynamic algorithm for
maintaining a (1 \pm \epsilon)-spectral sparsifier with amortized update
time poly(\log{n},\epsilon^{-1}). Second, we give a fully dynamic
algorithm for maintaining a (1 \pm \epsilon)-cut sparsifier with
worst-case update time poly(\log{n},\epsilon^{-1}). Third, we apply our
dynamic sparsifier algorithm to obtain a fully dynamic algorithm for
maintaining a (1 + \epsilon)-approximate minimum cut in an unweighted,
undirected, bipartite graph with amortized update time
poly(\log{n},\epsilon^{-1}).

Joint work with Ittai Abraham, Ioannis Koutis, Sebastian Krinninger, and Richard Peng.
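
The static idea underlying cut sparsification can be demonstrated on a dense graph: sample each edge independently and reweight the kept edges so that every cut is preserved in expectation. This is a toy uniform-sampling sketch; the dynamic algorithms above maintain far stronger guarantees under edge updates:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 40, 0.3
# Complete graph on n vertices; keep each edge with probability q and give
# kept edges weight 1/q, so every cut weight is preserved in expectation.
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
keep = rng.random(len(edges)) < q

S = set(range(n // 2))                        # one side of a fixed cut
cut = sum(1 for i, j in edges if (i in S) != (j in S))
sparse_cut = sum(1 / q for (i, j), kept in zip(edges, keep)
                 if kept and (i in S) != (j in S))

# For this dense example the sampled cut concentrates near the true cut.
print(cut, round(sparse_cut, 1))
```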