## Algorithms & Complexity Seminar, MIT : Fall 2022
The Algorithms & Complexity Seminar will usually (unless otherwise stated) meet on Wednesdays from 4-5pm in 32-G575.

## Upcoming Talks
**Tuesday, November 29, 2022**: **Lijie Chen** (UC Berkeley). [Time: 12-1pm: 32-G575]
**Topic.** Truly Low-Space Element Distinctness and Subset Sum via Pseudorandom Hash Functions
**Abstract.** We consider low-space algorithms for the classic Element Distinctness problem: given an array of n input integers with O(log n) bit-length, decide whether or not all elements are pairwise distinct. For comparison-based algorithms (i.e., algorithms that can only compare two input integers and receive one of three possible outcomes), the classical works of [Borodin et al. 1987] and [Yao 1994] proved that essentially n^2 time is required for algorithms with polylog(n) space. Surprisingly, in 2013, Beame, Clifford, and Machmouchi gave a tilde{O}(n^{1.5})-time randomized algorithm for Element Distinctness using only O(log n) bits of working space. However, their algorithm assumes a random oracle (in particular, read-only random access to polynomially many random bits), and it was asked as an open question whether one can remove this assumption. More generally, they showed that for every S(n) >= O(log n), there is an S(n)-space, tilde{O}(n^{1.5} / sqrt{S(n)})-time algorithm for Element Distinctness. In 2021, Chen, Jin, Williams, and Wu [SODA 2022] positively answered this question by giving a tilde{O}(n^{1.5})-time randomized algorithm using O(log^3 n loglog n) bits of space, with one-way access to random bits. As a corollary, they also obtained a poly(n)-space, 2^{0.86n}-time randomized algorithm for the Subset Sum problem, removing the random oracle required in the algorithm of Bansal, Garg, Nederlof, and Vyas [STOC 2017]. The main technique underlying their result is a pseudorandom hash family based on iterative restrictions, which can fool the cycle-finding procedure in the algorithms of Beame et al. and Bansal et al. The original analysis of Chen et al. was quite involved and does not generalize to the full trade-off of Beame et al. for every S(n) >= O(log n). Recently, a subsequent work by Lyu and Zhu [SODA 2023] greatly simplified the analysis of Chen et al. (while still using the same pseudorandom hash family) and extended their results to the general time-space trade-off. In this talk, I will present the simpler analysis by Lyu and Zhu. This talk is based on joint work with Ce Jin, Ryan Williams, and Hongxun Wu, and a follow-up work by Xin Lyu and Weihao Zhu.
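The cycle-finding idea at the heart of the Beame et al. approach can be sketched in a few lines of Python. This is an illustrative toy under simplifying assumptions of my own (the hash A[i] mod n and fresh random starting points), not the authors' algorithm: the space-bounded algorithms replace such random choices with the pseudorandom hash families discussed in the talk.

```python
import random

def floyd_collision(f, n, start):
    # Tortoise-and-hare cycle detection on the functional graph of f over
    # {0, ..., n-1}. When the start vertex lies strictly before the cycle,
    # the returned pair (i, j) satisfies f(i) == f(j) with i != j.
    tort, hare = f(start), f(f(start))
    while tort != hare:
        tort, hare = f(tort), f(f(hare))
    tort, prev_t, prev_h = start, start, hare
    while tort != hare:
        prev_t, prev_h = tort, hare
        tort, hare = f(tort), f(hare)
    return prev_t, prev_h

def element_distinctness(A, trials=50):
    # A hash collision f(i) == f(j) is only a *candidate* duplicate: we
    # verify A[i] == A[j] before declaring the array non-distinct.
    n = len(A)
    f = lambda i: A[i] % n
    for _ in range(trials):
        i, j = floyd_collision(f, n, random.randrange(n))
        if i != j and A[i] == A[j]:
            return False
    return True  # Monte Carlo: may miss a duplicate, never invents one
```

Note that only O(1) words beyond the read-only input are kept between steps, which is the point of the cycle-finding formulation.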
**Wednesday, November 30, 2022**: **Aparna Gupte** (MIT). [Time: 4-5pm: 32-G575]
**Topic.** Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures
**Abstract.** The Learning With Errors (LWE) problem, introduced by Regev in 2005, is the average-case problem of solving a noisy system of random linear equations over a finite ring. Regev showed that LWE is computationally hard assuming certain worst-case lattice problems are hard. Beyond its connections to lattices, LWE has been seminal in cryptography, enabling public-key encryption, fully-homomorphic encryption, and much more. In 2021, Bruna, Regev, Song and Tang introduced a continuous variant of LWE, called Continuous LWE (CLWE), and gave a quantum reduction showing that CLWE is computationally hard assuming some worst-case lattice problems are hard. Moreover, assuming the hardness of CLWE, they showed that learning Gaussian mixtures in n dimensions, with \sqrt{n} Gaussian components, is not possible in polynomial time. Several works have since shown the computational hardness of a number of other statistical learning problems, assuming the hardness of CLWE. In this talk, I will present a direct (classical) reduction from LWE to CLWE, following a series of conceptually simple transformations. This reduction allows us to bring the well-developed machinery of LWE-based cryptography to CLWE and its applications. For example, it allows us to obtain the hardness of CLWE under the classical hardness of worst-case lattice problems; previously, the hardness of CLWE was known only under the quantum hardness of worst-case lattice problems. Moreover, assuming the polynomial hardness of LWE, the reduction allows us to show the hardness of learning n^\epsilon Gaussian components for any \epsilon > 0, improving on Bruna et al., who show hardness for at least \sqrt{n} Gaussian components under polynomial (quantum) worst-case hardness assumptions. Under the subexponential hardness of LWE, our reduction implies the hardness of learning roughly \log n Gaussian components. This talk is based on joint work with Neekon Vafa and Vinod Vaikuntanathan, and is available at https://arxiv.org/abs/2204.02550.
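As background, the discrete LWE problem that the reduction starts from is easy to state concretely with a toy sample generator. The parameters below are illustrative only and far too small to be secure; this sketches the problem setup, not the reduction itself.

```python
import random

def lwe_samples(n=8, q=97, m=16, err=1):
    # The secret s lives in Z_q^n. Each sample is (a, b) with
    # b = <a, s> + e (mod q), where a is uniform in Z_q^n and the
    # error e is small (here uniform in {-err, ..., err}).
    s = [random.randrange(q) for _ in range(n)]
    samples = []
    for _ in range(m):
        a = [random.randrange(q) for _ in range(n)]
        e = random.randint(-err, err)
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples
```

The search problem is to recover s from the samples alone; without the error e, it would be plain Gaussian elimination. CLWE replaces the ring Z_q with the reals and the error with a Gaussian.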
**Wednesday, December 7, 2022**: **Ronitt Rubinfeld** (MIT). [Time: TBD: 32-G575]
**Topic.** TBD
**Abstract.** TBD
## Past Talks
**Thursday, September 8, 2022**: **Isaac Grosof** (Carnegie Mellon University). [Time: 4-5pm EDT: 32-G575]
**Topic.** Optimal Scheduling in the Multiserver-job Model
**Abstract.** In many computer systems, jobs contend for limited resources and are forced to wait in queues. In modern computer systems, thousands of servers may be available to serve jobs, and jobs often require concurrent service on large numbers of servers. I will describe an abstract model that captures the behavior of these systems, called the "multiserver-job" (MSJ) model. I will particularly talk about scheduling policies in the MSJ model, which determine which jobs to prioritize at each point in time, and present two breakthrough results on the latency behavior of such policies. First, I will present the first analysis of mean latency for any scheduling policy in the MSJ model. Building on that result, I will present the first optimality result on latency in the MSJ model. This is joint work with my coauthors Ziv Scully, Mor Harchol-Balter, and Alan Scheller-Wolf.
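To fix ideas, here is a toy discrete-event simulation of the multiserver-job model under plain FCFS (not one of the policies analyzed in the talk). It exhibits the model's characteristic difficulty: servers can sit idle while a wide head-of-line job waits for enough of them to free up.

```python
def fcfs_makespan(jobs, k):
    # jobs: list of (servers_needed, duration), each needing at most k
    # servers held *concurrently* for its whole duration. FCFS starts
    # jobs strictly in arrival order; if the head job does not fit,
    # later (narrower) jobs must wait behind it.
    queue = list(jobs)
    running = []  # (finish_time, servers_held)
    t, free = 0, k
    while queue or running:
        while queue and queue[0][0] <= free:
            s, d = queue.pop(0)
            running.append((t + d, s))
            free -= s
        t = min(f for f, _ in running)  # advance to the next completion
        free += sum(s for f, s in running if f == t)
        running = [(f, s) for f, s in running if f > t]
    return t
```

With k = 2, the jobs [(1, 2), (2, 1)] finish at time 3 rather than 2: one server idles for two time units while the 2-server job blocks the queue.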
**Wednesday, September 14, 2022**: **Harsha Vardhan Simhadri** (Microsoft Research India). [Time: 4-5pm EDT: 32-G575]
**Topic.** Approximate Nearest Neighbor Search algorithms for web-scale search and recommendation
**Abstract.** Web-scale search and recommendation scenarios increasingly use Approximate Nearest Neighbor Search (ANNS) algorithms to index and retrieve objects based on the similarity of their learnt representations in a geometric space. Since these scenarios often span billions or trillions of objects, efficient and scalable ANNS algorithms are critical to making these systems practical. However, most algorithms studied in the literature either focus on million-scale datasets or lack features necessary for practical indices, e.g., support for real-time updates. In this talk we discuss empirical progress on this problem. Specifically, we present DiskANN, the first external-memory ANNS algorithm that can index a billion points and serve queries at interactive latencies (a few milliseconds) on a commodity machine. This represents an order of magnitude more points indexed per machine than previous work. In addition, the index allows real-time updates, and its in-memory performance compares well with other state-of-the-art indices. We will conclude with some open problems in this space -- e.g., support for hybrid queries that involve a combination of similarity search and hard matches such as language or author -- and some preliminary results. Further, proving any reasonable bounds on the complexity of DiskANN or related graph-based ANNS indices remains an open problem. Joint work with Ravishankar Krishnaswamy, Suhas Jayaram Subramanya, Aditi Singh, Rohan Kadekodi, Devvrit, Shikhar Jaiswal, Magdalen Dobson, Siddharth Gollapudi, Neel Karia, and Varun Sivasankaran.
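The query procedure shared by graph-based ANNS indices in the DiskANN family is a greedy beam search over a proximity graph. The sketch below is my own simplification with hypothetical names, not DiskANN's API; it conveys the access pattern that makes such indices external-memory friendly (one neighborhood fetched per step).

```python
import heapq

def greedy_search(graph, points, dist, query, start, k=1, beam=4):
    # Best-first search over a proximity graph: repeatedly expand the
    # closest unexplored candidate, keeping the `beam` best seen so far.
    visited = {start}
    cand = [(dist(points[start], query), start)]
    best = list(cand)
    while cand:
        d, u = heapq.heappop(cand)
        if len(best) >= beam and d > max(b[0] for b in best):
            break  # everything left is farther than the current beam
        for v in graph[u]:
            if v not in visited:
                visited.add(v)
                dv = dist(points[v], query)
                heapq.heappush(cand, (dv, v))
                best.append((dv, v))
        best = sorted(best)[:beam]
    return [v for _, v in sorted(best)[:k]]
```

The quality/latency trade-off is governed by the beam width and by how the graph is built; proving bounds on this search is the open problem mentioned above.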
**Wednesday, September 28, 2022**: **Nicole Wein** (Rutgers University). [Time: 4-5pm EDT: 32-G575]
**Topic.** Closing the Gap Between Directed Hopsets and Shortcut Sets
**Abstract.** A shortcut set is a (small) set of edges that, when added to a directed graph G, produces a graph with the same reachability structure as G while reducing the diameter as much as possible. A hopset is defined similarly, but for approximate distances instead of reachability. One motivation for such structures is that in many settings (e.g. distributed, parallel, dynamic, streaming), computation is faster when the diameter of the graph is small. Thus, to compute, for instance, st-reachability in such a setting, it is useful to find a shortcut set as a preprocessing step, and then run an st-reachability algorithm on the resulting graph. This talk is about existential bounds for shortcut sets and hopsets: given a budget of ~O(n) added edges, what is the smallest diameter achievable? A folklore algorithm says that diameter O(n^{1/2}) is possible for any graph. A recent breakthrough of Kogan and Parter improves this to O(n^{1/3}) for shortcut sets and O(n^{2/5}) for hopsets. In recent work, we ask whether hopsets (approximate distances) are truly harder than shortcut sets (reachability). We close the gap between hopsets and shortcut sets by providing an O(n^{1/3})-diameter construction for hopsets. Based on joint work with Aaron Bernstein.
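One standard way to see the folklore O(n^{1/2}) bound is the sampling construction sketched below (my phrasing, for a directed graph given as an adjacency list): sample a set S of roughly sqrt(n) vertices and add a shortcut edge between every reachable pair within S. Any long path then hits S often, so it can be compressed through a single shortcut.

```python
import random
from collections import deque

def folklore_shortcut_set(adj, sample_size):
    # Sample `sample_size` vertices; for each sampled vertex u, BFS to
    # find which other sampled vertices it can reach, and add a direct
    # shortcut edge (u, v) for each such pair. With |S| ~ sqrt(n) this
    # adds O(n) edges and reduces the diameter to ~O(sqrt(n)) w.h.p.
    S = random.sample(list(adj), min(sample_size, len(adj)))
    shortcuts = []
    for u in S:
        seen, q = {u}, deque([u])
        while q:
            w = q.popleft()
            for v in adj[w]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        shortcuts.extend((u, v) for v in S if v != u and v in seen)
    return shortcuts
```

The Kogan-Parter breakthrough and the construction in this talk are more delicate, but they live in the same "add few edges, certify short detours" regime.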
**Wednesday, October 5, 2022**: **William Kuszmaul** (MIT). [Time: 4-5pm EDT: 32-G575]
**Topic.** Online List Labeling: Breaking the $\log^2 n$ Barrier
**Abstract.** The online list-labeling problem considers a simple question: How should one store a dynamically-changing set of $n$ items in an array of $m$ slots, while maintaining the invariant that the items appear in sorted order, and while minimizing the number of items that are moved per insertion/deletion? For the linear regime, where $m = (1 + \Theta(1)) n$, an upper bound of $O(\log^2 n)$ moves per operation has been known since 1981. A lower bound of $\Omega(\log^2 n)$ is known for deterministic algorithms and for so-called \emph{smooth} algorithms, but the best general lower bound remains $\Omega(\log n)$. The central open question in the field is whether $O(\log^2 n)$ is optimal for all algorithms. We give a randomized data structure that achieves an expected cost of $O(\log^{3/2} n)$ per operation. Our solution is history-independent, meaning that the state of the data structure is independent of the order in which items are inserted/deleted. And we show that, perhaps surprisingly, the bound of $O(\log^{3/2} n)$ is optimal for any history-independent data structure. Based on an upcoming FOCS '22 paper coauthored with Michael A. Bender, Alex Conway, Martín Farach-Colton, Hanna Komlós, and Nicole Wein.
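A naive baseline makes the cost model concrete: keep the sorted invariant by re-spreading all items evenly on every insert, paying Θ(n) moves per operation. The sketch below (my own illustration, not an algorithm from the talk) tracks that cost; the classical algorithms reduce it to O(log^2 n) and the talk's randomized structure to O(log^{3/2} n).

```python
import bisect

class NaiveListLabeling:
    # Stores up to m-1 items sorted in an array of m slots by evenly
    # re-spreading everything on each insert. Trivially maintains the
    # sorted-order invariant, but may move every item per operation.
    def __init__(self, m):
        self.m = m
        self.items = []
        self.moves = 0

    def insert(self, x):
        bisect.insort(self.items, x)
        self.moves += len(self.items)  # every item may get a new slot

    def layout(self):
        # Even spread: item j goes to slot floor((j+1) * m/(n+1)).
        slots = [None] * self.m
        gap = self.m / (len(self.items) + 1)
        for j, x in enumerate(self.items):
            slots[int((j + 1) * gap)] = x
        return slots
```

The clever algorithms redistribute only a small enclosing sub-range whose density crosses a threshold, which is where the polylogarithmic bounds come from.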
**Wednesday, October 12, 2022**: **Ainesh Bakshi** (MIT). [Time: 4-5pm EDT: 32-G575]
**Topic.** Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products
**Abstract.** In this talk, we consider iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm. Here, given access to a matrix $A$ through matrix-vector products, an accuracy parameter $\epsilon$, and a target rank $k$, the goal is to find a rank-$k$ matrix $Z$ with orthonormal columns such that $\| A (I - ZZ^\top ) \|_{S_p} \leq (1+\epsilon)\min_{U^\top U = I_k }\| A (I - U U^\top)\|_{S_p}$, where $\| M \|_{S_p}$ denotes the $\ell_p$ norm of the singular values of $M$. For the special cases of $p=2$ (Frobenius norm) and $p = \infty$ (spectral norm), Musco and Musco (NeurIPS 2015) obtained an algorithm based on Krylov methods that uses $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products, improving on the naïve $\tilde{O}(k/\epsilon)$ dependence obtainable by the power method. Our main result is an algorithm that uses only $\tilde{O}(kp^{1/6}/\epsilon^{1/3})$ matrix-vector products, and works for \emph{all}, not necessarily constant, $p \geq 1$. For $p = 2$, our bound improves the previous $\tilde{O}(k/\epsilon^{1/2})$ bound to $\tilde{O}(k/\epsilon^{1/3})$. Since the Schatten-$p$ and Schatten-$\infty$ norms of any matrix are the same up to a $1+\epsilon$ factor when $p \geq (\log d)/\epsilon$, our bound recovers the result of Musco and Musco for $p = \infty$. Further, we prove a matrix-vector query lower bound of $\Omega(1/\epsilon^{1/3})$ for \emph{any} fixed constant $p \geq 1$, showing that, surprisingly, $\tilde{\Theta}(1/\epsilon^{1/3})$ is the optimal complexity for constant $k$. To obtain our results, we introduce several new techniques, including optimizing over multiple Krylov subspaces simultaneously, and pinching inequalities for partitioned operators. Our lower bound for $p \in [1,2]$ uses the Araki-Lieb-Thirring trace inequality, whereas for $p>2$, we appeal to a norm-compression inequality for aligned partitioned operators. Based on joint work with Ken Clarkson and David Woodruff.
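The naive $\tilde{O}(k/\epsilon)$ baseline mentioned in the abstract is the power method. For $k = 1$ it looks as follows, accessing $A$ only through matrix-vector products (the access model in which the talk's upper and lower bounds are stated); this is a generic sketch, not the talk's Krylov algorithm.

```python
import math, random

def power_method(matvec, matvec_t, n, iters=100):
    # Power iteration on A^T A, touching A only via v -> A v and
    # u -> A^T u: two matrix-vector products per iteration. Converges
    # to the top right singular vector (given a spectral gap).
    v = [random.gauss(0, 1) for _ in range(n)]
    for _ in range(iters):
        w = matvec_t(matvec(v))  # w = A^T A v
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient v^T A^T A v gives sigma_max^2.
    sigma_sq = sum(x * y for x, y in zip(v, matvec_t(matvec(v))))
    return math.sqrt(sigma_sq), v
```

Krylov methods keep all the iterates $v, A^TAv, (A^TA)^2v, \dots$ and optimize over their span instead of discarding history, which is what buys the improved $\epsilon$ dependence.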
**Wednesday, October 26, 2022**: **Jessica Shi** (MIT). [Time: 4-5pm EDT: 32-G575]
**Topic.** Theoretically and Practically Efficient Parallel Nucleus Decomposition
**Abstract.** This talk studies the nucleus decomposition problem, which has been shown to be useful in finding dense substructures in graphs. We present a novel parallel algorithm that is efficient both in theory and in practice. Our algorithm achieves a work complexity matching the best sequential algorithm while also having low depth (parallel running time), which significantly improves upon the only existing parallel nucleus decomposition algorithm (Sariyuce et al., PVLDB 2018). The key to the theoretical efficiency of our algorithm is the use of theoretically-efficient parallel algorithms for clique listing and bucketing. We introduce several new practical optimizations, including a new multi-level hash table structure to store information on cliques space-efficiently, and a technique for traversing this structure cache-efficiently. On a 30-core machine with two-way hyper-threading, on real-world graphs, we achieve up to a 55x speedup over the state-of-the-art parallel nucleus decomposition algorithm by Sariyuce et al., and up to a 40x self-relative parallel speedup. We are able to efficiently compute larger nucleus decompositions than prior work on several million-scale graphs for the first time.
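Nucleus decomposition generalizes k-core and k-truss decomposition from edges to cliques. The simplest member of the family, sequential k-core peeling, is easy to sketch (quadratic-time for clarity); the talk's contribution is a work-efficient, low-depth parallel algorithm for the clique-based generalization of this peeling process.

```python
def core_numbers(adj):
    # Peel vertices in order of minimum remaining degree. The core
    # number of v is the largest k such that v survives in a subgraph
    # of minimum degree k.
    deg = {v: len(ns) for v, ns in adj.items()}
    core, remaining, k = {}, set(adj), 0
    while remaining:
        v = min(remaining, key=lambda u: deg[u])
        k = max(k, deg[v])
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return core
```

Parallelizing this is subtle precisely because the peeling order looks inherently sequential; bucketing vertices by current degree and peeling whole buckets at once is the standard way in, which the nucleus setting extends to cliques.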
**Tuesday, November 8, 2022**: **Jad Silbak** (Tel Aviv University). [Time: 4-5pm EST: 32-D507 (special location!)]
**Topic.** On the Complexity of Two-Party Differential Privacy
**Abstract.** In distributed differential privacy, the parties perform analysis over their joint data while preserving the privacy of both datasets. Interestingly, for a few fundamental two-party functions such as inner product and Hamming distance, the accuracy of the distributed solution lags far behind what is achievable in the client-server setting. McGregor, Mironov, Pitassi, Reingold, Talwar, and Vadhan [FOCS '10] proved that this gap is inherent, showing upper bounds on the accuracy of (any) distributed solution for these functions. These limitations can be bypassed when settling for computational differential privacy, where the data is differentially private only in the eyes of a computationally bounded observer, using oblivious transfer. We prove that the use of public-key cryptography is necessary for bypassing the limitation of McGregor et al., showing that a non-trivial solution for the inner product, or the Hamming distance, implies the existence of a key-agreement protocol. Our bound implies a combinatorial proof of the fact that the non-Boolean inner product of independent (strong) Santha-Vazirani sources is a good condenser. We obtain our main result by showing that the inner product of a (single, strong) SV source with a uniformly random seed is a good condenser, even when the seed and source are dependent.
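For intuition about the two-party setting, here is a simple information-theoretic distributed protocol based on randomized response (my own illustration, not a protocol from the paper): each party publishes a privatized copy of its bits, and anyone can form an unbiased inner-product estimate from the noisy copies. For constant eps its additive error grows polynomially in sqrt(n), in line with the kind of limitation for distributed protocols that the talk shows cannot be beaten without public-key cryptography.

```python
import math, random

def randomized_response(bits, eps):
    # Flip each bit independently with probability 1/(1 + e^eps):
    # an eps-differentially-private local randomizer per bit.
    p = 1.0 / (1.0 + math.exp(eps))
    return [b ^ (random.random() < p) for b in bits]

def estimate_inner_product(x, y, eps):
    # Each party publishes its privatized bits; debiasing each noisy
    # coordinate (E[y_i] = p + b_i (1 - 2p)) yields an unbiased
    # estimate of <x, y> from the public transcripts alone.
    p = 1.0 / (1.0 + math.exp(eps))
    nx = randomized_response(x, eps)
    ny = randomized_response(y, eps)
    debias = lambda b: (b - p) / (1 - 2 * p)
    return sum(debias(a) * debias(b) for a, b in zip(nx, ny))
```

In the client-server model a single Laplace-noised answer achieves O(1/eps) error for the same query, which is exactly the gap under discussion.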
**Wednesday, November 9, 2022**: **Mikhail Makarov** (EPFL). [Time: 4-5pm EST: 32-G575]
**Topic.** Motif Cut Sparsification
**Abstract.** A motif is a frequently occurring subgraph of a given directed or undirected graph $G$. Motifs capture higher-order organizational structure of $G$ beyond edge relationships, and have therefore found wide application in graph clustering, community detection, and the analysis of biological and physical networks, to name a few. In these applications, the cut structure of motifs plays a crucial role: vertices are partitioned into clusters by cuts whose conductance is based on the number of instances of a particular motif, as opposed to just the number of edges, crossing the cuts. In this work, we introduce the concept of a motif cut sparsifier. We show that one can compute in polynomial time a sparse weighted subgraph $G'$ with only $\widetilde{O}(n/\epsilon^2)$ edges such that for every cut, the weighted number of copies of $M$ crossing the cut in $G'$ is within a $1+\epsilon$ factor of the number of copies of $M$ crossing the cut in $G$, for every constant-size motif $M$. Our work carefully combines the viewpoints of both graph sparsification and hypergraph sparsification. We sample edges, which requires us to extend and strengthen the concept of cut sparsifiers introduced in seminal earlier work to the motif setting. The task of adapting the importance sampling framework common to efficient graph sparsification algorithms to the motif setting turns out to be nontrivial, because cut sizes in a random subgraph of $G$ depend non-linearly on the sampled edges. To overcome this, we adopt the viewpoint of hypergraph sparsification and define edge sampling probabilities derived from the strong connectivity values of a hypergraph whose hyperedges represent motif instances. Finally, an iterative sparsification primitive inspired by both viewpoints is used to reduce the number of edges in $G$ to nearly linear. In addition, we present a strong lower bound ruling out a similar result for sparsification with respect to induced occurrences of motifs. Joint work with Michael Kapralov, Sandeep Silwal, Christian Sohler and Jakab Tardos.
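The quantity a motif cut sparsifier must preserve can be made concrete for the triangle motif. The brute-force counter below (cubic-time, purely illustrative) computes, for a given two-sided vertex partition, the number of triangles with at least one edge crossing the cut; a sparsifier must preserve this (weighted) count up to 1 +/- eps for *every* cut simultaneously.

```python
from itertools import combinations

def triangles_crossing_cut(edges, side):
    # side[v] in {0, 1} gives the cut; a triangle "crosses" if its
    # three vertices are not all on the same side.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for a, b, c in combinations(sorted(adj), 3):
        if b in adj[a] and c in adj[a] and c in adj[b]:
            if len({side[a], side[b], side[c]}) > 1:
                count += 1
    return count
```

The non-linearity that makes motif sparsification hard is visible here: dropping one edge can destroy many triangle instances at once, unlike in ordinary cut sparsification.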
**Monday, November 14, 2022**: **Haotian Jiang** (University of Washington). [Time: 4-5pm EST: 32-G575]
**Topic.** Classical and New Tools for Discrepancy Theory, and What They Say About the Matrix Spencer Conjecture
**Abstract.** This talk is centered around the Matrix Spencer Conjecture, which states that given symmetric n by n matrices A_1, ..., A_n, each with spectral norm at most 1, one can efficiently find \pm 1 signs x_1, ..., x_n such that their signed sum has spectral norm, or discrepancy, \|\sum_{i=1}^n x_i A_i\|_op = O(\sqrt{n}). Randomly choosing the signs x_i results in discrepancy O(\sqrt{n \log n}), and the conjecture says that the extraneous \sqrt{\log n} factor from random signing can be removed, generalizing one of the most foundational results in discrepancy theory by [Spencer, 1985]. In this talk, I will present a proof of this conjecture when the matrices A_i have rank at most n/\log^3 n. This resolves the Matrix Spencer Conjecture up to poly-logarithmic rank, and implies a nearly-optimal qubit lower bound for quantum random access codes encoding n classical bits with advantage >> 1/\sqrt{n}. I will introduce the two central tools behind the proof of this result: the classical tool of the partial coloring method for convex sets from [Gluskin, 1989] and [Giannopoulos, 1997], which is widely used in discrepancy theory, and the new tool of sharp matrix concentration inequalities from [Bandeira, Boedihardjo, van Handel, 2022] for random matrices with correlated Gaussian entries. This talk is based on joint work with Nikhil Bansal and Raghu Meka, available at https://arxiv.org/abs/2208.11286.
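The diagonal special case makes the conjecture concrete: when each A_i is a diagonal \pm 1 matrix (spectral norm 1), the spectral norm of a signed sum is just the largest absolute column sum, so Matrix Spencer specializes to Spencer's classic set discrepancy setting. A toy random-signing experiment:

```python
import random

def random_signing_discrepancy(vectors):
    # vectors: n rows of +/-1 entries, the diagonals of the matrices
    # A_1, ..., A_n. With random signs x_i the discrepancy is
    # O(sqrt(n log n)) w.h.p.; Spencer's theorem guarantees that some
    # signing achieves O(sqrt(n)).
    n = len(vectors)
    x = [random.choice((-1, 1)) for _ in range(n)]
    return max(abs(sum(x[i] * vectors[i][j] for i in range(n)))
               for j in range(len(vectors[0])))
```

For non-commuting A_i no such coordinate-wise shortcut exists, which is why the matrix version needs the partial coloring and matrix concentration machinery above.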
## Theory Calendar

Add this calendar to yours to receive details about upcoming events in the Theory group.
## Earlier Editions of A&C Seminar