In this talk, a new type of logic characterization vehicle (LCV) that simultaneously optimizes design, test, and diagnosis for yield learning is described. The Carnegie Mellon LCV is a test chip composed of logic standard cells that uses constant-testability theory and logic/layout diversity to create a parameterized design exhibiting both the front- and back-end demographics of a product-like, customer design. Analysis of various CMU-LCV designs (one of which has >4M gates) demonstrates that design time, density, test, and diagnosis can all be optimized simultaneously.
This talk will focus on how my undergraduate experiences in computer science and literature influenced my career path, and will detail the technical and production processes behind computer animation and app development, drawing on my work at Pixar, at Disney, and as an indie app developer.
Suppose a collection of indivisible goods are arranged in a line, and we wish to allocate these items to agents so that each agent receives a connected bundle (an interval). This makes sense when the items have a spatial or temporal structure, for example when allocating time slots, and agents are interested in receiving a contiguous chunk of time. We study the computation of Pareto-optimal (PO) allocations with connected bundles, and find some surprising hardness results. We further study the existence of allocations that are envy-free up to one good (EF1).
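As an illustration of the EF1 criterion the abstract mentions (not taken from the talk): under additive valuations, an allocation is envy-free up to one good if every agent's envy toward another agent disappears after removing some single item from the other's bundle. The sketch below checks EF1 for an allocation of items on a line; the bundle lists in the example are contiguous intervals, matching the connected-bundle setting. Function and variable names are illustrative.

```python
def is_ef1(valuations, bundles):
    """Check envy-freeness up to one good (EF1) under additive valuations.

    valuations[i][g]: agent i's value for item g.
    bundles[i]: list of item indices held by agent i (an interval of the
    line in the connected-bundle setting).
    """
    n = len(bundles)
    for i in range(n):
        own = sum(valuations[i][g] for g in bundles[i])
        for j in range(n):
            if i == j:
                continue
            other = sum(valuations[i][g] for g in bundles[j])
            if own >= other:
                continue  # no envy toward j
            # Drop the single item in j's bundle that agent i values most.
            best_drop = max((valuations[i][g] for g in bundles[j]), default=0)
            if own < other - best_drop:
                return False  # envy survives every single-item removal
    return True
```

For example, two agents valuing three identical items at 3 each, with interval bundles `[0, 1]` and `[2]`, form an EF1 (though not envy-free) allocation.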
The cost of genome sequencing has decreased more than 100,000-fold over the last decade. This genomic revolution is now enabling us to measure how our genomes vary at millions of positions across millions of individuals, opening up the possibility of answering fundamental questions in human genetics. I will describe our work at the intersection of statistics, computer science, and genomics aimed at leveraging these large-scale genomic datasets to answer questions such as how human populations evolved and which genes underlie disease.
Single-particle cryo-electron microscopy (cryo-EM) is an innovative technology for elucidating structures of biological molecules at atomic-scale resolution. In a cryo-EM experiment, tomographic projections of a molecule, taken at unknown viewing directions, are embedded in highly noisy images at unknown locations. The cryo-EM problem is to estimate the 3-D structure of a molecule from these noisy images. Inspired by cryo-EM, the talk will focus on two estimation problems: multi-reference alignment and blind deconvolution.
Computing is cheap and getting cheaper while spectrum is always scarce. Massive MIMO technology employs a large number of antennas and massive computational power at the base station so that the latter can serve many users concurrently, reusing the same spectrum spatially. Despite its theoretical appeal, massive MIMO poses nontrivial scalability challenges to the design and implementation of both the base station and network systems.
Our group is interested in the development of new agents that can overcome resistance in pathogenic bacteria, mycobacteria, and viruses. Central to our work is understanding the structural and mechanistic basis of resistance and how those principles can be exploited in drug design efforts.
The sensitivity metric in differential privacy, which is informally defined as the largest marginal change in output between neighboring databases, is of substantial significance in determining the accuracy of private data analyses. Techniques for improving accuracy when the average sensitivity is much smaller than the worst-case sensitivity have been developed within the differential privacy literature, including tools such as smooth sensitivity, Sample-and-Aggregate, Propose-Test-Release, and Lipschitz extensions. In this work, we provide a new and general Sensitivity-Preprocessing framework.
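To illustrate why sensitivity governs accuracy (an illustrative sketch, not part of the talk's contribution): the classical Laplace mechanism adds noise whose scale is the worst-case sensitivity divided by the privacy parameter, so any technique that certifies a smaller effective sensitivity directly reduces the noise.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value plus Laplace(0, sensitivity/epsilon) noise.

    The noise scale grows with the certified sensitivity bound, which is
    why replacing worst-case sensitivity with a smaller effective bound
    yields more accurate private answers.
    """
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# A counting query ("how many rows satisfy P?") has worst-case
# sensitivity 1: one individual changes the count by at most 1.
```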
In response to the challenges of modern data analysis, two algorithmic paradigms have emerged as common solutions to many problems: streaming algorithms and distributed algorithms. In this talk, we will discuss challenges and algorithms in these two settings. In streaming, we focus on the problem of finding heavy hitters, a fundamental problem on its own and a major building block in many other algorithms. In the distributed setting, we focus on problems in distributed machine learning and show both efficient algorithms and lower bounds on the minimum resources required.
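As a concrete instance of the heavy-hitters problem the abstract highlights (an illustrative classic, not necessarily the talk's algorithm): the deterministic Misra-Gries summary maintains at most k-1 counters over a stream and guarantees that every item occurring more than n/k times in a stream of length n survives as a candidate.

```python
def misra_gries(stream, k):
    """Misra-Gries heavy-hitters summary using at most k-1 counters.

    Guarantee: any item with frequency > len(stream)/k appears among the
    returned candidate counters (counts are underestimates).
    """
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement all counters; evict those that hit zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```

The summary uses O(k) space regardless of stream length, the hallmark of the streaming setting.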
Submodular optimization problems arise in many domains in Computer Science, Mathematics, and Economics. Due to their unusual mix of generality and tractability, they have played a central role in discrete optimization and they have witnessed wide adoption in applications in data mining, machine learning, and computer vision. In this talk, we explore fundamental algorithmic challenges in submodular optimization, ranging from understanding the approximability of central problems to designing very efficient and parallel algorithms.
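The "mix of generality and tractability" the abstract refers to is exemplified by the classical greedy algorithm (shown here as an illustrative sketch, not the talk's own contribution): for maximizing a monotone submodular function such as set coverage under a cardinality constraint, greedily taking the element of largest marginal gain achieves a (1 - 1/e)-approximation.

```python
def greedy_max_coverage(sets, k):
    """Greedy algorithm for max k-cover, a canonical monotone submodular
    maximization problem; achieves a (1 - 1/e)-approximation."""
    chosen, covered = [], set()
    for _ in range(k):
        # Pick the set with the largest marginal coverage gain.
        best = max(sets, key=lambda s: len(set(s) - covered), default=None)
        if best is None or not (set(best) - covered):
            break  # no remaining set adds new elements
        chosen.append(best)
        covered |= set(best)
    return chosen, covered
```

Diminishing marginal gains (submodularity) are exactly what makes this simple greedy rule provably near-optimal.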
Neural Data Structures and their Interrelationship with Algorithms: Insights from how the Brain Links Visual and Auditory Space
Controlled experiments, or A/B tests, are the gold standard for optimizing websites. From Amazon's checkout flow to Google's search results, A/B tests can help companies improve workflows to download software, sign up for subscriptions, click on content, or make a purchase. But controlled experiments have roots far beyond optimizing websites for conversion. At their simplest, controlled experiments help establish the causal effect of a change on a target audience.
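At its core, analyzing an A/B test means comparing conversion rates between a control and a treatment group. The sketch below is an illustrative two-proportion z-test (a standard textbook analysis, not any particular company's tooling); the function name and arguments are hypothetical.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between
    control (A) and treatment (B). Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value: 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2)).
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value
```

For instance, 100 conversions out of 1,000 in control versus 150 out of 1,000 in treatment yields a highly significant difference; randomized assignment is what lets this difference be read causally.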
Latent variable models are widely used in applications ranging from natural language processing to recommender systems. Exact inference using maximum likelihood for these models is generally NP-hard, and computationally prohibitive on big and/or high-dimensional data. This has motivated the development of approximate inference methods that balance between computational complexity and statistical efficiency. Understanding the computational and statistical tradeoff is important for analyzing approximate inference approaches as well as designing new ones.
Today, most application developers write code without much regard for how quickly it will run. Moreover, once the code is written, it is rare for it to be reengineered to run faster. But two technology trends of historic proportions are instigating a resurgence in software performance engineering, the art of making code run fast. The first is the emergence of cloud computing, where the economics of renting computation, as opposed to buying it, heightens the utility of application speed.
Policy gradient methods for reinforcement learning and continuous control are popular in practice and have contributed to recent advances in robotic navigation and game playing. However, they lack theoretical guarantees even for the simplest case of linear dynamics and a quadratic cost, the Linear Quadratic Regulator (LQR) problem. A difficulty is that, unlike the classical approaches to LQR, these methods must solve a nonconvex optimization problem to find the optimal control policy.
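To make the LQR setting concrete (an illustrative scalar sketch, not the talk's analysis): the dynamics are x_{t+1} = a x_t + b u_t, the cost is the sum of q x_t^2 + r u_t^2, and direct policy search optimizes over linear feedback gains u_t = -k x_t by evaluating rollout cost, the kind of model-free objective policy gradient methods work with.

```python
def lqr_cost(a, b, q, r, k, x0=1.0, horizon=200):
    """Finite-horizon rollout cost of the linear policy u_t = -k * x_t
    for scalar LQR dynamics x_{t+1} = a*x_t + b*u_t,
    with stage cost q*x^2 + r*u^2."""
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = -k * x
        cost += q * x * x + r * u * u
        x = a * x + b * u
    return cost

# Naive direct policy search over the gain: pick the k with the lowest
# rollout cost (a = 1.2 makes the uncontrolled system unstable).
best_k = min((i / 100 for i in range(21, 200)),
             key=lambda k: lqr_cost(1.2, 1.0, 1.0, 1.0, k))
```

Even in this one-dimensional case the cost is a nonconvex function of the gain over the set of stabilizing policies, which is the obstacle the abstract alludes to.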
We study the problem of fairly allocating a set of indivisible items among $n$ agents. Typically, the literature has focused on one-shot algorithms. In this talk we depart from this paradigm and allow items to arrive online. When an item arrives we must immediately and irrevocably allocate it to an agent. A paradigmatic example is that of food banks: food donations arrive, and must be delivered to nonprofit organizations such as food pantries and soup kitchens.
When you think of McDonald’s, the Golden Arches, a Big Mac, or a favorite Happy Meal memory likely spring to mind; but you might not immediately think of the McDonald’s App. The US Digital team is working to change that. McDonald’s is transforming their business, driven by digital. From revolutionizing ordering to personalizing offers, the US Digital team is using data and technology to improve the customer experience and drive sales growth. Come listen to Hashim Amin, Head of Digital – US Market, speak about McDonald’s digital journey.
The Graph Matching problem is a robust version of the Graph Isomorphism problem: given two not-necessarily-isomorphic graphs, the goal is to find a permutation of the vertices which maximizes the number of common edges. We study a popular average-case variant; we deviate from the common heuristic strategy and give the first quasi-polynomial time algorithm, where previously only sub-exponential time algorithms were known.
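The objective in question can be stated compactly: maximize, over all vertex permutations, the number of edges of the first graph that map to edges of the second. The brute-force sketch below (illustrative only; it enumerates all n! permutations, whereas the talk's point is a quasi-polynomial algorithm for the average case) makes the objective precise for tiny graphs.

```python
from itertools import permutations

def graph_matching_bruteforce(edges1, edges2, n):
    """Maximize common edges over all vertex permutations (tiny n only).

    edges1, edges2: iterables of undirected edges (u, v) on 0..n-1.
    Returns (best_count, best_permutation).
    """
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    best, best_pi = -1, None
    for pi in permutations(range(n)):
        # Count edges of G1 whose image under pi is an edge of G2.
        common = sum(1 for e in e1 if frozenset(pi[v] for v in e) in e2)
        if common > best:
            best, best_pi = common, pi
    return best, best_pi
```

When the two graphs are actually isomorphic, the maximum equals the edge count, recovering Graph Isomorphism as the exact special case of this robust version.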