Datacenters host a wide range of today's low-latency applications. To meet their strict latency requirements at scale, datacenter networks are designed as topologies that provide a large number of parallel paths between each pair of hosts. The recent trend towards simple datacenter network fabrics strips most network functionality, including load balancing among these paths, out of the network core and pushes it to the edge. This slows the reaction to microbursts, the main culprit of packet loss -- and consequently performance degradation -- in datacenters.
Engineering complex systems becomes even more challenging in the presence of strategic agents whose behavior is guided by their own incentives. Examples include important applications such as scheduling jobs in the cloud, the design of road networks, spectrum auctions, online markets, and learning with crowdsourced data. How does one design a system so that the designer's objective is achieved robustly despite the existence of strategic behavior?
Machine learning is fundamentally changing how software is developed. Rather than program behavior directly, many developers now curate training data and engineer features, but the process is slow, laborious, and expensive. In this talk I will describe two multi-year projects to study how high-level knowledge can be programmed more directly into statistical machine learning models. The resulting prototypes are used in dozens of major technology companies and research labs, and in collaboration with government agencies like the U.S.
Realizing the vision of the fully connected world — the Internet of Things (IoT) — requires advances in multiple areas. Energy harvesting and fog/edge computing can bring everyday objects to life in complementary ways: by using the environment to make the IoT nodes smaller and lighter, and by bringing advanced computing capabilities closer to the nodes to make them more adaptive and intelligent.
Protecting sensitive user data and proprietary programs is a fundamental and important challenge. For instance, when users outsource their private data to the cloud, they risk leakage of the data in the event of a data breach; encrypting their data is not a workable solution since it impedes the cloud provider’s ability to offer user-specific services. When companies execute proprietary programs on third-party cloud providers, they similarly face the risk of leaking trade secrets.
We are at an exciting point in the evolution of memory technology. Device manufacturers have created a new non-volatile memory (NVM) technology that can serve as both system memory and storage. NVM supports fast reads and writes similar to volatile memory, but all writes to it are persistent like a solid-state disk. The advent of NVM invalidates decades of design decisions that are deeply embedded in today's database management systems (DBMSs).
Failures in medical devices, banking software, and transportation systems have led to significant financial costs and even loss of life. Researchers have developed sophisticated methods to monitor and understand many of the complex system misbehaviors behind these bugs, but the computational costs of these methods (often an order of magnitude or more) prohibit their use in production. The result is an ecosystem of critical software with little guaranteed protection and no method of reconciling misbehaviors.
Typical analysis of learning algorithms considers their outcome in isolation from the effects that they may have on the process that generates the data or the entity that is interested in learning. However, current technological trends mean that people and organizations increasingly interact with learning systems, making it necessary to consider these effects, which fundamentally change the nature of learning and the challenges involved.
A great deal of attention has been devoted to studying new and better ways to perform learning tasks involving static finite vectors. Indeed, over the past century the fields of statistics and machine learning have amassed a vast understanding of various learning tasks like clustering, classification, and regression using simple real-valued vectors. However, we do not live in a world of simple objects.
Artificial intelligence has begun to impact healthcare in areas including electronic health records, medical images, and genomics. But one aspect of healthcare that has been largely left behind thus far is the physical environments in which healthcare delivery takes place: hospitals and assisted living facilities, among others. In this talk I will discuss my work on endowing hospitals with ambient intelligence, using computer vision-based human activity understanding in the hospital environment to assist clinicians with complex care.
The recent proliferation of acoustic devices, ranging from voice assistants to wearable health monitors, is leading to a sensing ecosystem around us – referred to as the Internet of Acoustic Things or IoAT. My research focuses on developing hardware-software building blocks that enable new capabilities for this emerging future. In this talk, I will sample some of my projects. For instance, (1) I will demonstrate carefully designed sounds that are completely inaudible to humans but recordable by all microphones.
The increasingly interconnected cyber-ecosystem invites cybercriminals to advance their ill-intentioned missions by launching cyber-attacks. From high-profile data breaches with impact on billions of users to hacks into political organizations that undermine the pillars of modern democracies, from infiltration of mission-critical infrastructures to banking trojans and ransomware campaigns, cybercrime continues to find its way to our sensitive data, finances, and digital identity.
Hardware plays a critical role in today's security landscape. Every protocol with security or privacy guarantees inevitably includes some hardware in its trusted computing base. The increasing number of vulnerability disclosures calls for a more rigorous approach to secure hardware designs. In this talk, I will present several cryptographic primitives to enhance the security of hardware.
This talk will provide an overview of techniques developed in my group to enable robots to react rapidly to changes in the environment when manipulating objects. Learning is guided by observing humans' elaborate manipulation skills. I will stress how important it is to model the various ways in which humans perform the same task. This multiplicity of solutions is the key to generating robust and flexible robotic controllers capable of adapting their strategies in the face of unexpected changes in the environment.
In the past decade there has been a significant increase in the collection of personal information and communication metadata (with whom users communicate, when, how often) by governments, Internet providers, companies, and universities. While there are many ongoing efforts to secure users' communications, such as end-to-end encrypted messaging apps and email services, safeguarding metadata remains elusive.
The reconstruction of 3D scenes and their appearance from imagery is one of the longest-standing problems in computer vision. Originally developed to support robotics and artificial intelligence applications, it has found some of its most widespread use in support of interactive 3D scene visualization. One of the keys to this success has been the melding of 3D geometric and photometric reconstruction with a heavy re-use of the original imagery, which produces more realistic rendering than a pure 3D model-driven approach.