Protecting sensitive user data and proprietary programs is a fundamental and important challenge. For instance, when users outsource their private data to the cloud, they risk leakage of the data in the event of a data breach; encrypting their data is not a workable solution since it impedes the cloud provider’s ability to offer user-specific services. When companies execute proprietary programs on third-party cloud providers, they similarly face the risk of leaking trade secrets.
We are at an exciting point in the evolution of memory technology. Device manufacturers have created a new non-volatile memory (NVM) technology that can serve as both system memory and storage. NVM supports fast reads and writes similar to volatile memory, but all writes to it are persistent like a solid-state disk. The advent of NVM invalidates decades of design decisions that are deeply embedded in today's database management systems (DBMSs).
Failures in medical devices, banking software, and transportation systems have led to significant fiscal costs and even loss of life. Researchers have developed sophisticated methods to monitor and understand many of the complex system misbehaviors behind these bugs, but their computational costs (often an order of magnitude or more in overhead) prohibit their use in production, leaving an ecosystem of critical software with little guaranteed protection and no method of reconciling misbehaviors.
Typical analysis of learning algorithms considers their outcome in isolation from the effects that they may have on the process that generates the data or the entity that is interested in learning. However, current technological trends mean that people and organizations increasingly interact with learning systems, making it necessary to consider these effects, which fundamentally change the nature of learning and the challenges involved.
A great deal of attention has been devoted to studying new and better ways to perform learning tasks involving static finite vectors. Indeed, over the past century the fields of statistics and machine learning have amassed a vast understanding of various learning tasks like clustering, classification, and regression using simple real-valued vectors. However, we do not live in a world of simple objects.
Artificial intelligence has begun to impact healthcare in areas including electronic health records, medical images, and genomics. But one aspect of healthcare that has been largely left behind thus far is the physical environments in which healthcare delivery takes place: hospitals and assisted living facilities, among others. In this talk I will discuss my work on endowing hospitals with ambient intelligence, using computer vision-based human activity understanding in the hospital environment to assist clinicians with complex care.
The recent proliferation of acoustic devices, ranging from voice assistants to wearable health monitors, is leading to a sensing ecosystem around us – referred to as the Internet of Acoustic Things or IoAT. My research focuses on developing hardware-software building blocks that enable new capabilities for this emerging future. In this talk, I will sample some of my projects. For instance, (1) I will demonstrate carefully designed sounds that are completely inaudible to humans but recordable by all microphones.
The increasingly interconnected cyber-ecosystem invites cybercriminals to advance their ill-intentioned missions by launching cyber-attacks. From high-profile data breaches with impact on billions of users to hacks into political organizations that undermine the pillars of modern democracies, from infiltration of mission-critical infrastructures to banking trojans and ransomware campaigns, cybercrime continues to find its way to our sensitive data, finances, and digital identity.
Internet-of-Things (IoT) devices serve as gateways connecting the digital world to the physical world. Although we already have powerful tools to understand various data in the digital world, IoT devices are currently insufficient for capturing and processing large amounts of physical world data. In this talk, I will present two systems we built to address this problem. I will first present a low-power backscatter radio system designed for IoT. Our radio consumes 10–100x less power than existing WiFi radios.
Novel Computational Protein Design Algorithms with Sparse Residue Interaction Graphs, Ensembles, and Mathematical Guarantees, and their Application to Antibody Design
Computational structure-based protein design seeks to harness the incredible biological power of proteins by designing proteins with new structures and even new function. In this dissertation, we present new algorithms to more efficiently search over two models of protein design: design with sparse residue interaction graphs, and design with conformational ensembles. These algorithms build upon existing provable algorithms: they retain all mathematical guarantees of preceding provable methods while providing both efficiency gains and novel theoretical results.
Hardware plays a critical role in today's security landscape. Every protocol with security or privacy guarantees inevitably includes some hardware in its trusted computing base. The increasing number of vulnerability disclosures calls for a more rigorous approach to secure hardware designs. In this talk, I will present several cryptographic primitives to enhance the security of hardware.
To protect the billions of computers running countless programs, security researchers have pursued automated vulnerability detection and remediation techniques, attempting to scale such analyses beyond the limitations of human hackers. However, although these techniques will mitigate, or even eliminate, the bottleneck that human effort represented in these areas, the human bottleneck (and human fallibility) remains in the higher-level strategy of what to do with automatically identified vulnerabilities, automatically created exploits, and automatically generated patches.
One vision of the Internet of Things (IoT) is to provide seamless connectivity and sensing. IoT devices are deployed densely in space to enable ubiquitous intelligence, and are connected wirelessly to support high-throughput data exchange. These devices are also becoming increasingly mobile, as in IoT-powered inventory management, personal robots, and autonomous cars. However, the current network stack lacks primitives to support the desired connectivity, management, and services of densely deployed mobile IoT devices.
This talk will provide an overview of techniques developed in my group to enable robots to react rapidly in the face of changes in the environment when manipulating objects. Learning is guided by observing humans’ elaborate manipulation skills. I will stress how important it is to model the various ways in which humans perform the same task. This multiplicity of solutions is the key to generating robust and flexible robotic controllers capable of adapting their strategies in the face of unexpected changes in the environment.
In the past decade there has been a significant increase in the collection of personal information and communication metadata (with whom users communicate, when, how often) by governments, Internet providers, companies, and universities. While there are many ongoing efforts to secure users' communications, namely end-to-end encrypted messaging apps and email services, safeguarding metadata remains elusive.
Following the progress in computing and machine learning algorithms as well as the emergence of big data, artificial intelligence (AI) has become a reality impacting every facet of our algorithmic society. Despite the explosive growth of machine learning, the common misconception persists that because machines operate on zeros and ones, they must be objective. But then, why does Google Translate convert these Turkish sentences with gender-neutral pronouns, “O bir doktor. O bir hemşire”, to these English sentences, “He is a doctor. She is a nurse”?
TCP is widely used for client-server communication in modern data centers. But TCP packet handling is notoriously CPU intensive, accounting for an increasing fraction of data center processing time. Techniques such as TCP segment offload, kernel bypass, and RDMA are of limited benefit for the typical small, frequent RPCs. These techniques can also compromise protocol agility, resource isolation, and overall system reliability, and complicate multi-tenancy.
The reconstruction of 3D scenes and their appearance from imagery is one of the longest-standing problems in computer vision. Originally developed to support robotics and artificial intelligence applications, it has found some of its most widespread use in support of interactive 3D scene visualization. One of the keys to this success has been the melding of 3D geometric and photometric reconstruction with a heavy re-use of the original imagery, which produces more realistic rendering than a pure 3D model-driven approach.