Realizing the vision of the fully connected world — the Internet of Things (IoT) — requires advances in multiple areas. Energy harvesting and fog/edge computing can bring everyday objects to life in complementary ways: by using the environment to make the IoT nodes smaller and lighter, and by bringing advanced computing capabilities closer to the nodes to make them more adaptive and intelligent.
Machine learning is fundamentally changing how software is developed. Rather than program behavior directly, many developers now curate training data and engineer features, but the process is slow, laborious, and expensive. In this talk I will describe two multi-year projects to study how high-level knowledge can be programmed more directly into statistical machine learning models. The resulting prototypes are used in dozens of major technology companies and research labs, and in collaboration with government agencies like the U.S.
Engineering complex systems becomes even more challenging in the presence of strategic agents whose behavior is guided by their own incentives. Examples include important applications such as scheduling jobs in the cloud, the design of road networks, spectrum auctions, online markets, and learning with crowdsourced data. How does one design a system so that the designer's objective is achieved robustly despite the existence of strategic behavior?
Datacenters host a wide range of today's low-latency applications. To meet their strict latency requirements at scale, datacenter networks are designed as topologies that can provide a large number of parallel paths between each pair of hosts. The recent trend towards simple datacenter network fabric strips most network functionality, including load balancing among these paths, out of the network core and pushes it to the edge. This slows reaction to microbursts, the main culprit of packet loss -- and consequently performance degradation -- in datacenters.
Intelligent systems that are capable of understanding natural languages can have many applications from healthcare to business to law. One of the ways we can formulate natural language understanding is by treating it as a task of mapping natural language text to its meaning representation: entities and relations anchored to the world. Knowledge bases (KBs) can facilitate natural language understanding by mapping words to their meaning representations, for example nouns to entities and verbs to relations.
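As a minimal, hedged illustration of the mapping the abstract describes (not code from the talk), the sketch below links words in a sentence to toy knowledge-base entries; all entity and relation identifiers here are invented for illustration.

```python
# Toy KB linking sketch: nouns -> entities, relational words -> relations.
# The KB contents and identifiers are hypothetical, for illustration only.
TOY_KB = {
    "entities": {"Paris": "Q90", "France": "Q142"},   # noun -> entity id
    "relations": {"capital": "P36"},                  # relational word -> relation id
}

def link(tokens):
    """Map each token to its KB meaning representation, if it has one."""
    linked = []
    for tok in tokens:
        if tok in TOY_KB["entities"]:
            linked.append((tok, "entity", TOY_KB["entities"][tok]))
        elif tok in TOY_KB["relations"]:
            linked.append((tok, "relation", TOY_KB["relations"][tok]))
    return linked

print(link(["Paris", "is", "the", "capital", "of", "France"]))
# [('Paris', 'entity', 'Q90'), ('capital', 'relation', 'P36'), ('France', 'entity', 'Q142')]
```

Real systems must of course handle ambiguity and context, which a lookup table cannot; this only shows the shape of the word-to-meaning mapping.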
Recent developments in deep representation-based methods for many computer vision problems have upended research themes pursued over the last four decades. In this talk, I will discuss methods based on deep representations, adversarial learning, and domain adaptation for designing robust computer vision systems, with applications in unconstrained face and action verification and recognition, expression recognition, subject clustering, and attribute extraction.
Cryptography, originally the art of facilitating secret communication, has now evolved to enable a variety of secure tasks. Some core challenges that modern cryptography addresses are: Can we prevent adversaries from tampering with encrypted communication? Can we verify that computation is performed correctly while preserving the privacy of data on which computation occurs? Can we enable mutually distrusting participants to jointly compute on distributed private data?
Deep learning is one of the most popular learning techniques used in natural language processing (NLP). A central question in deep learning for NLP is how to design a neural network that can fully utilize the information in the training data and make accurate predictions. A key to solving this problem is designing a better network architecture. In this talk, I will present two examples from my work on how structural information from natural language helps design better neural network models.
Charles Glaser, Editorial Director for Springer, will discuss a variety of publishing opportunities with Springer, as well as the Springer Nature business model and how it enables researchers to maximize global dissemination of their work.
In recent years, progress in computer vision and machine learning has been profoundly enabled by deep neural networks. However, despite the superior performance of these networks, it remains challenging to understand their inner workings and explain their output predictions. My research has pioneered several novel approaches for elucidating the interpretable representations that emerge in networks trained to solve various vision tasks. In this talk, I will first show that objects and other meaningful concepts emerge as a consequence of recognizing scenes.
The popularity of wearable and mobile devices, including smartphones and smartwatches, has generated an explosion of detailed behavioral data. These massive digital traces provide us with an unparalleled opportunity to realize new types of scientific approaches that yield novel insights about our lives, health, and happiness.
Datathons are a new type of live-action competition for STEM students. They are analogous to "Hackathons" for software engineers, but instead of building apps, contestants use real-world data to develop and substantiate solutions to a socially impactful problem. If you are curious to see what a Datathon looks like, we encourage you to view this brief clip from our past Dublin Datathon!
Artificial neural networks, which dominate artificial intelligence applications such as object recognition and speech recognition, are still evolving. To apply neural networks to a wider range of applications, customized hardware is necessary, since CPUs and GPUs are not efficient enough. FPGAs can be an ideal platform for neural network acceleration (the inference part), since they are programmable and can achieve much higher energy efficiency than general-purpose processors.
In this talk I will discuss the problem of trying to learn the requirements and preferences of economic agents by observing the outcomes of an allocation mechanism whose rules you also don’t initially know. As an example, consider observing web pages where the agents are advertisers and the winners are those whose ads show up on the given page. We know these ads are placed based on bids and other constraints given to some auction mechanism, but we do not get to see these bids and constraints.
Over the last decade, the development of fast and reliable motion planning algorithms has deeply influenced many domains in robotics, such as industrial automation and autonomous exploration. Motion planning has also contributed to great advances in an array of unlikely fields, including graphics animation and computational structural biology.
Physical sensors (thermal, light, motion, etc.) are becoming ubiquitous and offer important benefits to society. However, allowing sensors into our private spaces has resulted in considerable privacy concerns. Differential privacy has been developed to help alleviate these privacy concerns. In this talk, we'll develop and define a framework for releasing physical data that preserves both utility and provides privacy. Our notion of closeness of physical data will be defined via the Earth Mover Distance and we'll discuss the implications of this choice.
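To make the closeness notion concrete, here is a small, hedged sketch (not from the talk) that computes the Earth Mover (Wasserstein) Distance between a true sensor sample and a hypothetical privacy-preserving release; the temperature readings are invented toy data.

```python
# Earth Mover (Wasserstein-1) Distance between two toy 1-D samples of
# physical sensor readings. Both samples are hypothetical illustrations.
from scipy.stats import wasserstein_distance

true_readings = [20.1, 20.5, 21.0]   # original physical data (toy temperatures)
released      = [20.3, 20.6, 21.4]   # hypothetical privatized release

# For equally weighted 1-D samples of the same size, this equals the mean
# absolute difference of the sorted values: (0.2 + 0.1 + 0.4) / 3.
d = wasserstein_distance(true_readings, released)
print(f"EMD between true and released readings: {d:.3f}")
```

A small distance means the released data stays useful; the privacy mechanism itself (how the noise is calibrated) is the subject of the talk and is not sketched here.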
Automation, driven by technological progress, has been increasing inexorably for the past several decades. Two schools of economic thinking have for many years been engaged in a debate about the potential effects of automation on jobs: will new technology spawn mass unemployment, as the robots take jobs away from humans? Or will the jobs robots take over create demand for new human jobs?
Classical consensus protocols have been widely deployed by companies such as Google and Facebook to replicate their computing infrastructure, although traditional deployments are usually in controlled and small-scale environments. The rise of cryptocurrencies has stimulated excitement in large-scale deployments of distributed consensus, e.g., across thousands of nodes and hundreds of (mutually distrustful) organizations. Thus the race is on for the community to create and implement large-scale consensus protocols that are ever more robust and ever more scalable.
TechConnect 2018 Spring - Cancelled
Duke TechConnect, an event hosted by the Duke University Department of Computer Science, Pratt School of Engineering, and Career Center, brings students and employers together for networking and education.