Thomas Pasquier, Xueyuan Han, Thomas Moyer, Adam Bates, Olivier Hermant, David Eyers, Jean Bacon, and Margo Seltzer. Forthcoming. “Runtime Analysis of Whole-System Provenance.” In Conference on Computer and Communications Security. Toronto, Canada: ACM.

Identifying the root cause and impact of a system intrusion remains a foundational challenge in computer security. Digital provenance provides a detailed history of the flow of information within a computing system, connecting suspicious events to their root causes. Although existing provenance-based auditing techniques provide value in forensic analysis, they assume that such analysis takes place only retrospectively. Such post-hoc analysis is insufficient for realtime security applications; moreover, even for forensic tasks, prior provenance collection systems exhibited poor performance and scalability, jeopardizing the timeliness of query responses.
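The backward, root-cause tracing that provenance enables can be illustrated with a toy sketch. The graph below is a hypothetical example of ours (node names and structure are invented, not CamQuery's data model): each node maps to the set of nodes whose information flowed into it, and a breadth-first walk over ancestry edges recovers the ultimate sources of a suspicious event.

```python
from collections import deque

# Hypothetical provenance DAG: each entry maps a node (process, file,
# or socket) to the set of nodes whose information flowed into it.
provenance = {
    "alert.log": {"backdoor_proc"},
    "backdoor_proc": {"dropper.sh", "libc.so"},
    "dropper.sh": {"curl_proc"},
    "curl_proc": {"evil.example:443"},
    "libc.so": set(),
    "evil.example:443": set(),
}

def root_causes(graph, suspicious):
    """Walk ancestry edges backwards from a suspicious event and
    return the source nodes (those with no further inputs)."""
    seen, roots = set(), set()
    queue = deque([suspicious])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        parents = graph.get(node, set())
        if not parents:
            roots.add(node)
        queue.extend(parents)
    return roots

print(sorted(root_causes(provenance, "alert.log")))
# the remote endpoint and the loaded library are the ultimate sources
```

A realtime system like the one the abstract describes would run such queries incrementally as provenance records stream in, rather than over a completed graph.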

We present CamQuery, which provides inline, realtime provenance analysis, making it suitable for implementing security applications. CamQuery is a Linux Security Module that offers support for both userspace and in-kernel execution of analysis applications. We demonstrate the applicability of CamQuery to a variety of runtime security applications including data loss prevention, intrusion detection, and regulatory compliance. In evaluation, we demonstrate that CamQuery reduces the latency of realtime query mechanisms by at least 89%, while imposing minimal overheads on system execution. CamQuery thus enables the further deployment of provenance-based technologies to address central challenges in computer security. 

Xueyuan Han, Thomas Pasquier, and Margo Seltzer. 7/13/2018. “Provenance-based Intrusion Detection: Opportunities and Challenges.” In Workshop on Theory and Practice of Provenance (TaPP'18). London: USENIX.
Attackers constantly evade intrusion detection systems as new attack vectors sidestep their defense mechanisms. Provenance provides a detailed, structured history of the interactions of digital objects within a system. It is ideal for intrusion detection as it offers a holistic, attack-vector-agnostic view of system execution. We believe that graph analysis on provenance graphs fundamentally strengthens detection robustness. Towards this goal, we discuss opportunities and challenges associated with provenance-based intrusion detection and offer our insights based on our past experience.
Thomas Pasquier, Matthew K. Lau, Xueyuan Han, Elizabeth Fong, Barbara S. Lerner, Emery Boose, Mercè Crosas, Aaron Ellison, and Margo Seltzer. 7/2018. “Sharing and Preserving Computational Analyses for Posterity with encapsulator.” Computing in Science and Engineering (CiSE).

Open data and open-source software may be part of the solution to science's reproducibility crisis, but they are insufficient to guarantee reproducibility. Requiring minimal end-user expertise, encapsulator creates a “time capsule” with reproducible code in a self-contained computational environment. encapsulator provides end users with a fully featured desktop environment for reproducible research.

Xueyuan Han. 4/23/2018. “Using Provenance for Security and Interpretability.” EuroSys Doctoral Workshop (EuroDW'18).
System security is somewhat stymied because it is difficult, if not impossible, to design system defenses that address the full complexity of a system's interactions. Interestingly, this problem has parallels in understanding how machine learning (ML) algorithms make predictions. Both of these problems require a structured, comprehensive understanding of what a system/model is doing. My dissertation addresses these seemingly disparate problems by exploiting data provenance, which provides just such a solution. I exploit provenance both to design intrusion detection systems and to explain how ML algorithms arrive at their predictions.
Muhammad Ali Gulzar, Matteo Interlandi, Xueyuan Han, Mingda Li, Tyson Condie, and Miryung Kim. 9/25/2017. “Automated Debugging in Data-Intensive Scalable Computing.” In ACM Symposium on Cloud Computing. Santa Clara, California: ACM.

Developing Big Data Analytics workloads often involves trial-and-error debugging, due to the unclean nature of datasets or wrong assumptions made about data. When errors (e.g., program crash, outlier results, etc.) arise, developers are often interested in identifying a subset of the input data that is able to reproduce the problem. BIGSIFT is a new faulty data localization approach that combines insights from automated fault isolation in software engineering and data provenance in database systems to find a minimum set of failure-inducing inputs. BIGSIFT redefines data provenance for the purpose of debugging using a test oracle function and implements several unique optimizations, specifically geared towards the iterative nature of automated debugging workloads. BIGSIFT improves the accuracy of fault localizability by several orders of magnitude (∼10³ to 10⁷×) compared to Titian data provenance, and improves performance by up to 66× compared to Delta Debugging, an automated fault-isolation technique. For each faulty output, BIGSIFT is able to localize fault-inducing data within 62% of the original job running time.
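The search for a minimum set of failure-inducing inputs can be sketched with a simplified version of the classic ddmin delta-debugging loop. This is a rough illustration of the general technique, not BIGSIFT's optimized implementation; the oracle below is a made-up stand-in for a real test oracle function over job outputs.

```python
def ddmin(inputs, fails):
    """Simplified delta debugging: repeatedly drop chunks of `inputs`
    while the oracle `fails` still returns True on what remains."""
    n = 2  # number of chunks to split into
    while len(inputs) >= 2:
        chunk = max(1, len(inputs) // n)
        subsets = [inputs[i:i + chunk] for i in range(0, len(inputs), chunk)]
        reduced = False
        for subset in subsets:
            # Try removing this chunk: does the failure persist?
            complement = [x for x in inputs if x not in subset]
            if fails(complement):
                inputs = complement
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(inputs):
                break  # already at single-element granularity
            n = min(len(inputs), n * 2)
    return inputs

# Hypothetical oracle: the job "fails" whenever the record -1 is present.
records = [3, 1, -1, 7, 2, 9, 5]
print(ddmin(records, lambda rs: -1 in rs))  # isolates [-1]
```

A data-provenance-aware tool can prune this search dramatically by only considering input records that actually contributed to the faulty output, which is the key combination the abstract describes.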

Thomas Pasquier, Xueyuan Han, Mark Goldstein, Thomas Moyer, David Eyers, Margo Seltzer, and Jean Bacon. 9/25/2017. “Practical Whole-System Provenance Capture.” In ACM Symposium on Cloud Computing. Santa Clara, California: ACM.

Data provenance describes how data came to be in its present form. It includes data sources and the transformations that have been applied to them. Data provenance has many uses, from forensics and security to aiding the reproducibility of scientific experiments. We present CamFlow, a whole-system provenance capture mechanism that integrates easily into a PaaS offering. While there have been several prior whole-system provenance systems that captured a comprehensive, systemic and ubiquitous record of a system’s behavior, none have been widely adopted. They either A) impose too much overhead, B) are designed for long-outdated kernel releases and are hard to port to current systems, C) generate too much data, or D) are designed for a single system. CamFlow addresses these shortcomings by: 1) leveraging the latest kernel design advances to achieve efficiency; 2) using a self-contained, easily maintainable implementation relying on a Linux Security Module, NetFilter, and other existing kernel facilities; 3) providing a mechanism to tailor the captured provenance data to the needs of the application; and 4) making it easy to integrate provenance across distributed systems. The provenance we capture is streamed and consumed by tenant-built auditor applications. We illustrate the usability of our implementation by describing three such applications: demonstrating compliance with data regulations; performing fault/intrusion detection; and implementing data loss prevention. We also show how CamFlow can be leveraged to capture meaningful provenance without modifying existing applications.

Xueyuan Han, Thomas Pasquier, Tanvi Ranjan, Mark Goldstein, and Margo Seltzer. 2017. “FRAPpuccino: Fault-detection through Runtime Analysis of Provenance.” In 9th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud '17). Santa Clara, CA: USENIX.
We present FRAPpuccino (or FRAP), a provenance-based fault detection mechanism for Platform as a Service (PaaS) users, who run many instances of an application on a large cluster of machines. FRAP models, records, and analyzes the behavior of an application and its impact on the system as a directed acyclic provenance graph. It assumes that most instances behave normally and uses their behavior to construct a model of legitimate behavior. Given a model of legitimate behavior, FRAP uses a dynamic sliding window algorithm to compare a new instance’s execution to that of the model. Any instance that does not conform to the model is identified as an anomaly. We present the FRAP prototype and experimental results showing that it can accurately detect application anomalies.
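A sliding-window comparison of an instance's execution against a model of legitimate behavior can be illustrated roughly as follows. This is a simplified sketch under our own assumptions, not FRAP's published algorithm: the "model" here is just the set of length-n windows of events seen in normal runs, and the 10% threshold is arbitrary.

```python
def windows(events, size):
    """All contiguous windows of `size` consecutive events."""
    return {tuple(events[i:i + size]) for i in range(len(events) - size + 1)}

def is_anomalous(trace, model, size, threshold=0.1):
    """Flag a new instance when too large a fraction of its
    sliding windows was never observed in legitimate executions."""
    ws = [tuple(trace[i:i + size]) for i in range(len(trace) - size + 1)]
    unseen = sum(1 for w in ws if w not in model)
    return unseen / len(ws) > threshold

# Build the model from (hypothetical) normally behaving instances.
normal_runs = [["open", "read", "write", "close"],
               ["open", "read", "read", "close"]]
model = set().union(*(windows(run, 2) for run in normal_runs))

print(is_anomalous(["open", "read", "write", "close"], model, 2))  # conforms
print(is_anomalous(["open", "exec", "connect", "send"], model, 2))  # deviates
```

In the paper's setting the windows slide over a provenance graph rather than a flat event sequence, which is what makes the comparison attack-vector agnostic.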
Muhammad Ali Gulzar, Xueyuan Han, Matteo Interlandi, Shaghayegh Mardani, Sai Deep Tetali, Tyson Condie, Todd Millstein, and Miryung Kim. 2016. “Interactive Debugging for Big Data Analytics.” In The 8th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud '16). Denver, CO: USENIX.
An abundance of data in many disciplines has accelerated the adoption of distributed technologies such as Hadoop and Spark, which provide simple programming semantics and an active ecosystem. However, the current cloud computing model lacks the kinds of expressive and interactive debugging features found in traditional desktop computing. We seek to address these challenges with the development of BIGDEBUG, a framework providing interactive debugging primitives and tool-assisted fault localization services for big data analytics. We showcase the data provenance and optimized incremental computation features to effectively and efficiently support interactive debugging, and investigate new research directions on how to automatically pinpoint and repair the root cause of errors in large-scale distributed data processing.