Facebook Graduate Fellowship
Pervasive Monitoring, Diagnostics, and Analytics of Distributed Systems through Dynamic Causal Tracing
I am a final year PhD candidate in the Department of Computer Science at Brown University.
I expect to graduate in May 2018. I will be applying for academic and industrial research positions in the USA and New Zealand. Contact me!
At Brown, I am advised by Professor Rodrigo Fonseca. I am also currently collaborating with researchers in Facebook's tracing and performance groups.
My research focuses on understanding and improving performance in shared distributed systems -- systems in which many user requests run concurrently, and each request may traverse multiple component, machine, and application boundaries. My research includes topics such as end-to-end request tracing, resource management, distributed scheduling, and per-tenant performance guarantees.
A few problems I am thinking about, illustrated by abstracts from my recent papers:
This paper presents Canopy, Facebook’s end-to-end performance tracing infrastructure. Canopy records causally related performance data across the end-to-end execution path of requests, including from browsers, mobile applications, and backend services. Canopy processes traces in near real-time, derives user-specified features, and outputs to performance datasets that aggregate across billions of requests. Using Canopy, Facebook engineers can query and analyze performance data in real-time. Canopy addresses three challenges we have encountered in scaling performance analysis: supporting the range of execution and performance models used by different components of the Facebook stack; supporting interactive ad-hoc analysis of performance data; and enabling deep customization by users, from sampling traces to extracting and visualizing features. Canopy currently records and processes over 1 billion traces per day. We discuss how Canopy has evolved to apply to a wide range of scenarios, and present case studies of its use in solving various performance challenges.
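To make the feature-extraction step concrete, here is a minimal Python sketch of the idea. The event names and helper functions are hypothetical, not Canopy's actual interfaces: a user-supplied function maps each trace to a row of named features, and the rows form a dataset that aggregation queries run over.

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str         # e.g. "server_recv" / "server_send" (hypothetical)
    timestamp: float  # seconds since the start of the trace

def server_time_feature(trace):
    """Derive one user-specified feature: total backend service time."""
    recv = sum(e.timestamp for e in trace if e.name == "server_recv")
    send = sum(e.timestamp for e in trace if e.name == "server_send")
    return {"server_seconds": send - recv}

def extract(traces, feature_fns):
    """Map each trace to one row of the performance dataset."""
    for trace in traces:
        row = {}
        for fn in feature_fns:
            row.update(fn(trace))
        yield row  # one row per trace; rows are then aggregated

rows = extract([[Event("server_recv", 0.1), Event("server_send", 0.4)]],
               [server_time_feature])
print(list(rows))  # one feature row for the single example trace
```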
In many important cloud services, different tenants execute their requests in the thread pool of the same process, requiring fair sharing of resources. However, using fair queue schedulers to provide fairness in this context is difficult because of high execution concurrency, and because request costs are unknown and have high variance. Using fair schedulers like WFQ and WF²Q in such settings leads to bursty schedules, where large requests block small ones for long periods of time. In this paper, we propose Two-Dimensional Fair Queuing (2DFQ), which spreads requests of different costs across different threads and minimizes the impact of tenants with unpredictable requests. In evaluation on production workloads from Azure Storage, a large-scale cloud system at Microsoft, we show that 2DFQ reduces the burstiness of service by 1-2 orders of magnitude. On workloads where many large requests compete with small ones, 2DFQ improves 99th percentile latencies by up to 2 orders of magnitude.
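The sketch below illustrates the core idea in simplified form; it is not the paper's exact algorithm, and the per-thread cost caps are invented for illustration. Requests receive WFQ-style virtual start times for per-tenant fairness, and each worker thread only accepts requests up to a cost cap, so expensive requests cannot occupy every thread and block small ones.

```python
import heapq, itertools

class TwoDFQSketch:
    def __init__(self, thread_cost_caps):
        # e.g. [1, 10, 100, float("inf")]: thread 0 runs only cheap
        # requests; the last thread runs anything (hypothetical caps).
        self.caps = thread_cost_caps
        self.vtime = {}               # per-tenant virtual time
        self.queue = []               # pending requests, ordered by start
        self._seq = itertools.count() # tie-breaker for the heap

    def enqueue(self, tenant, request, est_cost):
        start = self.vtime.get(tenant, 0.0)
        self.vtime[tenant] = start + est_cost  # charge estimated cost
        heapq.heappush(self.queue,
                       (start, est_cost, next(self._seq), tenant, request))

    def dispatch(self, thread_id):
        """Called when a worker goes idle: return the pending request
        with the earliest virtual start whose cost fits this thread."""
        cap = self.caps[thread_id]
        skipped, picked = [], None
        while self.queue:
            item = heapq.heappop(self.queue)
            if item[1] <= cap:
                picked = item
                break
            skipped.append(item)       # too expensive for this thread
        for item in skipped:
            heapq.heappush(self.queue, item)
        return picked
```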
Workflow-centric tracing captures the workflow of causally-related events (e.g., work done to process a request) within and among the components of a distributed system. As distributed systems grow in scale and complexity, such tracing is becoming a critical tool for understanding distributed system behavior. Yet, there is a fundamental lack of clarity about how such infrastructures should be designed to provide maximum benefit for important management tasks, such as resource accounting and diagnosis. Without research into this important issue, there is a danger that workflow-centric tracing will not reach its full potential. To help, this paper distills the design space of workflow-centric tracing and describes key design choices that can help or hinder a tracing infrastructure’s utility for important tasks. Our design space and the design choices we suggest are based on our experiences developing several previous workflow-centric tracing infrastructures.
Monitoring and troubleshooting distributed systems is notoriously difficult; potential problems are complex, varied, and unpredictable. The monitoring and diagnosis tools commonly used today – logs, counters, and metrics – have two important limitations: what gets recorded is defined a priori, and the information is recorded in a component- or machine-centric way, making it extremely hard to correlate events that cross these boundaries. This paper presents Pivot Tracing, a monitoring framework for distributed systems that addresses both limitations by combining dynamic instrumentation with a novel relational operator: the happened-before join. Pivot Tracing gives users, at runtime, the ability to define arbitrary metrics at one point of the system, while being able to select, filter, and group by events meaningful at other parts of the system, even when crossing component or machine boundaries. We have implemented a prototype of Pivot Tracing for Java-based systems and evaluate it on a heterogeneous Hadoop cluster comprising HDFS, HBase, MapReduce, and YARN. We show that Pivot Tracing can effectively identify a diverse range of root causes such as software bugs, misconfiguration, and limping hardware. We show that Pivot Tracing is dynamic, extensible, and enables cross-tier analysis between inter-operating applications, with low execution overhead.
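The sketch below illustrates the happened-before join with invented helper names rather than Pivot Tracing's real API; the tracepoints mentioned in the comments (ClientProtocols and DataNodeMetrics.incrBytesRead) come from the paper's running example. Tuples emitted at an upstream tracepoint travel in per-request baggage, and a downstream tracepoint joins them with its own measurements, so the group-by crosses machine boundaries without any global aggregation.

```python
import threading

_baggage = threading.local()  # in a real system this context is
                              # propagated with the request, across RPCs

def upstream_tracepoint(client_name):
    # e.g. instrumenting ClientProtocols: record who issued the request
    _baggage.tuples = {"procName": client_name}

def downstream_tracepoint(bytes_read, groups):
    # e.g. instrumenting DataNodeMetrics.incrBytesRead: join the local
    # measurement with the causally preceding client tuple
    key = _baggage.tuples["procName"]
    groups[key] = groups.get(key, 0) + bytes_read

# One request's contribution to "SELECT procName, SUM(delta)
# ... GROUP BY procName":
groups = {}
upstream_tracepoint("MRsort10g")
downstream_tracepoint(4096, groups)  # as if running on another machine
print(groups)                        # {'MRsort10g': 4096}
```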
In distributed systems shared by multiple tenants, effective resource management is an important pre-requisite to providing quality of service guarantees. Many systems deployed today lack performance isolation and experience contention, slowdown, and even outages caused by aggressive workloads or by improperly throttled maintenance tasks such as data replication. In this work we present Retro, a resource management framework for shared distributed systems. Retro monitors per-tenant resource usage both within and across distributed systems, and exposes this information to centralized resource management policies through a high-level API. A policy can shape the resources consumed by a tenant using Retro’s control points, which enforce sharing and rate-limiting decisions. We demonstrate Retro through three policies providing bottleneck resource fairness, dominant resource fairness, and latency guarantees to high-priority tenants, and evaluate the system across five distributed systems: HBase, Yarn, MapReduce, HDFS, and Zookeeper. Our evaluation shows that Retro has low overhead, and achieves the policies’ goals, accurately detecting contended resources, throttling tenants responsible for slowdown and overload, and fairly distributing the remaining cluster capacity.
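As a rough illustration of how a policy might sit on top of such a framework, here is a hedged Python sketch. The set_rate call, the usage-dictionary layout, and the thresholds are all invented for illustration, not Retro's actual API: the policy reads per-tenant usage and pushes rate-limit decisions to control points.

```python
class ControlPoints:
    """Stand-in for distributed enforcement points."""
    def __init__(self):
        self.rates = {}
    def set_rate(self, tenant, rate):
        self.rates[tenant] = rate  # enforced in situ, e.g. token buckets

def latency_guarantee_policy(usage, cp, hipri="tenant-hi", target_ms=50.0):
    """If the high-priority tenant misses its latency target, halve the
    other tenants' rates; otherwise let them slowly recover."""
    degraded = usage[hipri]["latency_ms"] > target_ms
    for tenant, stats in usage.items():
        if tenant == hipri:
            continue
        factor = 0.5 if degraded else 1.1
        cp.set_rate(tenant, stats["rate"] * factor)

cp = ControlPoints()
usage = {"tenant-hi": {"latency_ms": 80.0, "rate": 100},
         "tenant-a":  {"latency_ms": 5.0,  "rate": 400}}
latency_guarantee_policy(usage, cp)
print(cp.rates)  # tenant-a throttled to half its rate
```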
As our systems move to more concurrent and distributed execution patterns, the tools and abstractions we have to understand, monitor, schedule, and enforce their behavior become progressively less effective or adequate. We argue that systems should be built with causal propagation of generic metadata as a first-class primitive, to serve as the narrow waist upon which many debugging and troubleshooting tools could be built, in an analogy to the role of the IP layer in networking.
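A minimal sketch of what such a primitive could look like, with an invented API rather than any particular implementation: opaque key-value "baggage" follows a request across threads and RPCs, and tracing, accounting, and monitoring tools are layered on top of it.

```python
import contextvars, json

_baggage = contextvars.ContextVar("baggage", default={})

def set_baggage(key, value):
    _baggage.set({**_baggage.get(), key: value})  # copy-on-write update

def get_baggage(key):
    return _baggage.get().get(key)

def serialize():                 # attach to outgoing RPC headers
    return json.dumps(_baggage.get())

def deserialize(blob):           # restore on the receiving side
    _baggage.set(json.loads(blob))

set_baggage("tenant", "t42")
wire = serialize()               # travels alongside the RPC
deserialize(wire)                # remote process resumes the context
assert get_baggage("tenant") == "t42"
```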
In distributed services shared by multiple tenants, managing resource allocation is an important pre-requisite to providing dependability and quality of service guarantees. Many systems deployed today experience contention, slowdown, and even system outages due to aggressive tenants and a lack of resource management. Improperly throttled background tasks, such as data replication, can overwhelm a system; conversely, high-priority background tasks, such as heartbeats, can be subject to resource starvation. In this paper, we outline five design principles necessary for effective and efficient resource management policies that could provide guaranteed performance, fairness, or isolation. We present Retro, a resource instrumentation framework that is guided by these principles. Retro instruments all system resources and exposes detailed, real-time statistics of per-tenant resource consumption, and could serve as a base for the implementation of such policies.
This document summarizes information about end-to-end tracing for 26 companies. The information was gathered from documents shared with the Distributed Tracing Workgroup and through in-person conversations at tracing workshops.
Pivot Tracing is a monitoring framework for distributed systems that can seamlessly correlate statistics across applications, components, and machines at runtime without needing to change or redeploy system code. Users can define and install monitoring queries on the fly to collect arbitrary statistics from one point in the system while being able to select, filter, and group by events meaningful at other points in the system. Pivot Tracing does not correlate cross-component events using expensive global aggregations, nor does it perform offline analysis. Instead, Pivot Tracing directly correlates events as they happen by piggybacking metadata alongside requests as they execute—even across component and machine boundaries. This gives Pivot Tracing a very low runtime overhead—less than 1% for many cross-component monitoring queries.
End-to-end tracing has emerged recently as a valuable tool to improve the dependability of distributed systems by performing dynamic verification and diagnosing correctness and performance problems. End-to-end traces are commonly represented as richly annotated directed acyclic graphs, with events as nodes and their causal dependencies as edges. Being able to automatically compare these graphs at scale is a key primitive for tasks such as clustering, classification, and anomaly detection. In this paper we explore recent developments in the theory of graph kernels, and investigate the feasibility of using a family of kernels based on the Weisfeiler-Lehman graph isomorphism test as an efficient and robust graph comparison primitive. We find that graph kernels provide a good formulation of the execution graph comparison problem, and present preliminary but encouraging results on their ability to distinguish high-level differences between execution graphs.
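For concreteness, here is a compact sketch of the standard Weisfeiler-Lehman machinery these kernels build on, simplified: Python's hash stands in for the label-compression step. Each round, a node's label is combined with its sorted neighbor labels, and two graphs are scored by how many labels they share across rounds.

```python
from collections import Counter

def wl_relabel(labels, adj):
    """One WL iteration. labels maps node -> label; adj maps node ->
    list of neighbor nodes (successors in a trace DAG)."""
    return {
        v: hash((labels[v], tuple(sorted(labels[u] for u in adj[v]))))
        for v in labels
    }

def wl_kernel(l1, a1, l2, a2, rounds=3):
    """Count labels shared between the two graphs, summed over the
    original labeling plus each WL iteration (unnormalized score)."""
    score = 0
    for _ in range(rounds + 1):
        c1, c2 = Counter(l1.values()), Counter(l2.values())
        score += sum((c1 & c2).values())   # multiset intersection
        l1, l2 = wl_relabel(l1, a1), wl_relabel(l2, a2)
    return score

# Two identical toy "traces" (req -> rpc -> db) share every label:
l = {0: "req", 1: "rpc", 2: "db"}
a = {0: [1], 1: [2], 2: []}
print(wl_kernel(dict(l), a, dict(l), a))
```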
Brown University Department of Computer Science
Facebook, New York
Microsoft Research, Redmond
MMathComp (1st Class), Mathematics & Computer Science, Oxford University (Hertford College), UK
Pivot Tracing: Dynamic Causal Monitoring for Distributed Systems, SOSP '15