I am a Ph.D. student in Education Policy and Program Evaluation at Harvard University. My primary research interests involve identifying and evaluating interventions that improve student achievement and teacher quality, all with an eye toward eliminating disparities. I am particularly interested in interventions that optimize how students and teachers are assigned to schools and classrooms. I also hope to explore international education contexts to identify programs and policies that can inform the U.S. education system. As both a researcher and a practitioner who has benefited from evaluation resources, I hope to develop programs and toolkits that enhance the work of educators in the field.

Most recently, I led data strategy efforts at the Wake County Public School System, having joined as a Strategic Data Project Fellow in 2012. While there, I helped codify an enhanced data- and evidence-use policy, led a diverse set of randomized controlled trials, and developed the district’s research-practice partnership framework. Prior to that, I was a policy analyst at the Southern Regional Education Board, co-founded the education technology company BetterLesson, and taught middle school social studies in the Atlanta Public Schools as a Teach For America corps member. I studied economics and Russian at Wesleyan University and political science at Georgia State University.

Featured Publications

Luke Keele, Matthew A. Lenard, and Lindsay C. Page. Working Paper. “Matching Methods for Clustered Observational Studies in Education.” Abstract:
Many interventions in education occur in settings where treatments are applied to groups. For example, a reading intervention may be implemented for all students in some schools and withheld from students in other schools. When such treatments are non-randomly allocated, outcomes across the treated and control groups may differ due to the treatment or due to baseline differences between groups. When this is the case, researchers can use statistical adjustment to make treated and control groups similar in terms of observed characteristics. Recent work in statistics has developed matching methods designed for contexts where treatments are clustered. This form of matching, known as multilevel matching, may be well suited to many education applications where treatments are assigned to schools. In this article, we provide an extensive evaluation of multilevel matching and compare it to multilevel regression modeling. We evaluate multilevel matching methods in two ways. First, we use these matching methods to recover treatment effect estimates from three clustered randomized trials using a within-study comparison design. Second, we conduct a simulation study. We find evidence that generally favors an analytic approach to statistical adjustment that combines multilevel matching with regression adjustment. We conclude with an empirical application.
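To make the idea of school-level matching concrete, here is a minimal sketch on simulated data: treated schools are greedily paired with similar control schools on baseline covariates, and the effect is then estimated with regression adjustment in the matched sample. This is not the paper's implementation or software; the data-generating process and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical school-level data: treatment uptake depends on the first
# covariate, so treated and control schools differ at baseline (confounding).
n_schools = 60
baseline = rng.normal(size=(n_schools, 2))            # school-level covariates
treated = (baseline[:, 0] + rng.normal(0.0, 1.0, n_schools)) > 0.5
outcome = (0.3 * treated
           + baseline @ np.array([0.5, -0.2])
           + rng.normal(0.0, 0.5, n_schools))

# Greedy 1:1 matching of treated to control schools on Mahalanobis
# distance over the baseline covariates, without replacement.
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))
available = set(np.where(~treated)[0])
pairs = []
for t in np.where(treated)[0]:
    avail = sorted(available)
    diffs = baseline[avail] - baseline[t]
    dists = np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs)
    match = avail[int(np.argmin(dists))]
    available.remove(match)
    pairs.append((t, match))

# Regression adjustment within the matched sample:
# outcome ~ intercept + treatment + baseline covariates.
idx = [i for pair in pairs for i in pair]
X = np.column_stack([np.ones(len(idx)),
                     treated[idx].astype(float),
                     baseline[idx]])
beta, *_ = np.linalg.lstsq(X, outcome[idx], rcond=None)
print(f"Matched, regression-adjusted effect estimate: {beta[1]:.3f}")
```

Dedicated multilevel matching methods also balance student-level covariates within schools, which this school-level-only sketch omits.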
D. Bulgakov-Cooke, M.A. Lenard, and L.C. Page. Working Paper. “Beyond One-Size-Fits-All: A Randomized Controlled Trial of Multi-Tiered System of Supports.” Abstract:
This paper presents final results from a two-year randomized controlled trial of a multi-tiered system of supports (MTSS) in the Wake County Public School System. MTSS is a district-wide comprehensive school reform model designed to improve academic outcomes and reduce behavioral incidents through the delivery of tiered supports. MTSS was randomly assigned within 44 school pairs in fall 2015, and the resulting treatment and control groups were balanced on student-level characteristics, prior achievement, and prior behavioral incidents. After the second year of implementation, MTSS did not significantly impact math achievement but did have a small impact on elementary school reading; Hispanic students posted the largest gains in reading and math at the elementary and middle school levels. MTSS did not significantly impact high school outcomes and had largely negative impacts on certain subgroups. The intervention did not affect short-term suspension counts overall and led to an unexpected increase in suspensions among select subgroups. Nor did MTSS meaningfully affect rates of special education referral or downstream outcomes such as high school graduation and dropout. Taken together, these results suggest that the district’s random assignment of MTSS to a large sample of schools across grade levels did not produce the hypothesized effects on achievement or behavior.
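As a rough illustration of the pair-randomized design described above, the sketch below assigns one school in each of 44 pairs to treatment by coin flip and runs a simple balance check on a baseline covariate. The data and names are hypothetical; this is not the district's assignment code.

```python
import numpy as np

rng = np.random.default_rng(2015)

# Hypothetical pair-randomized assignment: one school per pair gets MTSS.
n_pairs = 44
coin = rng.integers(0, 2, size=n_pairs).astype(bool)  # one flip per pair
treated = np.empty(2 * n_pairs, dtype=bool)           # schools ordered pair by pair
treated[0::2] = coin                                  # first school in each pair
treated[1::2] = ~coin                                 # its partner gets the other arm

# Simple balance check on a hypothetical baseline covariate.
prior_achievement = rng.normal(size=2 * n_pairs)
diff = prior_achievement[treated].mean() - prior_achievement[~treated].mean()
print(f"Schools per arm: {treated.sum()} treated, {(~treated).sum()} control")
print(f"Baseline mean difference (expected near 0): {diff:.3f}")
```

Randomizing within matched pairs guarantees equal arm sizes and, in expectation, balance on both observed and unobserved school characteristics.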