Journal Article
Aucejo, Esteban, Teresa Romano, and Eric S. Taylor. 2022. “Does evaluation change teacher effort and performance? Quasi-experimental evidence from a policy of retesting students.” Review of Economics and Statistics 104 (3): 417–430. Publisher's Version.

We document measurable, lasting gains in student achievement caused by a change in teachers’ evaluation incentives. A short-lived rule created a discontinuity in teachers’ incentives when allocating effort across their assigned students: students who failed an initial end-of-year test were retested a few weeks later, and then only the higher of the two scores was used when calculating the teacher’s evaluation score. One year later, long after the discontinuity in incentives had ended, retested students scored 0.03σ higher than non-retested students. Otherwise identical students were treated differently by teachers because of evaluation incentives, despite arguably equal returns to teacher effort.

Burgess, Simon, Shenila Rawal, and Eric S. Taylor. 2021. “Teacher peer observation and student test scores: Evidence from a field experiment in English secondary schools.” Journal of Labor Economics 39 (4): 1155–1186. Publisher's Version.

This paper reports on a field experiment in 82 high schools trialing a low-cost intervention in schools’ operations: teachers working in the same school observed and scored each other’s teaching. Students in treatment schools scored 0.07σ higher on math and English exams. Teachers were further randomly assigned to roles—observer and observee—and students of both types benefited, observers’ students perhaps more so. Doubling the number of observations produced no difference in student outcomes. Treatment effects were larger for otherwise low-performing teachers.

Papay, John P., Eric S. Taylor, John H. Tyler, and Mary Laski. 2020. “Learning job skills from colleagues at work: Evidence from a field experiment using teacher performance data.” American Economic Journal: Economic Policy 12 (1): 359–388. Publisher's Version.

We study a program designed to encourage learning from coworkers among school teachers. In an experiment, we document gains in job performance when high- and low-skilled teachers are paired and asked to work together on improving their skills. Pairs are matched on specific skills measured in prior evaluations. Each pair includes a target teacher who scores low in one or more of nineteen skills, and a partner who scores high in (many of) the target’s deficient skills. Student achievement improved 0.12 standard deviations in low-skilled teachers’ classrooms. Improvements are likely the result of target teachers learning skills from their partner.

Taylor, Eric S. 2018. “Skills, job tasks, and productivity in teaching: Evidence from a randomized trial of instruction practices.” Journal of Labor Economics 36 (3): 711–742. Publisher's Version.

I study how teachers’ assigned job tasks—the basic practices they are asked to use in the classroom—affect the returns to math skills in teacher productivity. The results demonstrate the importance of distinguishing between workers’ skills and workers’ job tasks. I examine a randomized trial of different approaches to teaching math, each approach codified in a set of day-to-day tasks. Teachers were tested to measure their math skills. Teacher productivity—measured with student test scores—is increasing in math skills when teachers use conventional “direct instruction” practices: explaining and modeling math rules and procedures. The relationship is weaker, perhaps negative, for newer “student-led” instruction tasks.

Jacob, Brian A., Jonah E. Rockoff, Eric S. Taylor, Benjamin Lindy, and Rachel Rosen. 2018. “Teacher applicant hiring and teacher performance: Evidence from DC Public Schools.” Journal of Public Economics 166: 81–97. Publisher's Version.

Selecting more productive employees among a pool of job applicants can be a cost-effective means of improving organizational performance and may be particularly important in the public sector. We study the relationship among applicant characteristics, hiring outcomes, and job performance for teachers in the Washington DC Public Schools. Applicants’ academic background (e.g., undergraduate GPA) is essentially uncorrelated with hiring. Screening measures (written assessments, interviews, and sample lessons) help applicants get jobs by placing them on a list of recommended candidates, but they are only weakly associated with the likelihood of being hired conditional on making the list. Yet both academic background and screening measures strongly predict teacher job performance, suggesting considerable scope for improving schools via the selection process.

Bettinger, Eric P., Lindsay Fox, Susanna Loeb, and Eric S. Taylor. 2017. “Virtual classrooms: How online college courses affect student success.” American Economic Review 107 (9): 2855–2875. Publisher's Version.

Online college courses are a rapidly expanding feature of higher education, yet little research identifies their effects relative to traditional in-person classes. Using an instrumental variables approach, we find that taking a course online, instead of in-person, reduces student success and progress in college. Grades are lower both for the course taken online and in future courses. Students are less likely to remain enrolled at the university. These estimates are local average treatment effects for students with access to both online and in-person options; for other students online classes may be the only option for accessing college-level courses.

Bettinger, Eric P., Christopher Doss, Susanna Loeb, Aaron Rogers, and Eric S. Taylor. 2017. “The effects of class size in online college courses: Experimental evidence.” Economics of Education Review 58: 68–85. Publisher's Version.

Class size is a first-order consideration in the study of education cost and effectiveness. Yet little is known about the effects of class size on student outcomes in online college classes, even though online courses have become commonplace in many institutions of higher education. We study a field experiment in which college students were quasi-randomly assigned to either regular sized classes or slightly larger classes. Regular classes had, on average, 31 students and treatment classes were, on average, ten percent larger. The experiment was conducted at DeVry University, one of the nation's largest for-profit postsecondary institutions, and included over 100,000 student course enrollments in nearly 4,000 classes across 111 different undergraduate and graduate courses. We examine class size effects on student success in the course and subsequent persistence in college. We find little evidence of effects on average or for a range of course types. Given the large sample, our estimates are precise, suggesting that small class size changes have little impact in online settings.

Bettinger, Eric P., Bridget Terry Long, and Eric S. Taylor. 2016. “When inputs are outputs: The case of graduate student instructors.” Economics of Education Review 52: 63–76. Publisher's Version.

We examine graduate student teaching as an input to two production processes: the education of undergraduates and the development of graduate students themselves. Using fluctuations in full-time faculty availability as an instrument, we find undergraduates are more likely to major in a subject if their first course in the subject was taught by a graduate student, a result opposite of estimates that ignore selection. Additionally, graduate students who teach more frequently graduate earlier and are more likely to subsequently be employed by a college or university.

Taylor, Eric S. 2014. “Spending more of the school day in math class: Evidence from a regression discontinuity in middle school.” Journal of Public Economics 114: 162–181. Publisher's Version.

For students whose math skills lag expectations, public schools often increase the fraction of the school day spent on math instruction. Studying middle-school students and using regression discontinuity methods, I estimate the causal effect of requiring two math classes—one remedial, one regular—instead of just one class. Math achievement grows much faster under the requirement, 0.16–0.18 student standard deviations. Yet, one year after returning to a regular one-class schedule, the initial gains decay by as much as half, and two years later just one-third of the initial treatment effect remains. This pattern of decaying effects over time mirrors other educational interventions—assignment to a more skilled teacher, reducing class size, retaining students—but spending more time on math carries different costs. One cost is notable: more time in math crowds out instruction in other subjects.

Taylor, Eric S., and John H. Tyler. 2012. “The effect of evaluation on teacher performance.” American Economic Review 102 (7): 3628–3651. Publisher's Version.

Teacher performance evaluation has become a dominant theme in school reform efforts. Yet whether evaluation changes the performance of teachers, the focus of this paper, is unknown; evaluation has instead largely been studied as an input to selective dismissal decisions. We study mid-career teachers for whom we observe an objective measure of productivity, value-added to student achievement, before, during, and after evaluation. We find teachers are more productive in post-evaluation years, with the largest improvements among teachers performing relatively poorly ex-ante. The results suggest teachers can gain information from evaluation and subsequently develop new skills, increase long-run effort, or both.

Rockoff, Jonah E., Douglas O. Staiger, Thomas J. Kane, and Eric S. Taylor. 2012. “Information and employee evaluation: Evidence from a randomized intervention in public schools.” American Economic Review 102 (7): 3184–3213. Publisher's Version.

We examine how employers learn about worker productivity in a randomized pilot experiment which provided objective estimates of teacher performance to school principals. We test several hypotheses that support a simple Bayesian learning model with imperfect information. First, the correlation between performance estimates and prior beliefs rises with more precise objective estimates and more precise subjective priors. Second, new information exerts greater influence on posterior beliefs when it is more precise and when priors are less precise. Employer learning affects job separation and productivity in schools, increasing turnover for teachers with low performance estimates and producing small test score improvements. (JEL D83, I21, J24, J45)

Kane, Thomas J., Eric S. Taylor, John H. Tyler, and Amy L. Wooten. 2011. “Identifying effective classroom practices using student achievement data.” Journal of Human Resources 46 (3): 587–613. Publisher's Version.

Research continues to find large differences in student achievement gains across teachers’ classrooms. The variability in teacher effectiveness raises the stakes on identifying effective teachers and teaching practices. This paper combines data from classroom observations of teaching practices and measures of teachers’ ability to improve student achievement as one contribution to these questions. We find that observation measures of teaching effectiveness are substantively related to student achievement growth and that some observed teaching practices predict achievement more than other practices. Our results provide information for both individual teacher development efforts and the design of teacher evaluation systems.

Taylor, Eric S. Forthcoming. “Teacher Evaluation and Training.” In Handbook of the Economics of Education, Volume 7, edited by Eric A. Hanushek, Stephen Machin, and Ludger Woessmann.

Evaluation and training are important features of the employment relationship between teachers and the schools they work for. The first feature, evaluation, involves performance measures and often performance incentives linked to those measures, like bonuses or the threat of dismissal. This chapter reviews research on whether and how evaluation and incentives change teaching, including unintended effects. Potential mechanisms include changes in a teacher’s effort or skills, or changes in the composition of the teacher workforce through selection. Many (quasi-)experiments document increases in the measures used to determine rewards or consequences for teachers, but it is less clear whether those increases represent improvements in student learning or welfare. Research on the second feature, training, typically focuses on formal training programs, where evidence of benefits is inconsistent at best. This chapter reviews evidence on both formal training, as well as informal ways in which teachers appear to learn new skills at work.

Lovison, Virginia, and Eric S. Taylor. 2018. “Can teacher evaluation programs improve teaching?” Getting Down to Facts II, Stanford University. Publisher's Version.
Loeb, Susanna, Agustina Paglayan, and Eric S. Taylor. 2014. “Understanding human resources in broad-access higher education.” In Remaking College: The Changing Ecology of Higher Education, edited by Mitchell Stevens and Michael Kirst. Publisher's Version.
Taylor, Eric S., and John H. Tyler. 2012. “Can teacher evaluation improve teaching?” Education Next 12 (4): 78–84. Publisher's Version.
Kane, Thomas J., Eric S. Taylor, John H. Tyler, and Amy L. Wooten. 2011. “Evaluating teacher effectiveness.” Education Next 11 (3): 55–60. Publisher's Version.
Tyler, John H., Eric S. Taylor, Thomas J. Kane, and Amy L. Wooten. 2010. “Using student performance data to identify effective classroom practices.” American Economic Review, Papers and Proceedings 100 (2): 256–260. Publisher's Version.