When an employee expects repeated evaluation and performance incentives over time, the potential future rewards create an incentive to invest in building relevant skills. Because new skills benefit job performance, the effects of an evaluation program can persist after rewards end, or even arise in anticipation before rewards begin. I test for persistence and anticipation effects, along with more conventional predictions, using a quasi-experiment in Tennessee schools. Performance improves with new evaluation measures, but gains are larger when the teacher expects future rewards linked to future scores. Performance rises further when incentives start and remains higher even after incentives end.
We study teachers’ choices about how to allocate class time across different instructional activities, for example, lecturing, open discussion, or individual practice. Our data come from secondary schools in England, specifically classes preceding GCSE exams. Students score higher in math when their teacher devotes more class time to individual practice and assessment. In contrast, students score higher in English when class time includes more discussion and work with classmates. Class time allocation predicts test scores separately from the quality of the teacher’s instruction during those activities. These results suggest opportunities to improve student achievement without changes in teachers’ skills.
We study the returns to experience in teaching, estimated using supervisor ratings from classroom observations. We describe the assumptions required to interpret changes in observation ratings over time as the causal effect of experience on performance. We compare two difference-in-differences strategies: the two-way fixed effects estimator common in the literature, and an alternative that avoids potential bias arising from effect heterogeneity. Using data from Tennessee and Washington, DC, we present empirical tests relevant to assessing the identifying assumptions and substantive threats—e.g., leniency bias, manipulation, changes in incentives or job assignments—and find our estimates are robust to several threats.
I study the effects of a labor-replacing computer technology on the productivity of classroom teachers. Focusing on one occupation—and a setting where both workers and their job responsibilities remain fixed—provides an opportunity to examine the heterogeneity of effects on individual productivity. In a series of field experiments, teachers were provided computer-aided instruction (CAI) software for use in their classrooms; CAI provides individualized tutoring and practice to students one-on-one, with the computer acting as the teacher. In math classes, CAI reduces by one-fifth the variance of teacher productivity, as measured by student test score gains. The smaller variance reflects both productivity improvements among otherwise low-performing teachers and losses among high-performers. The change in productivity partly reflects changes in teachers’ level of work effort and teachers’ decisions about how to allocate class time. How computers affect teacher decisions and productivity is immediately relevant both to ongoing education policy debates about teaching quality and to the day-to-day management of a large workforce.