This is the executive summary of the statement of the American Statistical Association on the use of value-added assessment to evaluate teachers. Please share it with other teachers, principals, and school board members. Please share it with your legislators and other elected officials. Send it to your local news outlets. The words are clear: teachers account for between 1% and 14% of the variation in test scores. And this is very important to remember: “Ranking teachers by their VAM scores can have unintended consequences that reduce quality.”
ASA Statement on Using Value-Added Models for Educational Assessment
April 8, 2014
Many states and school districts have adopted Value-Added Models (VAMs) as part of educational accountability systems. The goal of these models, which are also referred to as Value-Added Assessment (VAA) Models, is to estimate effects of individual teachers or schools on student achievement while accounting for differences in student background. VAMs are increasingly promoted or mandated as a component in high-stakes decisions such as determining compensation, evaluating and ranking teachers, hiring or dismissing teachers, awarding tenure, and closing schools.
The American Statistical Association (ASA) makes the following recommendations regarding the use of VAMs:
- The ASA endorses wise use of data, statistical models, and designed experiments for improving the quality of education.
- VAMs are complex statistical models, and high-level statistical expertise is needed to develop the models and interpret their results.
- Estimates from VAMs should always be accompanied by measures of precision and a discussion of the assumptions and possible limitations of the model. These limitations are particularly relevant if VAMs are used for high-stakes purposes.
  - VAMs are generally based on standardized test scores, and do not directly measure potential teacher contributions toward other student outcomes.
  - VAMs typically measure correlation, not causation: Effects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model.
  - Under some conditions, VAM scores and rankings can change substantially when a different model or test is used, and a thorough analysis should be undertaken to evaluate the sensitivity of estimates to different models.
- VAMs should be viewed within the context of quality improvement, which distinguishes aspects of quality that can be attributed to the system from those that can be attributed to individual teachers, teacher preparation programs, or schools. Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions. Ranking teachers by their VAM scores can have unintended consequences that reduce quality.
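To see what "teachers account for about 1% to 14% of the variability in test scores" means in practice, here is a minimal simulation sketch (not the ASA's model, and all numbers are illustrative assumptions): student scores are generated with a small teacher effect and a much larger student-level component, and an ANOVA-style decomposition then reports the between-teacher share of the total variance.

```python
import random

# Illustrative simulation (not the ASA's model): generate test scores in
# which teacher effects explain only a small share of total variance,
# consistent with the ~1%-14% range most VAM studies report.
random.seed(0)

n_teachers = 50
students_per_teacher = 30
teacher_sd = 2.0    # spread of teacher effects (assumed)
student_sd = 10.0   # spread of student-level factors (assumed)

scores_by_teacher = []
for _ in range(n_teachers):
    teacher_effect = random.gauss(0, teacher_sd)
    scores_by_teacher.append(
        [random.gauss(50 + teacher_effect, student_sd)
         for _ in range(students_per_teacher)]
    )

# Between-teacher share of variance: a simple ANOVA-style decomposition.
all_scores = [s for group in scores_by_teacher for s in group]
grand_mean = sum(all_scores) / len(all_scores)
total_var = sum((s - grand_mean) ** 2 for s in all_scores) / len(all_scores)

group_means = [sum(g) / len(g) for g in scores_by_teacher]
between_var = sum(
    len(g) * (m - grand_mean) ** 2
    for g, m in zip(scores_by_teacher, group_means)
) / len(all_scores)

share = between_var / total_var
print(f"Teacher-attributable share of score variance: {share:.1%}")
```

With these assumed spreads, the true teacher share is about 4% (2² out of 2² + 10²), and the estimated between-teacher share lands in the single digits; most of the variance sits outside what any teacher controls, which is the statement's point.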