by Max Brantley
So-called "value-added" evaluations of teachers are the flavor of the day. Test scores can provide an absolute measure of a teacher's effectiveness, supporters say. You just measure each student's progress during a year, crunch the numbers and, presto, the sheep and goats are separated.
The New York Times examines the topic today and points out what many have long said: the measurements can be unreliable. Naturally, the inventor of the method in Tennessee says otherwise. The minute you read, however, that some "assumptions" are made in calculating the numbers, you know, at a minimum, that there's room for subjectivity and misinterpretation.
No doubt, such a rating system has some value.
But when the method is used to evaluate individual teachers, many factors can lead to inaccuracies. Different people crunching the numbers can get different results, said Douglas N. Harris, an education professor at the University of Wisconsin, Madison. For example, two analysts might rank teachers in a district differently if one analyst took into account certain student characteristics, like which students were eligible for free lunch, and the other did not.
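Harris's point can be illustrated with a toy calculation. The sketch below uses invented numbers, not the Tennessee model or any real district's data: the same score gains are ranked two ways, once on raw averages and once after adjusting for free-lunch eligibility, and the two analysts end up ordering the teachers differently.

```python
# Hypothetical illustration with synthetic data: how two analysts can
# rank the same teachers differently depending on whether they adjust
# for a student characteristic such as free-lunch eligibility.
from statistics import mean

# (teacher, free_lunch_eligible, score_gain) -- invented numbers
students = [
    ("A", True, 4), ("A", True, 4), ("A", True, 4),
    ("B", False, 5), ("B", False, 5), ("B", False, 5),
    ("C", True, 0), ("C", True, 0), ("C", True, 0),
    ("D", False, 5), ("D", False, 5), ("D", False, 5),
]

def raw_scores(data):
    """Analyst 1: average raw gain per teacher, no adjustment."""
    teachers = {t for t, _, _ in data}
    return {t: mean(g for tt, _, g in data if tt == t) for t in teachers}

def adjusted_scores(data):
    """Analyst 2: each student's gain relative to the mean gain of
    students with the same free-lunch status, averaged per teacher."""
    group_mean = {
        fl: mean(g for _, f, g in data if f == fl) for fl in (True, False)
    }
    teachers = {t for t, _, _ in data}
    return {
        t: mean(g - group_mean[f] for tt, f, g in data if tt == t)
        for t in teachers
    }

raw = raw_scores(students)
adj = adjusted_scores(students)
# Analyst 1 ranks teacher B above teacher A on raw gains;
# Analyst 2, adjusting for free-lunch status, ranks A above B.
```

Teacher A's free-lunch students gain less in raw points than teacher B's better-off students, so A trails B unadjusted; relative to similar students, A's gains are larger, so the adjusted ranking flips. Neither analyst made an arithmetic error; they just made different assumptions.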
Millions of students change classes or schools each year, so teachers can be evaluated on the performance of students they have taught only briefly, after students’ records were linked to them in the fall.
In many schools, students receive instruction from multiple teachers, or from after-school tutors, making it difficult to attribute learning gains to a specific instructor. Another problem is known as the ceiling effect. Advanced students can score so highly one year that standardized state tests are not sensitive enough to measure their learning gains a year later.
It's the old story. People want to make things simple, to reduce even the most complex of challenges to a binary choice: yes/no, good/bad, up/down. If only it were so simple. The DOG today editorializes for moving ahead with this value-added testing and putting every teacher's score on the public record (a personnel-record examination not required of any other public employee, from cop to janitor). When evaluating that idea, consider the example of the Houston teacher of high-achieving students who doesn't qualify for value-added bonuses because it's hard to move A students much over the course of a year. Her students start at A and end at A. Has this teacher failed? The value-added system says she has.