The new teacher evaluation system in place in N.Y.C. is based upon junk science. Sixty percent of the evaluation is derived from Danielson-based observations across multiple domains, some of which seem far removed from the realities of N.Y.C. public schools: the lack of resources and the needs of the many students who speak little to no English. Teachers are also evaluated on state or comparable measures (20%, including Regents exams) and locally selected measures (another 20%, which in most schools will in all likelihood also be based on standardized tests). Standardized testing may therefore account for as much as 40% of a teacher's score. The irony of it all is that the same teaching methods that might score high under Danielson will do very poorly as test prep.
No matter how smug some statisticians may feel about the worthiness of value-added measures, standardized test scores tell us much more about factors other than a student's current subject teacher. A teacher can fill students' educational plates, so to speak. (It should be noted that, in my opinion, test prep is not a very tasty or healthy "dish.") Some students will greedily partake and clean their plates. Others may not be hungry at all. Teachers can try to make the plates more attractive and add condiments, but still some students may not eat well, some for reasons we may understand and others for reasons we may never know.
To add to the absurdity of it all, most teachers to some extent, and more so those who do not teach Regents classes, will be evaluated by the test scores of students with whom they may have no academic connection whatsoever. And here's the rub: the people who have designed these "whacked" systems of junk science seem accountable to no one.