Posts tagged "value added"
June 11, 2013
Facing simmering opposition, the State Education Department seems likely to give up on a plan to add more weight to test scores in teacher evaluations.
Education officials have long intended to increase the weight of test scores in a teacher’s overall evaluation by 5 points, from 20 to 25 percent. A provision in the state’s evaluation law, passed in 2010, allows for the increase if officials adopt a more complex “value-added” model to measure student growth.
Commissioner John King always planned to embrace the option, but his proposal at April’s Board of Regents meeting was met with resistance from members who questioned the methodology’s reliability and asked to shelve the plan. In recent weeks, the state teachers union also lobbied members who were on the fence.
This week, Chancellor Merryl Tisch signaled the pressure was effective, acknowledging that she expected the Board of Regents to hold off on the proposal when it meets next week.
“This is not the stuff that I feel we go to war over,” Tisch said Monday in a radio interview.
April 23, 2013
ALBANY — A dozen new factors could be tossed into the state’s formula for measuring how much teachers have boosted their students’ state scores, according to a proposal that is dividing state education policy makers.
The state’s teacher evaluation law, passed in 2010, requires student performance to count in teacher ratings. Currently, the state calculates “growth scores” that count for a fifth of teachers’ overall ratings. But the law allows the state to increase the weight of its score to a quarter of teachers’ ratings once officials adopt a more complex “value-added” model for assessing teacher impact.
Both models are based on the principle that comparing students’ actual test scores with their predicted scores can show the impact their teachers had on their learning. The question is what variables to use when predicting scores so that teachers whose students have greater needs are not at a disadvantage.
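The principle can be sketched in a few lines of code. This toy example uses hypothetical scores and a deliberately naive prediction (a student simply repeats last year's score); real value-added models fit statistical regressions with many student-level covariates:

```python
# Toy illustration of the value-added principle: a teacher's score is the
# average gap between students' actual test results and the scores a model
# predicted for them. All numbers here are hypothetical.

def predicted_score(prior_score):
    # Deliberately naive prediction: a student repeats last year's score.
    # Real models fit regressions on many student characteristics.
    return prior_score

def value_added(students):
    # students: list of (prior_score, actual_score) pairs
    gaps = [actual - predicted_score(prior) for prior, actual in students]
    return sum(gaps) / len(gaps)

# Students who beat their predictions pull the teacher's score up.
classroom = [(70, 74), (55, 58), (80, 79), (65, 70)]
print(value_added(classroom))  # 2.75
```

The debate the models' critics raise is about `predicted_score`: which variables belong in the prediction so that teachers of higher-need students are not penalized.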
January 8, 2013
Now that the city and teachers union are back at the negotiating table to work on teacher evaluations, the Gates Foundation has some tips.
The foundation today released the third and final report about the Measures of Effective Teaching project, an ambitious three-year study that included 3,000 teachers in seven districts, including New York City. The study concludes that teacher effectiveness can indeed be measured and identifies strategies for grading teachers.
Having multiple people observe the same teacher is more effective than having one person observe the teacher multiple times, the study found. Student surveys are stronger predictors of teachers’ ability to raise test scores than observations. And counting state test scores for a third to half of a teacher’s rating is better than weighting the scores less or more.
With the report, the foundation takes a bold stance on a policy issue that remains hotly contested, even as states and school districts across the country have adopted new evaluation systems. But foundation officials are confident because the latest report reflects a change in the study’s design that they say proves that teacher evaluation systems really do measure teachers.
August 16, 2012
As of today, school districts across New York State have in hand the first piece of data they would need to calculate some teachers’ ratings: their “growth scores” for last year.
The State Education Department today distributed scores to districts for 36,685 educators who teach reading and math in grades 4-8 or supervise those teachers. The scores — which calculate students’ growth on state math and reading tests, adjusting for the students’ past performance, the performance of similar students, and the reliability of the exams — would count for 20 percent of educators’ ratings under the state’s evaluation law.
Two consecutive “ineffective” ratings could trigger termination proceedings under the law. But the data released today suggest that the state’s current formula for measuring student growth would be unlikely to place many teachers’ jobs at risk.
Nearly 85 percent of the 36,685 educators who received a score fell into the “highly effective” or “effective” ranges. Just 6 percent of them had scores in the “ineffective” range.
Few of the scores issued today will actually be used to evaluate teachers. Most of the state’s 715 school districts, including New York City, have not yet adopted evaluation systems that comply with the state’s evaluation law, and many that have adopted new evaluations won’t use them until next year.
March 6, 2012
Add one more point of critique to the city’s Teacher Data Reports: Experts and educators are worried about the bell curve along which the teacher ratings fell.
Like the distribution of teachers by rating across types of schools, the distribution of scores among teachers was essentially built into the “value-added” model that the city used to generate the ratings.
The long-term goal of many education reformers is to create a teaching force in which nearly all teachers are high-performing. However, in New York City’s rankings — which rated thousands of teachers who taught in the system from 2007 to 2010 — teachers were graded on a curve. That is, under the city’s formula, some teachers would always be rated as “below average,” even if student performance increased significantly in all classrooms across the city.
The ratings were based on a complex formula that predicts how students will do — after taking into account background characteristics — on standardized tests. Teachers received scores based on students’ actual test results measured against the predictions. They were then divided into five categories. Half of all teachers were rated as “average,” 20 percent were “above average,” and another 20 percent were “below average.” The remaining 10 percent were divided evenly between teachers rated as “far above average” and “far below average.”
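The forced distribution can be illustrated with a short sketch that assigns the fixed shares described above to hypothetical teacher scores; the cutoffs and scores here are illustrative, not the city's actual formula:

```python
# Sketch of grading teachers on a curve with the fixed shares described
# above: 5% "far below average," 20% "below," 50% "average," 20% "above,"
# and 5% "far above." The teacher scores here are hypothetical.
import bisect

CUTOFFS = [0.05, 0.25, 0.75, 0.95]  # cumulative percentile boundaries
LABELS = ["far below average", "below average", "average",
          "above average", "far above average"]

def grade_on_curve(scores):
    # Assumes unique scores; rank each one and map its percentile to a label.
    ranked = sorted(scores)
    n = len(ranked)
    return {s: LABELS[bisect.bisect_right(CUTOFFS, ranked.index(s) / n)]
            for s in scores}

curved = grade_on_curve(list(range(1, 101)))  # 100 teachers
print(curved[1], curved[100])  # far below average far above average

# If every score rises by the same amount, the labels don't move:
# relative position is all that counts.
shifted = grade_on_curve([s + 30 for s in range(1, 101)])
print(shifted[31])  # far below average
```

The last two lines show the critics' point: even if every teacher's students improve, the bottom of the ranking is still labeled “below average.”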
IMPACT, the District of Columbia’s teacher-evaluation system, also uses a set distribution for teacher ratings. As sociologist Aaron Pallas wrote in October 2010, “by definition, the value-added component of the D.C. IMPACT evaluation system defines 50 percent of all teachers in grades four through eight as ineffective or minimally effective in influencing their students’ learning.”
March 1, 2012
New York City schools erupted in controversy last week when the school district released its “value-added” teacher scores to the public after a yearlong battle with the local teachers union. The city cautioned that the scores had large margins of error, and many education leaders around the country believe that publishing teachers’ names alongside their ratings is a bad idea.
Still, a growing number of states are now using evaluation systems based on students’ standardized test scores in decisions about teacher tenure, dismissal, and compensation. So how does the city’s formula stack up to methods used elsewhere?
The Hechinger Report has spent the past 14 months reporting on teacher-effectiveness reforms around the country and has examined value-added models in several states. New York City’s formula, which was designed by researchers at the University of Wisconsin-Madison, has elements that make it more accurate than other models in some respects, but it also has elements that experts say might increase errors — a major concern for teachers whose job security is tied to their value-added ratings.
“There’s a lot of debate about what the best model is,” said Douglas Harris, an expert on value-added modeling at the University of Wisconsin-Madison who was not involved in the design of New York’s statistical formula. The city used the formula from 2007 to 2010 before discontinuing it, in part because New York State announced plans to incorporate a different formula into its teacher evaluation system.
February 29, 2012
The New York Times’ first big story on the Teacher Data Reports released last week contained what sounded like great news: After years of studies suggesting that the strongest teachers were clustered at the most affluent schools, top-rated teachers now seemed as likely to work on the Upper East Side as in the South Bronx.
Teachers with high scores on the city’s rating system could be found “in the poorest corners of the Bronx, like Tremont and Soundview, and in middle-class neighborhoods,” “in wealthy swaths of Manhattan, but also in immigrant enclaves,” and “in similar proportions in successful and struggling schools,” the Times reported.
Education analyst Michael Petrilli called the findings “jaw-dropping news” that “upends everything we thought we knew about teacher quality.”
Except it’s not really news at all. Value-added measurements like the ones used to generate the city’s Teacher Data Reports are designed precisely to control for differences in neighborhood, student makeup, and students’ past performance.
The adjustments mean that teachers are effectively ranked relative to other teachers of similar students. Teachers who teach similar students, then, are guaranteed to have a full range of scores, from high to low. And, unsurprisingly, teachers in the same school or neighborhood often teach similar students.
“I chuckled when I saw the first [Times story], since the headline pretty much has to be true: Effective and ineffective teachers will be found in all types of schools, given the way these measures are constructed,” said Sean Corcoran, a New York University economist who has studied the city’s Teacher Data Reports.
February 28, 2012
The Department of Education released a final installment of Teacher Data Reports today, for teachers in charter schools and schools for the most severely disabled students.
Last week, the city released the underlying data from about 53,000 reports for about 18,000 teachers who received them during the project’s three-year lifespan. Teachers received the reports between 2008 and 2010 if they taught reading or math in grades 4 through 8.
When the department first announced that it would be releasing the data in response to several news organizations’ Freedom of Information Law requests, it indicated that ratings for teachers in charter schools would not be made public. It reversed that decision late last week and today released “value-added” data for 217 charter school teachers.
Participation in the data reports program was optional for charter schools, and schools entered and exited the program over the years it operated, with eight schools participating in 2007-2008 and 18 participating in 2009-2010. At the time, the city had about 100 charter schools.
The department also released reports for 50 teachers in District 75 schools, which enroll the city’s most severely disabled students. The number is small because few District 75 students take regular state math and reading exams. Also, District 75 classes are typically very small, and privacy laws meant the city released data only for teachers who had more than 10 students take state tests. District 75 teachers also received reports only in 2008 and 2010; the program was optional in the district’s schools in 2009.
Department officials warned last week that the reports had high margins of error — 35 percentage points for math teachers and 53 percentage points for reading teachers, on average — and urged caution when interpreting them.
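To put those figures in perspective, here is a back-of-envelope sketch, assuming the reported margin describes the full width of a teacher's plausible percentile range (the department's exact definition may differ):

```python
# Back-of-envelope look at the reported margins of error, treating each
# figure as the full width of a teacher's plausible percentile range,
# centered on the point estimate and clamped to the 0-100 scale.

def plausible_range(percentile, span):
    half = span / 2
    return (max(0, percentile - half), min(100, percentile + half))

# A math teacher rated at the 50th percentile (35-point margin):
print(plausible_range(50, 35))  # (32.5, 67.5)
# A reading teacher at the 50th percentile (53-point margin):
print(plausible_range(50, 53))  # (23.5, 76.5)
```

Even under this generous reading, a single rating leaves a reading teacher's true standing somewhere across more than half the percentile scale.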
February 24, 2012
In October 2010, when the city first said it would fulfill a Freedom of Information Law request and release individual teachers’ ratings to news organizations, teachers started buzzing about what the scores would mean — and what they wouldn’t.
One of them was Stephen Lazar, a high school teacher, who listed 18 elements of teaching and learning in his classroom that his students’ state tests didn’t take into account. The list appeared in the GothamSchools Community section at the time.
This week, Lazar re-posted the piece on his personal blog, Outside the Cave, and added a note expressing astonishment that news organizations would be going ahead with publishing the scores alongside teachers’ names. (Lazar is part of an informal advisory group for GothamSchools but was not consulted on our decision not to publish individual teachers’ ratings.)
Lazar was discussing his students’ exam scores and not the kind of “value-added” measure contained in the Teacher Data Reports that tries to show students’ growth compared to their expected growth. Also, Lazar’s students took Regents exams, not the grades 3-8 state tests factored into the ratings being released today. Still, his list provides a useful reminder about the limitations of using test scores as a single measure of teacher quality on a day when New Yorkers are likely to be tempted to do just that.
Here’s an excerpt:
- [Test scores] don’t tell you that I spent six weeks in the middle of the year teaching my students how to do college-level research. I estimate this costs my students an average of 5-10 points on the Regents exam.
- They don’t tell you that when you ask my students who are now in college why they are succeeding when most of their urban public school peers are dropping out, they name that research project as one of their top three reasons nearly every time.
- They don’t tell you which of my students had a home and a healthy meal the night before the test.
- They don’t tell you that 20 percent of our seniors come to me every year for letters of recommendation because they feel they did their best work in my class.
February 23, 2012
Tomorrow’s planned release of 12,000 New York City teacher ratings raises questions for the courts, parents, principals, bureaucrats, teachers — and one other party: news organizations. The journalists who requested the release of the data in the first place now must decide what to do with it all.
At GothamSchools, we joined other reporters in requesting to see the Teacher Data Reports back in 2010. But you will not see the database here, tomorrow or ever, as long as it is attached to individual teachers’ names.
The fact is that we feel a strong responsibility to report on the quality of the work the 80,000 New York City public school teachers do every day. This is a core part of our job and our mission.
But before we publish any piece of information, we always have to ask a question. Does the information we have do a fair job of describing the subject we want to write about? If it doesn’t, is there any additional information — context, anecdotes, quantitative data — that we can provide to paint a fuller picture?
In the case of the Teacher Data Reports, “value-added” assessments of teachers’ effectiveness that were produced in 2009 and 2010 for reading and math teachers in grades 3 to 8, the answer to both those questions was no.
We determined that the data were flawed, that the public might easily be misled by the ratings, and that no amount of context could justify attaching teachers’ names to the statistics. When the city released the reports, we decided, we would write about them, and maybe even release Excel files with names wiped out. But we would not enable our readers to generate lists of the city’s “best” and “worst” teachers or to search for individual teachers at all.
It’s true that the ratings the city is releasing might turn out to be powerful measures of a teacher’s success at helping students learn. The problem lies in that word: might.