November 25, 2009
I’d like to respond to some of the reactions to Friday’s post:
1. Cut scores: Contrary to Leonie Haimson’s allegation, we did not determine the percentage of A grades after learning the results of the 2009 state tests. The cut scores for the elementary and middle school progress reports were set in September 2008 and communicated to principals in the Sept. 23, 2008, mailing of Principals’ Weekly (pasted at the end of this post) — long before the state tests were even administered. The two educator guides Ms. Haimson cites correspond to different years — one is for the 2007-08 progress report and the other is an updated version for the 2008-09 progress report.
We raised the cut scores significantly from 2007-08 to 2008-09 to reflect the progress schools had made. However, the gains our schools achieved in 2008-09 surpassed anything we had seen during the last few years. Had we been able to forecast this growth, we would have set the cut scores even higher.
2. Multiple years of data — We use three years of data to establish benchmarks for comparing schools’ performance and progress. Put another way, the range of scores that determines each school’s peer group is based on three years of achievement data.
That said, we look only at the most recent year’s results when determining progress report grades because it is critical that schools focus on their students’ achievement every single year. Using three years of results would allow schools that performed well for one or two years to mask poor results in a third year. Also, our high school progress reports are based on hundreds — and in many cases thousands — of individual student-level outcomes across multiple measures for each school. In this regard, we are very comfortable with the level of statistical rigor reflected in the results.
3. Small schools vs. large schools — In my original post, I explained why we do not control for a school’s size when determining peer groups and also provided data showing that, on average, the new small schools opened under Mayor Bloomberg and Chancellor Klein have outperformed all other schools on the progress report. These data are not inconsistent with the fact that many other schools — large schools as well as small schools opened prior to this administration — also performed well.
In addition, data do not support the claim that small schools serve less challenging populations. In fact, as the chart below shows, small schools serve more challenging populations in every high-need demographic category with the exception of special education, where there is parity. Small schools are not always better than large schools, but small schools in New York City are more likely to be successful with high-need students. Based on my experience as a teacher and a principal, I would argue that this has a lot to do with the size of the principal’s class. In a small school, the leader is likely to be supervising only 30 teachers, compared to a large school, which often has 150+ teachers. The small school structure has fewer administrative layers, making it much easier to know and support each teacher’s individual needs, which in turn enables teachers to do the same for their students.
4. Class size — Class size is not correlated with progress report score. As the chart below shows, the average class size at schools receiving each letter grade on the high school progress report varies little.
This is not to say that adjusting groupings of students and teachers should be ignored as a possible strategy to increase teacher effectiveness and student achievement. Again, in my own experience as a high school principal, I found that a powerful way to increase the effectiveness of my teachers was to reduce the total number of students for whom each of my teachers was responsible throughout the day and week. As UCLA Professor William Ouchi has demonstrated, when teachers’ total student load decreases, student performance increases.
5. Credits and Regents — Passing five Regents Exams and earning 44 course credits are the core requirements defined by New York State for graduation, and the New York City accountability system is based on these measures. Failing to measure credit accumulation would return us to a system where principals are not accountable for student learning.
Unfortunately, there have always been charges of cheating in our system, and they are dealt with when they are substantiated. The vast majority of school leaders and teachers approach this part of their work professionally and report it when they see a colleague do otherwise. Staff from New York State and New York City randomly monitor the administration and scoring of Regents examinations. As with the Grades 3-8 Math and ELA examinations, any misadministration or security breach is immediately reported for appropriate action. Schools where such reports have been made receive additional monitoring during the administration and scoring process.
Regarding credit recovery, our schools follow the state’s guidelines for awarding these credits. When a student doesn’t pass a required course or doesn’t complete all of the necessary coursework, the student must make up that work; this is the practice we refer to as “credit recovery” and it is a sensible and longstanding practice in schools nationwide. Credit recovery can be achieved in several ways, including retaking an entire course during the school year or attending summer school. In addition, as the State Education Department recently explained: “Sometimes students may come close to passing a course and may have deficiencies only in certain clearly defined areas of knowledge and skill. In those cases, it may not be necessary for the student to retake the entire course. Instead, the student might be permitted to make up those deficiencies, master the appropriate standards, and receive credit.” Like any other process, credit recovery can be abused. This abuse hurts students and is cause for disciplinary action. To that end, we have been working with the state to establish clear guidelines and processes for credit recovery. In October, the Board of Regents adopted a policy for making up course credit and directed districts to draft regulations to implement that policy. We will continue to work with the state to implement the policy and regulations in New York City.
Finally, I’d like to echo the suggestion that we focus our efforts on preparing our students for college. We are beginning to work on the complicated task of tracking students’ performance through their first years after high school and look forward to using this kind of data in the future to increase the rigor of the progress reports.
From the Sept. 23, 2008, Principals’ Weekly:
2008-09 Progress Report Cut Scores: Elementary and middle schools
When they released elementary and middle school Progress Reports last week, Mayor Bloomberg and Chancellor Klein celebrated the accomplishments of schools in increasing student progress across New York City and noted that those gains, along with the work that remains to be done, make it appropriate to raise the bar – something we said we would do as Progress Report grades rise. We are setting new cut scores that elementary, middle, and K-8 schools need to achieve to earn a grade of A, B, C, or D on next year’s Progress Report. The new cut scores are as follows:
A – Progress Report score of 68+
B – Progress Report score from 54 to 67.9
C – Progress Report score from 43 to 53.9
D – Progress Report score from 33 to 42.9
F – Progress Report score less than 33
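Read as thresholds, the bands above reduce to a simple lookup. A minimal sketch of that mapping (the function name is mine, and it assumes scores are reported to at most one decimal place, so each band’s upper bound of X.9 is equivalent to “below the next cut score”):

```python
def progress_report_grade(score: float) -> str:
    """Map a 2008-09 elementary/middle Progress Report score to a
    letter grade using the cut scores listed above.

    Assumes scores carry at most one decimal place, so "54 to 67.9"
    is the same as "at least 54 and below 68".
    """
    if score >= 68:
        return "A"
    elif score >= 54:
        return "B"
    elif score >= 43:
        return "C"
    elif score >= 33:
        return "D"
    else:
        return "F"
```

For example, a score of 67.9 falls just under the A cut and earns a B, while 68 earns an A.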
In the October release of the ITT, schools will receive a new Progress Report data file that contains the school’s Progress Report “modeler.” These new cut scores will be built into the modeler to allow schools to run scenarios that help them forecast how they may perform on next year’s Progress Report using these new cut scores.