The 2007 Expect Success Annual Report reveals academic gains for OUSD students since 2002.¹ Gains are important, but shouldn’t a narrowing of the achievement gap be demonstrated by now, given all the local, state, and federal efforts over the last several years to make schools “accountable” for educating ALL students? My look at OUSD's Annual Measurable Objectives (AMOs) from 2002-2007 showed that test scores have increased, but that the achievement gap actually increased as well.²
As far as the overall increase in test scores goes, I thought it would be wise to sort out the internal factors (such as the initiatives of Expect Success) from the external factors (like the state and federal demands). I speculated that, at the very least, gains have been due to standardization efforts. Then I thought I would get the opinion of Steve W., a veteran OUSD teacher with an extensive body of knowledge about testing. Not only does Steve understand the testing process and how to interpret the data, but he would NOT have a motive for inflating the data for PR purposes. I knew his comments would add useful layers to help explain what is really going on.³
Steve believes that the Academic Performance Index (API) is a better indicator than the AMOs for looking at improvement because it factors ALL student levels into its formula. As Steve explained, “…looking at the AMO only counts the number of students Proficient and above, so if a group had lots of students just below the Proficient level, that group would be more likely to show improvement than a group where most of the students are Below Basic…” This point makes sense. I originally chose the AMOs because those figures are the ones that NCLB looks at when determining a school’s future; they supersede the API, the state's measure of accountability.
Steve said that by looking at the AMOs, “…better performing groups would show a bigger increase in the percent of students Proficient than lower achieving groups.” This insight explains what I originally found.
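To see Steve's point concretely, here is a minimal sketch. The performance-band counts are invented (they are NOT OUSD data), and the band weights only approximate those used in the state's API formula. It shows how the same underlying student movement produces a much larger AMO-style gain for a group clustered just below the Proficient cutoff than for a group concentrated at Below Basic, while an API-style index registers progress for both:

```python
# Hypothetical illustration only: invented band counts, not OUSD data.
# The band weights approximate those used in California's API formula.

BANDS = ["Far Below Basic", "Below Basic", "Basic", "Proficient", "Advanced"]
API_WEIGHTS = {"Far Below Basic": 200, "Below Basic": 500, "Basic": 700,
               "Proficient": 875, "Advanced": 1000}

def percent_proficient(counts):
    """AMO-style view: only Proficient and Advanced count."""
    total = sum(counts.values())
    return 100 * (counts["Proficient"] + counts["Advanced"]) / total

def api_style_index(counts):
    """API-style view: every performance band contributes to the score."""
    total = sum(counts.values())
    return sum(API_WEIGHTS[b] * n for b, n in counts.items()) / total

def move_up_one_band(counts, frac=0.25):
    """Apply the same underlying gain to any group: a quarter of each band
    moves up one performance level."""
    new = dict(counts)
    for lower, upper in zip(BANDS, BANDS[1:]):
        moved = counts[lower] * frac
        new[lower] -= moved
        new[upper] += moved
    return new

# Group A: many students sitting just below the Proficient cutoff.
group_a = {"Far Below Basic": 5, "Below Basic": 10, "Basic": 60,
           "Proficient": 20, "Advanced": 5}
# Group B: most students concentrated at Below Basic.
group_b = {"Far Below Basic": 30, "Below Basic": 45, "Basic": 15,
           "Proficient": 8, "Advanced": 2}

for name, g in [("Group A (clustered at Basic)", group_a),
                ("Group B (clustered at Below Basic)", group_b)]:
    improved = move_up_one_band(g)
    print(f"{name}:")
    print(f"  AMO view: {percent_proficient(g):.0f}% -> "
          f"{percent_proficient(improved):.0f}% Proficient or above")
    print(f"  API view: {api_style_index(g):.0f} -> {api_style_index(improved):.0f}")
```

With identical underlying movement, Group A's percent-Proficient figure jumps about 15 points while Group B's moves about 4, yet the API-style index rises for both (and a bit more for Group B). That is exactly the distortion Steve describes.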
So, using the new information from Steve, I thought I’d compare the API figures of 2002 and 2007. Here is what I discovered:
1. The achievement gap between Asian and African American students increased 31 points. The gap was 145 points in 2002 (684 minus 539) and 176 points in 2007 (778 minus 602).
2. The achievement gap between White and African American students increased 13 points. The gap was 267 points in 2002 (806 minus 539) and 280 points in 2007 (882 minus 602).
3. The achievement gap between Asian and Latino students decreased 28 points. The gap was 190 points in 2002 (684 minus 494) and 162 points in 2007 (778 minus 616).
4. The achievement gap between White and Latino students decreased 46 points. The gap was 312 points in 2002 (806 minus 494) and 266 points in 2007 (882 minus 616).
When analyzing the achievement gap using API scores, the picture is brighter for Latino students. For African American students, the news is still not good: the achievement gap between African American students and the Asian and White subgroups has increased over the past several years. In 2002, Latino students had a lower API than African American students (494 to 539); the 2007 results show that they have now surpassed African American students (616 to 602).
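For anyone who wants to check the arithmetic behind items 1 through 4, here is the same calculation in one small script, using only the 2002 and 2007 subgroup API scores cited above:

```python
# Subgroup API scores as cited above (2002 and 2007).
api = {
    "Asian":            {2002: 684, 2007: 778},
    "White":            {2002: 806, 2007: 882},
    "African American": {2002: 539, 2007: 602},
    "Latino":           {2002: 494, 2007: 616},
}

pairs = [("Asian", "African American"), ("White", "African American"),
         ("Asian", "Latino"), ("White", "Latino")]

for higher, lower in pairs:
    gap_2002 = api[higher][2002] - api[lower][2002]
    gap_2007 = api[higher][2007] - api[lower][2007]
    change = gap_2007 - gap_2002
    trend = "widened" if change > 0 else "narrowed"
    print(f"{higher} vs. {lower}: {gap_2002} points in 2002, "
          f"{gap_2007} in 2007 ({trend} by {abs(change)})")
```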
Steve explained that, after observing for a number of years, he has learned that “... there is rarely a one-to-one correlation between education program changes and test scores. Just when you think you have a pattern figured out, the next year’s data doesn’t support your hypothesis.” He mentioned other things that are never publicized but are definitely part of the picture, for instance, “Fair measurement is also made more difficult because the tests are not the same every year.” Changes in the way tests are scored can also affect the numbers. Steve's example was the seventh-grade writing test scores, which shot up throughout the state last year. He subsequently learned that “…the state had changed the weighting it gave to mechanical errors in scoring the tests, so the improvement was not a sign of writing [improvement], it was a sign of the scoring getting easier.”
Steve describes other contributors that may have increased test scores over the years, for instance, eliminating practices that used to depress test scores, such as “…telling students that the tests don’t matter or having students test too long each day.” Also playing a role is the heightened awareness among parents, students, and teachers about the importance of these tests. Additionally, teachers are better able to prepare students because they are more familiar with the format and content of the tests, and textbooks are also better aligned to the tests.
Steve’s concluding opinion is that “… most of the improvement in test scores is artificial, based on changes in how components of the tests are weighted and not real improvements in students’ knowledge…” though he adds, “… the jury is still out on the question of whether some of it reflects real improvement.”
As far as OUSD subgroups go, he wonders if “… some of the gains for Hispanics and Asians represent a decrease in the percentage of students in those groups who are recent immigrants.”
As for the smaller gains made by African American students: noticing that this group had the biggest drop in enrollment between 2002 and 2007, Steve suggests that a set of African American families with higher incomes likely removed their children from OUSD by moving out of Oakland during this time, and he adds, “If you remove some of the highest performers from any group, that group’s scores will suffer.” The inverse would be true as well, and it is one indisputable point that counters the fantastic claims of certain charter schools. Frank discussion of this factor is too frequently avoided.
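Steve's composition-effect point is easy to see with made-up numbers. This sketch uses invented scores (again, not OUSD data) to show how a group average falls when a few of its highest scorers leave, even though no remaining student's score changes:

```python
# Hypothetical illustration of a composition effect: invented scores, not OUSD data.
scores = [720, 710, 690, 650, 640, 600, 580, 560, 540, 510]

def mean(values):
    return sum(values) / len(values)

before = mean(scores)
# Suppose the three highest-scoring students leave the district.
remaining = sorted(scores)[:-3]
after = mean(remaining)

print(f"Average before departures: {before:.0f}")        # 620
print(f"Average after the top three leave: {after:.0f}")  # 583
# The average drops even though every remaining student's score is unchanged.
```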
So who knows where things stand after all? If we're going to fixate on numbers that supposedly reflect gains, or stagnation, in student performance, and judge schools by them, wouldn't it be nice if more of the subtle factors that influence these numbers were objectively presented in reports to the public?
¹ The figures are on pages 11 and 12. Read the report at http://public.ousd.k12.ca.us/docs/13668.pdf.
² “Who would have guessed?” Perimeter Primate posting (May 11, 2008).
³ Comment #23 on The Education Report, entry for May 13, 2008, “The elephant in the race.” http://www.ibabuzz.com/education/2008/05/13/the-elephant-in-the-race/
1 comment:
My fundamental problem with the API and the testing in general is that the dynamics are flawed to start with. Using a multiple-choice model (STAR) as a basis for any kind of measurement is a flawed concept; the test does not determine content knowledge - at best it determines content recognition and good guessing skills (since 2 of the 5 options are just plain stupid, anyway) - and yet it has become the standard by which to determine success or failure. Even the test creators have publicly acknowledged the failure of using STAR as the yardstick.
If you went to a foreign country, were given a multiple-choice test, and scored well, it would not imply you know anything. It is high time NCLB, and the whole pathetic process by which it became a political tool, was put to rest. Our students are not succeeding because the testing mentality has so polluted the educational waters that rich content, diversity, and firing up the minds of students have been replaced by mediocrity and a fear of being fingered as inept.
I enjoy your blog and will be linking it to my own at misterwriter.com. MisterWriterLives