Confused? So are we.
Parents, teachers, kids -- everyone involved in public education in Colorado, it seems, eagerly awaited this fall's release of the School Accountability Reports (SAR). Pushed by Governor Bill "No Excuses" Owens, the reports represent the first statewide effort to evaluate schools objectively, using student scores on standardized tests to rank overall academic performance.
The reports were released in September. But the buzz over all those harsh grades -- Denver earned the ignominy of having eleven "unsatisfactory" elementary schools, five super-dissed middle schools, and the five lowest-scoring high schools in the state -- quickly faded. As it turned out, the reporting process was not without flaws. Several alternative schools weren't rated at all; small schools saw their rankings soar or crash on the statistical vagaries of just a few students' performances; and some suburban school districts hollered that questionable data-gathering procedures saddled their shiny academies with "high" rather than "excellent" rankings.
Errors have been found in numerous categories of data gathered for the SAR, from class size to test scores to teachers' salaries. The reports have also been blasted for relying primarily on Colorado Student Assessment Program (CSAP) test results for the overall rankings; critics argue that the test doesn't begin to provide a complete picture of how schools cope with large bilingual programs, soaring student turnover rates and other formidable challenges.
"I don't think you can trust [the SAR] at all," says Vicki Newell, director of public policy for the Colorado PTA. "We feel it stigmatizes children as well as schools."
Perhaps the protests could have been avoided if the state bean counters had allowed for greater variation in the types of school experience under evaluation. Standard test scores, okay, but should all schools be judged by the same inflexible formula? We think not.
Consider how much easier the reports would be to understand if they took into account the vast differences between inner-city, suburban and alternative schools. That equation would result in much more accurate report cards, such as the hypothetical examples to the right.
FREQUENTLY ASKED QUESTIONS
Q. What is CSAP?
A. The Colorado Student Assessment Program is an evaluation tool administered to Colorado students that measures their ability to reach state-determined academic content standards. Or, to put it another way: It's a big, mean, scary-ass test that most politicians in this state would flunk dismally but that ninth-graders are expected to ace with little or no preparation.
Q. What happens to schools that receive "excellent" ratings on their report cards?
A. The staff are offered hearty congratulations and modest cash rewards, usually in the form of small grants to aid in the ongoing quest for excellence.
Q. What happens to schools ranked "unsatisfactory"?
A. They are punished severely. A troll from the Colorado Department of Education stands at the school's front doors and shouts, "Shame! Shame!" Then he flogs the principal with a check, known as an "improvement grant." The grant -- $150,000 for elementary schools, $200,000 for middle schools and $250,000 for high schools -- is supposed to make everyone feel worse by making them realize how much money they're going to have to spend to fix things.
Q. Seriously, what good are these report cards?
A. Seriously, the overall rankings may be the least informative aspect of the reports. Although the process still has many bugs in it (no one budgeted the cash to print the reports in Spanish, for example), the SAR does contain a lot of information parents didn't have before about the funding of their school district and the stability of staff.
Jared Polis, member-at-large on the Colorado State Board of Education, says he's had "tremendous feedback" about the reports from parents. "There are hundreds of things on there besides test scores," he notes. "They're also supposed to be useful to taxpayers. We feel people are more willing to invest in education when they have some idea of what the money is being used for."
Polis says the raw data behind the reports should be particularly useful to individual schools. The real test comes next year, though, when the reports begin to track a school's performance over time. After all, the fact that one school has more proficient readers than another can be the result of a number of factors, but how well is the "excellent" school teaching kids who are already reading above grade level? Could a "low" school actually be making greater strides with less-proficient students?
Tune in next year to find out.