Three weeks ago, Hornbeck announced that the number of students getting passing marks in mathematics, science and reading achievement tests had increased by at least five percentage points between 1996 and 1997.
Most schools increased the number of students getting passing grades over the last two years by at least five percentage points, and many had gains of 10 to 15 points, district sources said.
But experts who have followed school reform nationwide said it's best not to jump to conclusions, especially when the test has been used for only two years.
``It's too soon to know in Philadelphia,'' said Daniel Koretz, an education researcher with the non-profit Rand Corp. public policy think tank.
Koretz and others said it's fairly common for scores to go up when a new test is used.
Hornbeck introduced the Stanford Achievement Test last year.
``Usually after you tell people you're going to look at scores, they'll go up,'' Koretz said.
Once familiar with the test, teachers build lesson plans around the test material, he said.
Teachers have said they didn't have the chance for much preparation in 1996.
But in 1997, teachers had new district academic standards from which to teach.
They also had more time and material to prepare children for the Stanford test. The three parts of the test were spread out over a week. Teachers said district administrators began distributing material for the test about a month before it was given.
Many principals bought primers from the test company.
Robert J. Wright, a professor of educational psychology at Widener University's Center for Education, said that rising test scores could reflect smarter students.
But it is hard to gauge how accurately the achievement tests measure what students are learning overall if teachers are skewing their lesson plans to prepare them for the test, he said.
``How much time is directed to the knowledge required for the test? Where do you take the time from?'' he asked.
Mitchell Chester, the district's executive director for accountability and assessment, credited many of the individual schools' gains to better teaching methods.
``I don't think the kind of growth we've seen this year can simply be due to being more test-smart,'' he said. ``A lot of what we've seen this year reflects a much more focused, serious effort to improve the instructional programs in the schools.''
Tom Corcoran, an education researcher at the University of Pennsylvania, said evidence supports Chester's argument.
Corcoran interviewed teachers during the last school year while preparing a report for Greater Philadelphia First Corp., a business group raising funds for the school district.
Teachers prepared students for the test, which should cause scores to rise, but the district also tested more students in 1997 than in 1996. That would put a drag on overall district scores, he said.
Preparing students for a district-wide test can be good if the test itself reflects the district's curriculum, he said.
Corcoran, co-director of the Consortium for Policy Research in Education, said that he was still unsure how much of a role test preparation played in April's scores.
``If you start out low, then you can make bigger gains,'' Corcoran observed.
Getting students above the average level usually proves more difficult, he said.
A key question is how much time was spent preparing solely for the test, said Robert Meyer, a University of Chicago researcher who has analyzed data on standardized tests for the National Academy of Science.
Spending some time on test-taking skills is legitimate, he said. ``It's suspicious when they spend three weeks teaching for the test . . . they should be teaching the whole year,'' he said.
Lessons should cover the general content of the test, which is tied to the district curriculum, as well as skills such as writing and problem-solving, he said.
Meyer gave Hornbeck credit for seeking to test all children. Some districts try to keep failing students from taking achievement tests in order to artificially boost scores, he said.
The way the district is administering the test, however, could distort the picture, Meyer said.
A more accurate measure is to give tests at the beginning and end of the school year, to see if students learn as the curriculum is upgraded, he said.