
 

 

NAEP: The National Assessment of Educational Progress

 

Q. Should we just scrap all these expensive, confusing standardized tests, and just go with one nationwide test that everybody would have to take? Then we could compare from school to school, from city to city, and from state to state.

 

We already have something like that. It's the NAEP (pronounced "nape") - the National Assessment of Educational Progress. Housed within the National Center for Education Statistics at the U.S. Department of Education, it is nicknamed "The Nation's Report Card."

 

The NAEP has been around since 1969, giving basically the same questions to 4th, 8th and 12th graders in order to track results over many years. Scores aren't returned for individual students or schools, though they are obtainable by certain officials. Scores are published on an aggregate basis for geographic regions and for student subgroups separated out by age, gender, racial background and so forth.

 

As far as evaluating student abilities in math and reading, the NAEP has been a solid, reliable tool. Even though per-pupil school spending in the United States has more than doubled in the 30+ years of the NAEP, even after adjusting for inflation, NAEP test scores have remained essentially flat.

 

Another important use of NAEP data is to compare a state's student test scores on the NAEP to the scores reported from that state's own statewide standardized testing program. In a number of states, there is a huge gap between the two, and the state results are almost always much, much better. That creates a credibility gap in the eyes of the public, raising the possibility that state officials "dumbed down" the statewide tests so that far more students would pass, in order to make themselves look better on paper.

 

That kind of cross-referencing ability is considered crucial information from an accountability standpoint. So are studies that are possible with long-term NAEP data. For example, Margaret Raymond and Eric Hanushek of Stanford University have shown that states with "high-stakes" accountability sanctions attached to their statewide testing programs tend to have higher NAEP scores among 4th and 8th graders. That suggests that high-stakes accountability measures give education officials an effective incentive to deliver math instruction better and avoid the sanctions.

 

Their Stanford colleagues Martin Carnoy and Susanna Loeb showed that the tougher the accountability sanctions, the higher the NAEP scores, markedly so for black and Hispanic students. That implies that raising the bar helps the kids who need help the most.

 

Similarly, NAEP data allowed researchers Harold Wenglinsky et al. to show that teachers with master's degrees do NOT make a difference in student NAEP scores (see p. 31 of the Conclusion), but that majoring in math or science in college does, and so do certain classroom practices such as hands-on activities. The data also showed that students did WORSE when their teachers had taken a lot of "classroom management" course work. This suggests that school districts could spend their professional development budgets more wisely than on paying for teachers' master's degrees in education.

 

Although the NAEP is now given only on a scientifically selected spot-check basis, to a handful of students nationwide, many observers believe that the powers that be want to do exactly as you suggest and switch to the NAEP as "the" standardized test for all. You can tell by the eerily similar language in NAEP documentation and in federal education legislation over the past few decades, such as Goals 2000 and No Child Left Behind, that the NAEP is slated to become the national test if circumstances can be arranged.

 

The problem with that is that it would be sooooo easy to insert a certain political point of view into the questions. Government education officials contract with the company that writes the test questions, the Educational Testing Service, which is accountable to the National Assessment Governing Board, a board currently composed of 26 people, mostly politicians and bureaucrats, appointed by the U.S. Secretary of Education.

 

Let's say that at some point in the future, the President is politically left-of-center, and Presidential appointees naturally reflect the points of view normally associated with liberal political opinions. Would it be so surprising if the questions revealed how much the student agrees or disagrees on political hot potatoes? The bias might or might not be deliberate, but test questions could easily be written so that, if you held the opposite point of view on some issue from the prevailing view in the government, your answer could be marked "wrong."

 

You can see what a powerful political tool that could be, to shape public opinion, keep local and state education officials in line, and act as a gatekeeper for everything from grants to government sanctions.

 

Then there's the whole area of data collection and data mining with the NAEP. Since a lot of demographic and rather personal questions are asked of the students in order to group their responses with those of others similarly situated, the NAEP does contain individual student data down to the microrecord level. If the questions become more and more politicized, and the results can be stored and exchanged, who knows what uses that data could be put to in the future . . . uses that parents and students won't like, but can't do anything about.

 


Homework: In fact, it's already happening. Read more about calls to turn the NAEP into more of a tool for assessing attitudes, values, opinions and beliefs rather than testing academic content at:

 

http://eagleforum.org/educate/2008/apr08/NAEP.html

 

 

By Susan Darst Williams www.ShowandTellforParents.com Testing 08 © 2008
