Ideally, a scoring rubric should be based on the responses of many hundreds of children who are properly prepared for the tasks. While all of these tasks have been pilot tested with children, in most cases the testing has not been sufficient to provide a solid base for a complete scoring rubric.
There is no universal agreement on how to structure scoring rubrics. Various groups currently active in creating alternative assessments in mathematics have used different styles and different levels of specificity (for example, four vs. six levels of gradation) in their scoring rubrics.
A complete analysis of scoring rubrics would require a foray into the thorny problem of judging individual performance in group settings. Although we do intend that these prototypes will encourage teachers to use group work, we have deliberately set aside the daunting task of codifying rubrics for assigning individual grades when students work in groups.
There is continuing debate between proponents of "holistic" and "analytic" approaches. Does one look at every isolated component of a complex response, or should one make a general, overall judgment of the child's response? While it is important to be fairly specific about what the task is intended to elicit and about what is to be valued in children's responses, there is no compelling evidence to favor one position over the other. The protorubrics given in this book can easily be adapted to different styles.
Moreover, protorubrics are in some ways analogous to standards: they express goals, ensure quality, and promote change in assessment. Hence, protorubrics by themselves may have a unique contribution to make to assessment reform, whether or not they are ever formalized into polished rubrics.
The protorubrics in Measuring Up are structured around three levels: high, medium, and low. Rather than try to define precisely what constitutes a "high" response, the protorubrics list only selected characteristics of a high response. We leave to others the