The Testing Bounce-Around on School Accountability


Readers may have come across news that Rhode Island students’ scores on the PARCC tests remain underwhelming.  This paragraph from Dan McGowan puts it succinctly:

The majority of public school students across Rhode Island still aren’t meeting expectations in math or English, according to the latest round of standardized test scores released Thursday by the R.I. Department of Education.

Of course, the problem is that our education bureaucrats change the test, wholesale, every time it has been around long enough to begin pointing toward actual conclusions about actual students.  One needn’t be especially cynical to suspect that the problem with the recently abandoned New England Common Assessment Program (NECAP) tests wasn’t pedagogical, but that they began to allow Rhode Islanders to trace students’ progress (or lack thereof) through enough years of their schooling to begin holding the system accountable.

The best way to resolve that particular institutional friction, of course, is to change the test.  That buys the system a few years of excused “adjustment” and then another four or five years during which the tests are acknowledged to be measuring something, but without enough data to draw conclusions.  Then… change the test again.


This post shouldn’t be read as an endorsement of standardized testing as an ideal mechanism for accountability.  Much preferable would be empowering parents to judge which schools will better serve their children and to direct their education resources there by some mechanism that isn’t as intrusive as changing houses.

But there has to be some way for communities to judge how well their schools are performing, and without a market dynamic, the waters are too easy to muddy.  Parents don’t want to feel as if they’ve made bad decisions for their children, and when the decision is limited to uprooting your entire life and moving, the incentive is to make the best of what you’ve got.

With that framing, the handling of standardized testing is simply an extension of the strategy.  The game is to bamboozle each generation of parents to keep the corrupt union-driven system going.




  • Joe Smith

    Okay Justin… but if those elites are so smart, then why:

    (1) Switch to a test that is easily benchmarked against the supposed #1 state for education (Massachusetts)?

    (2) Why keep the NECAP science?

    (3) Why switch to the SAT/PSAT, which is now benchmarked against years of data (well, College Board changes it up, but one can use concordance tables) and at the national level? Even against private schools (although it’s not always apples to apples).

    “empowering parents to judge” = please, as if parents look at NECAP/PARCC scores. RI is a small state, with plenty of data around depending on what is important to a family. I suspect reputation and word of mouth are fairly established elements and more the starting point than looking at whether 39% versus 51% proficiency in ELA tells you anything.

    However, you have a point in that RIDE is a bit lazy in the analytics. For example, the “math” proficiency is misleading. Math at the MS/HS levels has different tests – Alg 1, Alg 2, Geometry, or “math” – the last one actually being the lowest level. What the poor scores in “math” tell you is that if your child is behind by 8th or 9th grade, it’s an uphill battle to catch up.

    RIDE should be reporting the percentage of kids by grade in Alg 1, their proficiency, etc., instead of making you figure out how to get those numbers yourself.

    Or RIDE should be listing whether a school’s results are paper- or computer-based, given the significant upward bias for paper takers (which are only a few, but they skew the results – have a low-minority/low-free-reduced-lunch population and a skewed gender balance (females test better in ELA) and voila, you’re in Dan’s 75% club!). Hmm… wouldn’t want to make our charters look underperforming now by calling out the ones RIDE gave waivers to stay on the old format…

    What is striking – for even a few years of PARCC and NECAP – is the consistent gap between males and females in ELA, since it cuts across race, income (look at Barrington, EG), school type (charters and traditional schools show the same results), and location (north/south; suburban/city).

    Hmm… where are our Governor and Education Commissioner on how our young boys are being left behind in English?

    • Justin Katz

      I’m not entirely sure to what extent you’re actually responding to me and to what extent you’re commenting on the general state of affairs, but I want to make something clear: When I write about “empowering parents to judge,” what I mean is that parents should have increased ability to choose their children’s schools, thus providing accountability to the system no matter what the test results (or any other metrics) say.

      And to answer your closing question, presumably the governor is too busy judging entries in her girls-only Governor for a Day contest to worry about the fact that boys are clearly being underserved by government-run schools. I raise that point most times new data come out, because it’s so obvious.

      • Joe Smith

        The extent to which I’m responding to you involves your inference that the motivation behind dropping PARCC was to buy more “unaccountable” time.

        If that is the motivation, why switch to tests that are in fact accountable right away and benchmarked against larger (and arguably more competitive, if you believe MA is the top state for K-12 education) comparison sets? Why keep the same testing platform and implementation (RICAS and PARCC use the same system)? Why, with science, keep the same test (NECAP) and migrate it from 4th graders to 5th graders, so that you have an immediate benchmark because many of the same students who took the 4th grade science NECAP will take it again as 5th graders?

        It just seems that if this were some conspiracy among the public school monopolists, they would have picked different tests with no immediate benchmarking…
