In 2012, I had recently taken a new position with Navigation North Learning Solutions, and among my projects I was asked to conduct research for the Smarter Balanced and PARCC teams: a high-level assessment and report on the readiness of public schools across all 50 states to move to digital standardized testing. For the few states already accustomed to delivering their annual academic assessments online via laptops or tablets, coordinating digital exams for hundreds if not thousands of students at a given school site across an entire state was simply a continuation of what they’d been doing for years. For most states, however, pencil-and-paper ‘bubble tests’ were still the mainstay, and the challenge of transitioning was considerable.
Contrary to what many outside of education would assume about which states were dragging their feet on implementing more digitized, innovative assessments, it was in fact many of the smaller, largely rural states that, due to their size and some other factors, had been pioneering the move to digital testing. Large states, on the other hand, like my home state of California (the de facto hub of modern technology), faced some of the largest obstacles in moving from paper-and-pencil to screen-and-mouse testing. Beyond determining a rotating schedule of students by grade level across 5-10 days and cross-training educators and other staff to guide and deliver the exams, many of California’s classrooms were not even remotely equipped with 1:1, or even 1:50, student-to-device ratios. The available Internet bandwidth at many sites was not adequate to run the assessments on 100-200 devices simultaneously. Existing devices had largely gone unserviced for years and lacked the processing speeds and software updates to meet the minimum requirements for loading and running the testing sites reliably. The list of challenges was long, daunting, and expensive.
In the midst of this effort, the national testing agency ETS asked if we’d organize a white paper of sorts around the notion that this effort might make way not just for exams, but for comprehensive K-12 platforms: platforms that delivered assessments, gave educators timely access to the results to help guide and shape instructional responses for their students, informed professional development opportunities based on needs surfaced across student populations in given districts, and served as a conduit for new and more diverse, effective instructional resources to reach educators and the students they support.
I was able to lean on a few national leaders at the time, interviewing Douglas Levin, Steve Midgley, David Stevenson, and Prasad Ram to examine their related work and experiences in this domain and to seek their thoughts on the target topics. Luckily for our industry, all of these incredible thinkers and doers are still largely at work in this space today in various forms, which is somewhat unusual given the “washout” of so many edtech entrepreneurs and pioneers from the 1990s through to today.
So why am I sharing this story again now?
Well, last week out of the blue, I was notified by a research repository service that a “key piece of research” connected to me had its long-time link abandoned by the hosting agency.
So here I am, over a decade later, being prompted by a “bot” to tend to my “critical contribution” (the bot’s term, not mine). I’ve taken a bit of time away from my regular work, which still involves daily efforts to design and build online platforms that help educators and educational leaders collaborate and grow in ways that enrich educational opportunities for students and empower communities, to reconnect this nostalgic piece back to the Internet.
Luckily, I was able to find a copy of the original publication on a 2010-era external Seagate USB drive in a drawer of an old desk in my home office. I thought it best to put together this post to give the report some context for my current (ahem, younger) colleagues and a new place to live, so I can “re-share” it with the research repository AI agent that so kindly reached out asking for an alternative pathway to the report: