Assessments such as Writing Solutions allow high school students to improve their writing and college readiness skills.
How Do Online Assessments Support Student Learning?
Editor’s Note: Alexander Schmitz is a technology and nonprofit consultant and a blogger. He writes about business and education, and is the founder of the skills school The Courageous and the Conscientious.
When I read about the poor scores that nearly one-third of high school graduates earn on assessments such as the ACT, the SAT, and a handful of other tests, I want to shake my head.
At the same time, I believe that this moment provides an opportunity for us all.
Don’t get me wrong. I’m glad when students do well on these tests. What’s more, a study in Math Psychology this year found that math-challenged students who take these tests go on to earn higher grades. But that study likely overestimates the impact of scores, because it looks at just one year of practice in a single subject: math. Without accounting for the differences among the tests, that is hardly a rigorous diagnostic.
Now, while I don’t attribute the struggles of so many graduates to test scores alone, I do believe that these scores can have a profound effect on our schools.
And so in 2014, I ran an experiment of my own. Why use standardized testing only to evaluate teachers and schools? Why not create a special test that uses innovative technology to transform and boost student learning?
The first challenge was developing a test that would actually predict how students would perform on their high school graduation exams. The key was administering it to students who were completely unaccustomed to taking tests, then evaluating their performance through Clever, a free online education platform that can essentially replicate the exam for students unfamiliar with it.
After using this test and other assessments through Clever, I decided to ask more fundamental questions: whether the scores students earn on those tests actually matter for their futures, how technology can be used to create tests that improve student learning, and how students can create their own tests.
To pursue these questions, I conducted an experiment with more than 7,500 high school students, testing their achievement with two kinds of assessments: low-quality and higher-quality tests. Each required baseline scores at a 5th-grade level. My conclusion: while students taking the low-quality tests found it hard to improve their scores, students taking the higher-quality tests actually did improve.
The final results have already received close attention. Robert White, former president of Princeton Review, has called this “the singular achievement of the 21st century,” and looks forward to applications of this kind of technology in other areas.
A summary of the findings also made waves when it was published in The Atlantic and on Twitter.
And then there’s President Trump’s attention to this subject. At last week’s State of the Union address, he called on Congress to “provide every student in America access to the world-class education that they deserve.”
So I hope these results reveal further factors that can transform our nation’s high schools and classrooms as we go about reform.
The student survey
Not everyone is convinced by this, though. Advocates for improving high school graduation rates often promote selective demographics. I’m looking at you, “No Child Left Behind,” and your “Evaluating the Performance of Teachers” statement. (And I do appreciate that sometimes there are no common measurements.) In other words, many believe that parents should choose schools based on other factors while still valuing school status.
A consortium of advocates for students, educators, and scholars submitted a report in 2017, “Making Long-Term Change in Higher Education and Teachers,” asking, “What standardizes learning for teachers and students?” They chose 18 characteristics of good teaching that rely heavily on the four “key discovery” dimensions of the Educational Testing Service (ETS) and the National Assessment of Educational Progress (NAEP).
They then surveyed participants about four standardized tests: algebra, the IGCSE, the ACT, and the NAEP. When they compared the responses of the 17,000 respondents across each test (they asked over 20,000 questions), they found that while “most participants believe that the performance of high school students was highly influenced by the factors they chose to focus on, teachers and students also use the factors they don’t focus on.”
One notable result was that students admitted to “selective schools” answered the test more similarly to their selective-school classmates than to students from “nonselective schools.”
I hope that readers of this article will benefit from these findings. More broadly, I hope they will consider the learning environment at large.