Sunday, November 29, 2015

A Different Way of Thinking about High School Final Exams

I am perusing a high school's final exam schedule, the one I created months ago. In my current post as High School Assistant Principal, it is my job to engineer the final exam schedule, checking and double-checking for accuracy and then posting the result to anxiously awaiting students and their families via a shared Google doc. This process is accomplished several months in advance of the actual exams, so the December exam schedule is done and dusted (and posted) in October.

To put things in perspective for you, exams at my school stretch out over a four-day period. On each day, two two-hour exam sessions are held: one from 9 to 11 AM and the other from 1 to 3 PM. During "exam week," the regular school schedule is suspended; only exams take place during the week. Once a student is finished with an exam on a particular day, he or she can return home. Woohoo.

Putting the schedule together is a tedious affair. Courses, sections, section sizes, physical spaces, rooms, desks and chairs, and proctor availability all must be taken into consideration. Each course is given a specific slot in the overall exam schedule, and each course is further divided into several sections of different groups of students. Scheduling all sections of a course to sit the exam on the same day and at the same time saves the teacher from having to create several different versions of the exam, but it also makes the creation of a separate examination schedule a complex necessity. The result is a sizable matrix: a jumble of teacher names, room numbers, class sizes, and proctor names.
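For the technically inclined, the bookkeeping above is essentially a small constraint-satisfaction problem. Here is a minimal, hypothetical sketch in Python (the course, room, and proctor names are invented, not my school's actual data) that assigns every section of a course to the same exam slot while respecting room capacity and one-proctor-per-room availability:

```python
from dataclasses import dataclass

# Hypothetical sketch: course, room, and proctor names are invented.
# Eight slots: four exam days, each with a morning and an afternoon session.
SLOTS = [(day, session) for day in range(1, 5) for session in ("9-11 AM", "1-3 PM")]

@dataclass
class Section:
    course: str
    size: int

def build_schedule(courses, rooms, proctors):
    """Assign all sections of each course to one shared slot.

    courses:  {course_name: [Section, ...]} -- every section shares the course's slot
    rooms:    {room_name: seat_capacity}
    proctors: names available each session (one proctor per room per slot)
    """
    schedule = {}                          # (slot, room) -> (section, proctor)
    for slot, (course, sections) in zip(SLOTS, courses.items()):
        free_rooms = set(rooms)            # rooms still open in this slot
        free_proctors = list(proctors)     # proctors still open in this slot
        for section in sections:
            # Smallest free room that still seats the whole section.
            fits = sorted((cap, name) for name, cap in rooms.items()
                          if name in free_rooms and cap >= section.size)
            if not fits or not free_proctors:
                raise ValueError(f"cannot place a {course} section of {section.size}")
            _, room = fits[0]
            free_rooms.remove(room)
            schedule[(slot, room)] = (section, free_proctors.pop())
    return schedule
```

A real schedule layers on many more constraints (individual student conflicts, proctor rotations, accessibility needs), which is exactly why the matrix gets so tangled.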

As I am perusing the schedule for the 90-millionth time, quietly fretting about individual student conflicts and possible errors, I am becoming despondent. In just a few days, several hundred of our students will sit down for a relatively stress-filled, two-hour period to demonstrate what they know, what they have learned. At my school and hundreds of other secondary schools, I imagine that for the most part, students are sitting down to a teacher-made booklet of questions: some multiple choice, fill-in, and short answer. Some exams will include a short essay component. A handful of exams will be entirely essay-based. With the expectation that exam results be posted before students and teachers depart for winter break, the schedule is probably tight and does not leave room for much other than an SAT-style exam. So students demonstrate their learning in a two-hour session, on a piece of paper mostly composed of objective questions. I am thinking that there must be a better way.

There is.

First, let's dispense with the traditional end-of-semester timetable for examinations. Like the structure of the rest of the academic year, the end-of-term exam schedule is a throwback to the late 1800s. That calendar structure was itself created by social reformers trying to find a compromise between the agrarian calendar that was shaping most people's lives more than a century ago and the emerging urban work calendar that was shaping the lives of a growing number of city denizens. With the vast majority of today's students not needing to return home to help out on the farm, surely it is time we revisit the notion that exams must be given before plantings and harvests.

I'd like to suggest another reason for throwing out the end-of-term examination schedule: the nature of learning itself. By creating an exam schedule divided into two-hour blocks, we are essentially telling students that they damn well better demonstrate their learning in this two-hour session; otherwise, we will assume that learning has not really taken place (and we will fail them). Current exam schedules favor students with faster processing speeds and stronger memories. Are these "quick" students smarter than slower students with weaker memories? Not necessarily. But by continuing to pursue an end-of-term exam schedule, we stack the exam deck against certain kinds of students and thus garner skewed results about how much learning our students have accomplished. I bet we underestimate learning and damage certain students' self-esteem at the same time. Bravo, us.

Secondly, let's start (or continue) having widespread discussions about what constitutes a quintessential summative assessment. (For my non-teacher readers, a summative assessment is typically a large-scale assessment given at the end of a unit, project, course, year, etc. to determine learning and achievement.) How many of us would say, "Yes! The quintessential summative assessment for my discipline is a multiple-choice test"? Anyone out there want to own up to proudly proclaiming something like this? God, I hope not. And yet that is exactly the unspoken claim many of us silently make when we create end-of-term assessments made up of mostly objective questions. "Here, take this 100-question multiple-choice test; it's the best instrument I've got to determine achievement. Show me what you have learned." Excrement; pure excrement.

When we are having these strategic discussions about quality summative assessments, I would like for these discussions to be truly widespread among the teachers at my school. And while I am wishing, I would also like for each teacher to really think about this and to come up with examples and answers. Here! I'll start the ball rolling. I taught high school Economics for years. If I think about it, the quintessential assessment for a student in one of my courses should probably be some kind of written economic analysis, full of graphs, data, and reasoned arguments from multiple perspectives. Before delivering such a summative assessment, I would need to scaffold some of the skills required for students to successfully complete such a task. I would first have to define what I meant by an economic analysis, and I would then have to equip students with some of the tools used to deliver such an analysis. I would have to show my students an exemplary specimen of such an analysis, explaining, probably via a rubric, why the specimen is exemplary. Maybe students would need to see multiple examples! Then students would need to practice and get feedback. And then practice again, and maybe again. Then they would be ready, although some may need more time (back to my original point about the exam schedule). In the end I'd have a tool for determining achievement that does me and my subject area proud. Incidentally, I would also be setting the bar for student achievement and perseverance pretty high. If I know my students, almost all would try to live up to those high standards of achievement and perseverance.

What would constitute quintessential assessments in other disciplines? I am not a scientist, but in a science class, I should think that some sort of hypothesis-testing, lab, or lab report would be a strong candidate. How about in a literature or English class? Again, not my professional bailiwick, but some form of draft-and-revision writing should be considered. In a second-language class? Perhaps an assessed oral conversation coupled with a written piece. Design Technology? A finished product maybe, or a detailed, written production analysis. Again, these disciplines are not my field; I am simply brainstorming, getting the ball rolling. These discussions and decisions are ultimately up to the professionals in the classroom. The result, however, is the same as I shared above: a summative assessment tool or model that allows most if not all students the latitude and creative room to demonstrate their level of achievement. And the tool or model used is a proud reflection of our own professional interests and passions.

If we adopted such a model, then our "exam calendar," if you will, would actually be the school year itself, not some artificial, anachronistic, century-old carry-over. If adopted, such a model would help us teachers promote the idea of summative assessment as "knowing and doing" instead of simply knowing within a specified time frame that may or may not be realistic for every student. If adopted, such a model would, just maybe, leave students and teachers feeling less like they were jumping through hoops, especially towards the end of a term.

Now I know what some of you might be thinking. You are thinking that in the real world, the SAT is king, and if we don't prepare students for something like the SAT, then we are not preparing them for college and/or life later on. Let me reject that line of thinking with a question: when was the last time you took a multiple-choice test as part of your professional life?

Thanks for reading. -Kyle