Friday, April 15, 2016

School Improvement: Top-5 Impediments to Change

© Justin Smith / Wikimedia Commons, CC-By-SA-3.0
In my previous post, I examined five general educational programs that schools could implement in order to enhance student academic achievement. This month, I am spending some time considering the obstacles preventing the implementation of said educational programs. Here's my Top-5 list of impediments to educational reform (in no particular order).
  • Non-educators deeply involved with educational policy and long-range planning. From career politicians at the state and federal level deciding upon policy and crazy soccer-parents being elected to local school boards to business leaders being co-opted to sit on private school boards, educational policy and long-range planning are largely decided upon by non-educators. To me that makes about as much sense as insurance companies deciding upon my long-term medical care. Now I will say that school boards and legislative committees do solicit career educators for information, but more often than not educators with extensive primary and secondary experience are not a part of the decision-making process. Successful schools should (and do) include career educators in the decision-making process, or such schools have a decision-making apparatus not aligned with a corporate structure (some independent schools like this do exist). I will also say that we as a society should examine ways to mitigate the impact of career politicians on educational policy. Career politicians, by and large, are primarily motivated by orchestrating successful re-elections, not by improving schools. Educational policy should be decided upon by people genuinely interested in kids and learning.
  • The march towards greater standardization. There are three main forces shaping the push for more and more standardization: a) businesses and post-secondary educational institutions wanting qualified individuals with whom to work, b) the public at large demanding greater accountability vis-a-vis public funds, and c) inexperienced educators in the classroom (either they are young or they are working outside of their area of expertise). Don't get me wrong, I understand these three forces. Heck, to some degree I even agree with the first two points. What college or firm wouldn't want a promise from secondary schools to flood the market with excellent students? What taxpayer doesn't want accountability? I get it. I really do. I simply disagree with the methods we use to accomplish the outcomes. To the first point, I argue that we should reject "cookie-cutter" curricula and mountains of standardized testing, opting instead for finding ways to get a greater amount of expertise into our classrooms and administrative offices. I want passionate physicists teaching our physics classes, gifted and communicative writers teaching our literature classes, and dedicated elementary level and middle level educators teaching our younger students! If we were able to transform our schools into places where students engage in meaningful work along with experienced professionals, I believe that we would furnish the post-secondary world and the business world with passionate, highly able students AND we would satisfy the vast majority of our taxpayers demanding more accountability AND we would have more experience in the classroom.
  • Poverty and low family incomes. According to the National Center for Children in Poverty, 22% of all children in the U.S. come from families living below the federal poverty level. Another 23% of U.S. children come from families defined as "low income." Astonishingly, this means that almost half of the children in the U.S. are living below or just above the federal poverty level! If we are serious about addressing the problems in schools, we have to get serious about addressing the needs of impoverished and low-income Americans. Indeed, if we begin to remedy the income problem, some of the educational problems will take care of themselves. An added note here (career politicians, please take heed): WE NEED TO STOP USING SCHOOLS TO SOLVE THE INCOME PROBLEM!
  • Schools simply doing too much and parents believing that it is good for their kids to "do it all." These are two different sides of the same coin. Overtaxed schools may be overtaxed because resources are being devoted to solve the income problem (see above) and/or because constituent parents want the school to provide their children with a range of opportunities that they are either unwilling or unable to provide (lunch and dinner, extensive counseling, daycare, play groups, after-school activities, trips, sports, music instruction, special-needs classes, advanced courses, etc.). Whatever the reasons, schools in general tend to be stretched thin, offering a huge variety of day, afternoon, and sometimes evening programming in addition to curricular instruction. Institutions stretched too thin lose focus ... period. Schools simply cannot effectively address multiple mandates. Successful schools have specific areas of focus (my own preference would be the 4 A's - "academics, athletics, arts, and advisory") and these successful schools carefully maintain and preserve their focus areas.
  • Lack of training and/or targeted professional development. Teachers emerge from all kinds of teacher training programs, some good, some not so good (I'm talking about the programs, not necessarily the teachers). Having worked with hundreds of teachers throughout my career, I can think of several individuals who are probably born teachers. These folks seem to innately know how to talk to each and every student about his or her upcoming assessments, and they provide specific, rich feedback to students as easily as they draw breath. Then there are other teachers who are perhaps not so naturally blessed with the traits necessary to be an effective educator, but who have nonetheless graduated from solid teacher training programs and have learned the art of effective instruction. Then there are still other teachers who would be quite surprised by and/or defiant towards John Hattie's research findings, the findings I wrote about last month. These are the teachers who lack the training necessary to be able to forge meaningful relationships with students, to talk with students about their academic progress, and to be able to differentiate instruction and assessment to make the most efficient use of instructional time possible. They can learn, though! But only if schools are willing and able to provide (hopefully in-house) effective professional development opportunities for these teachers, opportunities that specifically focus on how to deepen relationships with students and how to make the classroom a more efficient teaching and learning environment.
So much for what I think are the main impediments to educational reform. What do you think? Are there major obstacles to educational reform that I have missed? I would love to hear from you in the comments section.

Next week I will focus on what we as individual teachers and administrators can do to overcome some of these impediments.

Thanks for reading! -Kyle

Monday, February 29, 2016

What Matters in Schools?

February. That horrid month in the lives of all teachers everywhere. Don't ask me why, but the things that CAN go wrong in schools almost always DO go wrong in the month of February. From spikes in discipline-related issues to leaky gym ceilings ruining new basketball flooring, school stuff goes belly-up in February.

For me this February has been no different than any other dreadful February. Instead of being able to work on "things that matter," I have instead been impelled to address a host of issues at school that have little to no bearing on student performance. Today however, I am grateful for at least two aspects of February. The first is that February is a short month and is almost, mercifully, at an end. The second is that February 2016 brings us an extra day. And since I have vowed to publish a blog post every month this year, I have needed an extra day this month to accomplish my blogging goal. So thanks, February ... but you still suck.

On this auspicious, final day of February, I am turning my back on the dreary month that was, returning to a meditation on "things that matter." In previous writings, I have mentioned the work of John Hattie, an Australian professor of educational research and Director of the Melbourne Education Research Institute. Hattie has spent his entire career looking into "things that matter," scholastic programs designed and implemented to improve academic achievement. In 2009, Hattie published Visible Learning, a synthesis of his study of 800+ meta-studies on school programs designed to improve learning and achievement. In essence what Hattie has done is to analyze thousands upon thousands of educational studies in thousands of schools with thousands of teachers working in thousands and thousands of programs. Given the complexity of the research, his initial question is quite simple ... what works? In Visible Learning Hattie answers that question. And it turns out, surprisingly, that almost everything works; almost all the school programs that Hattie has studied, programs launched to enhance academic achievement, actually work. Given that almost everything works, Hattie then goes on to ask another relatively simple question ... how well? Hattie's book also answers this question. And the answers are fascinating.

By analyzing studies that target educational programmatic impact in terms of students' grades before and after program implementation, Hattie is able to generate a coefficient (a number without a corresponding unit of value) in conjunction with every program launched. The greater the value of the coefficient, the greater impact that program has on student achievement based on the thousands and thousands of studies. You may not understand all of the kinds of programs launched, but take a look. What you are seeing is a list of many of the kinds of programs we tend to launch in schools in order to enhance student achievement, ranked in order of "most effective" to "least effective." Cool, eh?

Given that the average of the program coefficients is equal to 0.4, I want to look towards the top of the list if I am meditating on "things that matter." I want to ignore the things that I used to think mattered like "teacher subject matter knowledge" (coefficient = .09), "extra-curricular programs" (coefficient = .17), "class size" (coefficient = .21), "teaching test taking" (coefficient = .22), "homework" (coefficient = .29), and even "decreasing disruptive behavior" (coefficient = .34). These things work, they kinda matter, but given that they are all programs that lie below the average, they just don't deliver the bang-for-the-buck that other programs can deliver. If I want to ponder the "things that matter," I want the top of Hattie's list, the programmatic high-flyers. The school initiatives that go BOOM.

Let's look at the top five.

  1. Student Self-Reported Grades (coefficient = 1.44)
  2. Piagetian Programs (coefficient = 1.28)
  3. Providing Formative Evaluation (coefficient = 0.90)
  4. Micro-teaching (coefficient = 0.88)
  5. Acceleration (coefficient = 0.88)
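To make the hinge-point comparison concrete, here is a minimal sketch (Python, purely illustrative) that ranks the effect sizes quoted in this post and keeps only the programs above the 0.4 average:

```python
# Effect sizes quoted in this post (from Hattie's Visible Learning, 2009).
HINGE = 0.4  # the average effect size across all programs Hattie studied

effects = {
    "student self-reported grades": 1.44,
    "Piagetian programs": 1.28,
    "providing formative evaluation": 0.90,
    "micro-teaching": 0.88,
    "acceleration": 0.88,
    "decreasing disruptive behavior": 0.34,
    "homework": 0.29,
    "teaching test taking": 0.22,
    "class size": 0.21,
    "extra-curricular programs": 0.17,
    "teacher subject matter knowledge": 0.09,
}

# Programs worth prioritizing: those above the hinge point,
# ranked from most to least effective.
high_flyers = sorted(
    (name for name, d in effects.items() if d > HINGE),
    key=lambda name: effects[name],
    reverse=True,
)
print(high_flyers[0])  # -> student self-reported grades
```

Python's sort is stable, so programs with tied coefficients (micro-teaching and acceleration, both 0.88) keep their listed order.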

When a school begins a program centered around student self-reported grades, it means that the school promotes a program where a teacher takes the time to discuss with a student his/her targets/goals for upcoming assessments. After the assessments, that teacher then has a follow-up discussion with that student about his/her actual performance compared to the predicted/desired performance. The teacher and the student then debrief. Simple, huh?

Piagetian programs are those instructional programs that embrace the ideas of developmental psychologist Jean Piaget. Long story short, Piaget basically categorized four different levels of thinking and ideation based on age. Students of a certain age are capable of certain levels of thought and ideation and are not capable of more advanced levels of thought and ideation. The lesson here is for schools and teachers to understand the thinking of the students they serve and not to impose adult modes of thought and ideation onto younger students. Again, simple.

There are two types of what Hattie calls "evaluation" but what I will call assessment: formative and summative. Formative assessment happens when a student is practicing. Summative assessment happens when a student is producing or performing for real. To use a basketball analogy, formative assessment happens in practice with drills and scrimmages (and coaches yelling), while summative assessment happens in games with scores and statistics (and coaches yelling). In the classroom, formative assessment happens whenever a teacher provides a student with some form of relatively informal feedback (verbal, written, thumbs-up-thumbs-down, exit tickets, etc.), while summative assessment happens with unit tests, semester tests, and end-of-year tests. Hattie's research suggests that schools should adopt and support programs that focus on formative assessment. Simple, but openly defying every dumbass politician who has ever supported any kind of end-of-year testing.

Micro-teaching is the recording of a lesson by a peer, instructional coach or mentor. The teacher being recorded then discusses his or her performance with the peer, instructional coach, or mentor. Hattie's research strongly suggests that schools implementing these very "teacherly" programs tend to see big gains in student achievement. Simple, and very cool. Also surprising, at least to me.

The final big-hitter is acceleration, that is, allowing faster and more able students to learn at their own accelerated pace. What Hattie finds is that almost all students benefit from these kinds of programs. We can understand that such a program is probably beneficial to academic high-flyers, right? But how does this kind of program benefit almost all students? Hattie explains that accelerated students don't learn in a vacuum. Rather, they learn in the powerful social setting of the classroom, and these accelerated learners have a knock-on effect on all other students within the classroom setting. Again, drawing on a basketball analogy: say one of my players, Ricky, goes to a superstar summer basketball camp for a month. He is being accelerated. He comes back to my practice a month later having played with all those superstars. Some of his new-found skills begin to rub off on my other players during subsequent practice sessions. Ricky becomes a better player, and so too do almost all of my other players. Again, simple.

So much for the top-five programs that matter according to Hattie. So why in the hell are we spending so much time on the programs that don't matter much to the detriment and exclusion of the programs that do matter?

I will get to that in my next post.

Thanks for reading. -Kyle

Thursday, January 21, 2016

A Quick Guide for Differentiation

My teachers and I have been spending quite a bit of time recently discussing "differentiation." Not surprisingly, we are finding that this umbrella concept has different meanings for different teachers. I thought about the degree to which I differentiated instruction and assessment during my career in the classroom. The following brief list outlines my own experience with differentiation, something I offered to my teachers to keep the discussion going. Your thoughts and comments on the list's highlights and "lowlights" are appreciated.


Begin with a mindset
  • every student has a unique mind (genetics, upbringing, processing speed, perception, etc.)
  • every student is unique in terms of learning process, each student endowed with relative learning strengths and learning weaknesses
  • students master content and skills at different rates
  • not all students have to have the same number of assessments (formative and summative)
  • not all students have to have even the same type of assessments
  • embrace the following challenge: accommodate for slow, fast, weak, strong learners in each class

Plan with differentiation in mind
  • a yearly game plan is (I believe) a “must” (example here and here - two sequential units completely planned … you can explore the site - the entire year is completely planned)
  • plan all summative assessments well in advance, working with an LSS Specialist to modify up or down depending (see links above and then scroll down to see all formative and summative assignments for the entire year linked and downloadable)
  • plan to allow students to work through units at their own pace
  • plan for small amounts (5-20 minutes) of common and direct instructional time in class and out; plan for both heterogeneous and homogeneous grouping sessions
  • plan for active learning in the classroom (for many of us HS teachers, “homework” should probably be done in class and “teacher-talk sessions” could/should be done outside of class via screencasts/videos/readings)
  • plan to allow students to reflect on their learning (oral, written, or both); Hattie’s research indicates that this strategy is among the most impactful in terms of supporting all learners

Use differentiated strategies to instruct
  • homogeneous groups when you have “faster” groups and “slower” groups, especially when slower groups need more time and attention; this allows for faster groups to move ahead
  • heterogeneous groups to allow for students to teach other students and when a learning activity has room for multiple roles within a group
  • allow for student choice of material to reflect student background and interest (when appropriate)
  • give tons of feedback, verbal and written
  • try digital discussions to allow for students who are either quiet or not language proficient
  • encourage note-taking and encourage students being meta-cognitive with their note-taking!

Use differentiated strategies to assess

  • plan all assessments and rubrics well in advance (yearly is best)
  • at the beginning of each unit, pre-assess students and have each student establish his/her own learning target(s) or set of learning goals
  • work with an LSS Specialist to modify summative assessments (up or down)
  • have multiple summative assessments ready for use at the end of an instructional unit
  • allow students to either retest or make corrections (i.e., learn from previous errors)
  • post exemplars and discuss with students
  • allow students the latitude to reflect upon their summative assessments

Wednesday, December 16, 2015

"To the International Baccalaureate ... And Beyond!"

I'm in a meeting, talking with veteran independent school teacher, Art L., and he is getting what we would call in Memphis, "all kinds of fired up." We are discussing the International Baccalaureate's (IB) rubric for teachers in the IB English Year 1 course, the rubric all IB English teachers must use to assess students' "Individual Oral Presentations" (IOPs). IB courses in English Literature are among the most advanced Lit courses offered at most IB schools and the IOP is one of those essential summative assessments in the rigorous IB English course. Art is less than impressed by the IB-developed rubric.

"Nowhere in this rubric are standards or criteria for students making a cohesive, logical argument," he laments. "It's possible for a student to simply be familiar with the text he's analyzing, to make good eye contact throughout the presentation, and to use specific terminology, which could have been blindly memorized; he could earn a perfect score by just doing those three things!"

Art is correct and his antipathy seems well justified. With only three standards to be assessed, the IOP rubric is overly simplistic for a summative assessment. The achievement bar, which we would expect to be set fairly high for an IB course, appears to be set at a baseline level in this case.

Art's eyes twinkle as we begin considering tweaking the IB-sanctioned rubric to increase the level of expectation and achievement. We are now talking about the sacred realm beyond the almighty IB, and we are both all kinds of fired up.

For my readers who don't know, the IB is a curriculum originally designed in the late 1960s for international schools, schools with very transient student and faculty populations. The idea behind the early IB was to create a static set of rigorous courses all with a set framework of peer-moderated assessments so that a student transferring from one international school to another international school could conceivably continue his or her studies. The framework and peer-moderation engendered a consistent set of courses that could be taught at any school. The "IB Diploma Programme," a two-year program of study designed for 16-19 year olds, also included a component for community service, activity within the school, and an emphasis on the pursuit of creative endeavors. Additionally, the Programme required participation in a type of philosophy course and also required all IB students to write an original research paper. The idea worked, and many international schools "adopted the IB program." Students who graduated with their "IB Diploma" reported being very well prepared for university. That success made transient parents of IB students, already happy that their children could continue their studies while moving from school to school, ecstatic. Because of its success and its relatively high level of academic rigor, the IB has become a kind of gold standard among internationally minded schools worldwide. Today, thousands of schools offer the IB, which has also expanded to offer separate programs for middle schools and even elementary schools. In my opinion, the IB is a strong, rigorous, and potentially very rewarding program.

But as Art and I are talking on this warm, sunny December day, we are agreeing that the "programme" ain't perfect.

For starters, IB teachers can sometimes become slaves to the structures of IB examinations and the banks of past IB examinations and papers. Through the years that I have been a part of teaching the IB, I have seen this tendency to "teach to the test" grow quite strong. In defense of the IB program, I have to admit that the structures of the various IB written assessments tend to be academically beefy, but as Art and I are finding, there are some exceptions in every IB discipline; the IOP discussed above being just one example. Teaching to the test can be a powerful experience if the end assessment is an excellent evaluative tool. If it isn't ... well ...

As IB teachers tend more towards teaching to the test, IB students become more test-obsessed. In my own IB classes, the question "Will this be on the test?" became an all too common refrain. Many times, I answered, "Yes! This content is from such-and-such location in the curriculum and syllabus guide." But sometimes I answered, "No, but this content will help you to better understand such-and-such topic that could be on the exam. Trust me." Despite my pleas, students tended to pay attention in the former case and to doze off in the latter case.

Each IB curriculum goes through a curriculum review cycle every few years. The cycle is a proper evaluation in that during a given review cycle, each IB program is assessed for needs, design, content, implementation, and outcomes as measured against yearly examination results and feedback from current teachers. The idea of conducting a regular review cycle is fantastic, making for a dynamic curriculum. But sometimes programs that "ain't broke" get "fixed" nonetheless. Consider that in the past few years, the IB Design Technology program has de-emphasized providing students with opportunities to actually create stuff. The IB Theater program has come to de-emphasize providing students with opportunities to actually perform stuff. And as Art's experience above illustrates, the IB English program may be de-emphasizing providing students with the opportunity to make logical sense of stuff.

During my later years as an IB Economics teacher, I tried to solve some of these problems by developing instructional units tightly based on the IB syllabus, but in some cases going beyond it. I would not add additional units to those suggested by the IB Economics syllabus, but I would augment each unit with additional lessons, some designed for struggling students, some designed for accelerated students. As an example, the IB Economics syllabus does not ask students to derive a demand curve. Such a derivation was a part of my course, and I found that with my weaker students, deriving a demand curve helped them to better conceptualize what 'demand' really is. For more able students, my course offered the opportunity to delve into 'supply' and 'sustainability.' The IB syllabus does not specify that teachers prompt students to consider the linkages between these two concepts, but I thought it both relevant and important to spend some time considering the limits of such linkages. Given my students' IB exam averages, they were certainly not hurt by my course's time allocations.

I wonder how many IB teachers make similar adjustments? Based on my current conversation with Art, I know another teacher who is heading down that path.

Later that day, a second-year IB student pops his head into my office and tentatively asks me if I can answer an Econ question for him. I am an administrator and have not taught Econ in a few years, but many of my school's students have found my YouTube channel, one devoted to helping folks to better understand the intricacies of both the IB and the AP Economics syllabi. I tell the student to come on in. His question is about the role of the central bank in the economy (something specified on his IB Economics syllabus). He wants to know about the central bank and interest rates. I ask him if he has studied the money market and the loanable funds market (two topics not specified on the IB Economics syllabus but that were a part of my old Econ course). He responds negatively. We spend the next fifteen minutes discussing both markets and how they relate to central banks and to almost every other bank in most economies. He is a good student, and he has studied other markets to the extent that he is able to catch on to the mechanics of these two markets. I see the light bulb go on as he is able to easily answer his initial question now. He feels great, leaving my office with a greater degree of confidence.

Looking back, he smiles and asks, "Why aren't these markets on our syllabus?"

Good question.

Thanks for reading. -Kyle

Sunday, November 29, 2015

A Different Way of Thinking about High School Final Exams

I am perusing a high school's final exam schedule, the one I created months ago. In my current post as High School Assistant Principal, it is my job to engineer the final exam schedule, checking and double-checking for accuracy and then posting the result to anxiously awaiting students and their families via a shared Google doc. This process is accomplished several months in advance of the actual exams, so the December exam schedule is done and dusted (and posted) in October.

To put things in perspective for you, exams at my school stretch out over a four-day period. On each day, two multiple-hour exams are given: one session lasting from 9-11 AM and the next session held from 1-3 PM. During "exam week," the regular school schedule is suspended; only exams take place during the week. Once a student is finished with an exam on a particular day, he or she can return home. Woohoo.

It's my job to put the schedule together, and it is a tedious affair. Courses, sections, section sizes, physical spaces, rooms, desks and chairs, and proctor availability all must be taken into consideration. Each course is given a specific slot in the overall exam schedule, each course being further divided into several sections of different groups of students. Scheduling all sections of a course to have an exam on a particular day and time saves the teacher from having to create several different versions of his or her exam, but it also unfortunately makes the creation of a separate examination schedule a complex necessity. The result is a sizable matrix, a jumble of teacher names, room numbers, class sizes and proctor names.
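At its core, the double-checking described above is a conflict-detection problem: no student may be booked for two exams in the same slot. A minimal sketch, with entirely hypothetical course and student names (a real schedule would also have to check rooms, capacities, and proctor availability):

```python
from collections import defaultdict

# Hypothetical data: which exam slot (day, session) each course sits in,
# and which courses each student takes. Names are illustrative only.
schedule = {
    "IB Economics": (1, "AM"),
    "IB English":   (1, "PM"),
    "IB Physics":   (1, "AM"),   # same slot as Economics -> clash for Ana
}
enrollments = {
    "Ana": ["IB Economics", "IB Physics"],
    "Ben": ["IB Economics", "IB English"],
}

def find_conflicts(schedule, enrollments):
    """Return (student, slot, courses) for every double-booked exam slot."""
    conflicts = []
    for student, courses in enrollments.items():
        by_slot = defaultdict(list)
        for course in courses:
            by_slot[schedule[course]].append(course)
        for slot, clashed in by_slot.items():
            if len(clashed) > 1:
                conflicts.append((student, slot, clashed))
    return conflicts

print(find_conflicts(schedule, enrollments))
# -> [('Ana', (1, 'AM'), ['IB Economics', 'IB Physics'])]
```

Grouping each student's courses by slot and flagging any slot holding more than one course is exactly the check a scheduler performs by eye across the matrix of names and rooms.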

As I am perusing the schedule for the 90-millionth time, quietly fretting about individual student conflicts and possible errors, I am becoming despondent. In just a few days, several hundred of our students will sit down for a relatively stress-filled, two-hour period to demonstrate what they know, what they have learned. At my school and hundreds of other secondary schools, I imagine that for the most part, students are sitting down to a teacher-made booklet of questions: some multiple choice, fill-in, and short answer. Some exams will include a short essay component. A handful of exams will be entirely essay based. With expectations that exam results are to be posted before students and teachers depart for winter break, the schedule is probably tight and does not leave room for much other than an SAT-style of exam. So students demonstrate their learning in a two-hour session on a piece of paper mostly composed of objective questions. I am thinking that there must be a better way.

There is.

First, let's dispense with the traditional end-of-semester timetable for examinations. Like the structure of the rest of the academic year, the end-of-term exam schedule is a throw-back to the late 1800s. That calendar structure was itself created by social reformers trying to find a compromise between the agrarian calendar that was shaping most people's lives over a century ago, and the emerging urban work calendar that was shaping the lives of a growing number of city denizens. With the vast majority of today's students not necessarily needing to return home to help out on the farm, surely it is time we revisit the notion that exams must be given before plantings and harvests.

I'd like to suggest another reason for throwing out the end-of-term examination schedule: the nature of learning itself. By creating an exam schedule divided into two-hour blocks, we are essentially telling students that they damn well better demonstrate their learning in this two-hour session, otherwise we will assume that learning has not really taken place (and we will fail them). Current exam schedules favor students with faster processing speeds and stronger memories. Are these "quick" students smarter than slower students with weaker memories? Not necessarily. But by continuing to pursue an end-of-term exam schedule, we stack the exam deck against certain kinds of students and thus garner skewed results about how much learning our students have accomplished. I bet we underestimate learning and damage certain students' self-esteem at the same time. Bravo, us.

Secondly, let's start (or continue) having widespread discussions about what constitutes a quintessential summative assessment. (For my non-teacher readers, a summative assessment is typically a large-scale assessment given at the end of a unit, project, course, year, etc. to determine learning and achievement). How many of us would say, "Yes! A quintessential summative assessment for my discipline is a multiple choice test"? Anyone out there want to own up to proudly proclaiming something like this? God, I hope not. And yet that is exactly the unspoken claim many of us silently make when we create end-of-term assessments made up of mostly objective questions. "Here, take this 100-question multiple-choice test, it's the best instrument I've got to determine achievement. Show me what you have learned." Excrement; pure excrement.

When we are having these strategic discussions about quality summative assessments, I would like for these discussions to be truly widespread among the teachers at my school. And while I am wishing, I would also like for each teacher to really think about this and to come up with examples and answers. Here! I'll start the ball rolling. I taught high school Economics for years. If I think about it, the quintessential assessment for a student in one of my courses should probably be some kind of written economic analysis, full of graphs, data, and reasoned arguments from multiple perspectives. Before delivering such a summative assessment, I would need to scaffold some of the skills required for students to be able to successfully complete such a task. I would have to first define what I meant by an economic analysis, and I would then have to equip students with some of the tools used to deliver such an analysis. I would have to show my students an exemplary specimen of such an analysis, explaining to my students, probably via a rubric, why the specimen is exemplary. Maybe students would need to see multiple examples! Then students would need to practice and get feedback. And then practice again, and maybe again. Then they would be ready, although some may need more time (back to my original point about the exam schedule). In the end I'd have a tool for determining achievement that does me and my subject area proud. Incidentally, I would also be setting the bar for student achievement and perseverance pretty high. If I know my students, almost all would try to live up to those high standards of achievement and perseverance.

What would constitute quintessential assessments in other disciplines? I am not a scientist, but in a science class, I should think that some sort of hypothesis-testing, lab, or lab report would be strong candidates. How about in a literature or English class? Again, not my professional bailiwick, but some form of rough-revision-writing should be considered. In a secondary language class? Perhaps an assessed oral conversation coupled with a written piece. Design Technology? A finished product maybe, or a detailed, written production analysis. Again, these disciplines are not my field; I am simply brainstorming, getting the ball rolling. These discussions and decisions are ultimately up to the professionals in the classroom. The result however, is the same as I shared above: a summative assessment tool or model that allows most if not all students the latitude and creative room to demonstrate their level of achievement. And the tool or model used is a proud reflection of our own professional interests and passions.

If we adopted such a model, then our "exam calendar," if you will, would actually be the school year itself, not some artificial, anachronistic, century-old carry-over. If adopted, such a model would help us teachers to promote the idea of summative testing being "knowing and doing" instead of simply knowing within a certain, specified time frame that may or may not be realistic per student. If adopted, such a model would, just maybe, leave students and teachers feeling less like they were jumping through hoops, especially towards the end of a term.

Now I know what some of you might be thinking. You are thinking that in the real world, the SAT is king, and if we don't prepare students for something like an SAT, then we are not preparing them for college and/or life later on. Let me reject that line of thinking with a question: when in your professional life was the last time that you took a multiple-choice test as a part of your job?

Thanks for reading. -Kyle