Tuesday, February 13, 2018

A Modeler Thinks About Assessment

Originally published on Perceiving Wholes.

As a lecturer and DBER (discipline-based education research) fellow, I hear a lot about assessment. Instructors are told that they must align what they teach with the questions that will be on their exams: students shouldn't encounter a question type on an exam that they haven't previously learned how to answer.

I think this is profoundly wrong for one simple reason: assessment is a modeling problem.


Think of your knowledge of some topic -- evolution or cell metabolism or ordinary differential equations -- as a network of related concepts, facts, and techniques in your mind. The beginner's network might be missing some important connections and contain extraneous, misleading ones. The expert's network is rich but well-organized.

In assessment that goes beyond simple factual questions like "what are mitochondria?", we are implicitly trying to find out whether a student's network looks more like the beginner's or the expert's. The more expert-like the network, the better the student understands our subject.
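
To make the metaphor concrete, here is a toy sketch of my own (not from the original post): each network is just a set of links between concepts, and a crude score measures how expert-like a given network is. The concept names, the example networks, and the scoring rule are all invented for illustration.

    # Toy illustration: a knowledge network as a set of links between concepts.
    expert = {
        ("derivative", "slope"), ("derivative", "rate of change"),
        ("ODE", "derivative"), ("ODE", "initial condition"),
        ("solution", "ODE"), ("solution", "initial condition"),
    }

    beginner = {
        ("derivative", "slope"),   # one correct link...
        ("ODE", "hard"),           # ...plus an extraneous, misleading one
    }

    def expert_likeness(network, reference):
        """Crude score: fraction of the expert's links present, minus a
        penalty for links the expert does not have."""
        overlap = len(network & reference) / len(reference)
        extraneous = len(network - reference) / max(len(network), 1)
        return overlap - extraneous

    print(expert_likeness(beginner, expert))  # low (here negative): sparse and partly misleading
    print(expert_likeness(expert, expert))    # 1.0 by construction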

Of course, we can't observe this network directly. To a teacher, a student's mind is a black box. Therefore, we poke and prod the black box by asking the student questions and use the answers to build our own models of the student's knowledge network. Particularly valuable are those questions whose answers are easy to figure out if subject-matter knowledge is complete and well-organized but difficult or impossible otherwise. If a student answers such questions correctly, they probably understand the subject well.
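
One way to make this "modeling" concrete, again purely as my own illustration: treat assessment as Bayesian inference about a hypothesis H, "this student's network is expert-like." A valuable diagnostic question is precisely one where a correct answer is likely under H and unlikely otherwise. All the probabilities below are invented.

    def update(prior, p_correct_given_h, p_correct_given_not_h, answered_correctly):
        """Return P(H | answer) by Bayes' rule, where H is the hypothesis
        that the student's knowledge network is expert-like."""
        if answered_correctly:
            like_h, like_not_h = p_correct_given_h, p_correct_given_not_h
        else:
            like_h, like_not_h = 1 - p_correct_given_h, 1 - p_correct_given_not_h
        evidence = prior * like_h + (1 - prior) * like_not_h
        return prior * like_h / evidence

    # A diagnostic question: easy to answer with real understanding,
    # nearly impossible to answer without it.
    belief = 0.5  # start agnostic about this student
    belief = update(belief, p_correct_given_h=0.9, p_correct_given_not_h=0.1,
                    answered_correctly=True)
    print(round(belief, 2))  # 0.9 -- one good answer is strong evidence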

Unless, of course, the student has explicitly learned how to answer that very question without having to figure anything out. Then the process is short-circuited, and we are left with no way of assessing what the student actually understands. Ben Orlin writes about this in Math with Bad Drawings:

Need to prove these triangles are congruent? Do this. Need to prove that they’re similar? Do that. Need to prove X? Do Y and Z. I laid it all out for them, as clean and foolproof as a recipe book. With practice, they slowly learned to answer every sort of standard question that the textbook had to offer.

Months passed this way. But something wasn’t clicking. I kept seeing flashes and glimpses of severe misunderstandings—in their nonsensical phrasings, in their explanations (or lack thereof), in their bizarre one-time mistakes. Despite my best intentions, something was definitely wrong. But I didn’t know what.

And, more worryingly, I didn’t know how to find out.

I’d already coached them how to answer every question in the book. How, then, could I diagnose what was missing? How could I check for understanding? For every challenge I might give them, every task that might demand actual thinking, I’d equipped them with a shortcut, a mnemonic, a workaround. The questions were like bombs defused: instead of blasting my students’ thoughts open, they now fizzled harmlessly.


Orlin is describing his mistakes as a novice teacher, but the same failure is the inevitable consequence of the "alignment" pushed by proponents of scientific teaching. They would probably reply that students should first figure out the procedure rather than being taught it, and that might indeed be better (or not). But when the exam rolls around, all we will see is how well the students remember what they were taught. We will have lost our tools for modeling their minds and assessing their understanding.
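
In the terms of the toy Bayesian sketch above (numbers again invented): coaching raises the probability of a correct answer even when understanding is absent, so the two likelihoods converge and a correct answer carries almost no evidence.

    # Same model as above, but after explicit coaching: students who do not
    # understand can still produce the correct answer, so the likelihoods
    # nearly coincide and the question stops discriminating.
    prior = 0.5
    p_h, p_not_h = 0.9, 0.85  # coaching closed the gap between the likelihoods
    posterior = prior * p_h / (prior * p_h + (1 - prior) * p_not_h)
    print(round(posterior, 2))  # 0.51 -- a correct answer now tells us almost nothing

The question still looks rigorous on the exam; it just no longer measures anything.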
