X-Raying A Proposed Change

Those of us working in education are constantly presented with Changes. I’ll use the capitalized term in this post to represent any of the materials, technologies, or methodologies promised to better meet the needs of our students. While it is important to stay humble and open to fresh approaches so that we may improve, it is neither possible nor responsible to adopt every new proposal. Given the time, energy, and cost needed to implement changes in an educational setting, it is important to vet proposals as thoroughly as possible. In his book When Can You Trust the Experts?, Daniel Willingham gives numerous tips for evaluating educational research and spotting false promises.

“Don’t just do something, stand there.”

Or more specifically, “Stand there and observe.” Problems in education are often addressed with new standards and more material, and while there are certainly times when this is warranted, “Throw a book at it!” is not a solution. Be sure of the problem that needs to be fixed. The better you understand the problem, the better you can identify a solution that fits.

“Look for the trail.”

In your first exposure to any Change’s literature, such as websites or email notices, beware of claims promising revolutionary results. Any real change of that magnitude will come with a trail of scholarly articles revealing the gradual progress that led to it. If that trail is absent, pass. Don’t feel bad ignoring the testimonials!

“Read like a scientist.”

For those of you going a step further and reading studies about the teaching methodologies or tools you are interested in implementing in your classes, here are Willingham’s best rule-of-thumb tips. Favor peer-reviewed research for better quality, and look beyond the “It works” or “It doesn’t work” conclusion in the abstract. Your decision should rest more on the significance of the data.

1) What was measured? Is this the information you really need? Would you measure the effect differently? Revisit your understanding of the problem you want to solve.

2) Was there a true comparison with a control group? It is not enough to just say students improved. They must have improved more than they would have normally.

3) How large was the sample size? Willingham’s minimum for data to begin to be considered reliable is 20 participants each in the control and experimental groups. For those of us interested in a reliable Change, the more participants the better.

4) How much did the change help? If the three criteria above check out, then it is less likely the results occurred by chance. Now look closely at the difference between the reported data sets: is it statistically significant? Keep in mind that the larger the study population, the smaller the difference needed to declare that a Change worked.
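The interplay between sample size and significance in points 3 and 4 can be made concrete with a little arithmetic. The sketch below uses entirely hypothetical numbers (a 2-point average gain on a test whose scores have a standard deviation of 10) and the standard two-sample t statistic; it is an illustration of the general principle, not anything computed from Willingham’s book.

```python
import math

def t_statistic(mean_diff, sd, n):
    """Two-sample t statistic for two groups of equal size n
    with equal standard deviations: t = diff / sqrt(sd^2/n + sd^2/n)."""
    standard_error = math.sqrt(2 * sd * sd / n)
    return mean_diff / standard_error

# Hypothetical study: the experimental group scores 2 points higher
# on average, and test scores have a standard deviation of 10.
diff, sd = 2.0, 10.0
for n in (20, 80, 320):  # participants per group
    t = t_statistic(diff, sd, n)
    print(f"n = {n:3d} per group: t = {t:.2f}")
# The same 2-point difference gives t ≈ 0.63 at n = 20 but
# t ≈ 2.53 at n = 320 — only the large study clears the
# conventional t ≈ 2 threshold for significance at p < .05.
```

The same observed difference becomes more convincing as groups grow, which is why a small study can honestly report “no significant effect” for a Change that a larger study would detect, and why Willingham’s 20-per-group figure is a floor, not a target.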

The final judgment of whether or not to implement any Change, at least at the classroom level, must lie with you. Balance the opportunity costs of the Change — time, energy, money, and so on — against options for improving what you already have. There is so much to explore!

References:

Willingham, Daniel T. (2012). When Can You Trust the Experts?: How to Tell Good Science from Bad in Education. New York, NY: John Wiley & Sons.
