Tuesday 9 April 2019

Visible Learning: Effect Size

Effect Size and Effective Techniques


Welcome to my second post on the book "Visible Learning" by John Hattie. In my previous post I discussed the practices of uninspired (bad?) teachers. This post is less about the generalities of teaching and more about how this book measures effective teaching.

This book quotes effect size in just about every chapter. On the surface, it is pretty easy to decipher: the higher the effect size, the better the technique/item/etc. At the outset, they establish 0.20 as the minimum effect size that is "worth it" and 0.40 as a full year's worth of learning. A few examples of high effect size influences are "Teacher-Student Relationships" (0.72), "Teaching Better Meta-Cognitive Strategies" (0.69), and "Teaching Study Skills" (0.63). A few low-level influences include "Home-Schooling" (0.16), "Open vs. Traditional Learning Spaces" (0.01), and "Individualized Instruction" (0.23).

You may not be surprised to learn that I quickly grew frustrated with the lack of clarity on what exactly effect size was. I made it through 96 pages before I broke down and looked it up in the appendix. Here it is:

  1. Give a pre-test and a post-test.
  2. Take the class average of each and find the difference (post minus pre).
  3. Divide that difference by the average of the standard deviations of the two tests.
Boom: effect size.
So if your technique improves scores a great deal, and it works about equally for all students (everyone was brought up together, so the standard deviation stays low), then by this book's measure it was effective.
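To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The formula is the one from the appendix; the function name and the sample scores are hypothetical, invented purely for illustration.

    from statistics import mean, stdev

    def effect_size(pre_scores, post_scores):
        # Effect size as the book describes it:
        # (post average - pre average) / average of the two standard deviations
        gain = mean(post_scores) - mean(pre_scores)
        average_sd = (stdev(pre_scores) + stdev(post_scores)) / 2
        return gain / average_sd

    # Hypothetical class of six students, scores out of 100
    pre = [55, 60, 62, 58, 65, 50]
    post = [70, 74, 78, 71, 80, 66]

    print(round(effect_size(pre, post), 2))  # about 2.8, far past the 0.40 "year of learning" mark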

The book later recommends teachers employ this formula and technique with their classes to determine how effective their teaching techniques are. I don't think this is a bad idea per se, but it does leave a few large-ish holes in the overall view of education:

  1. It doesn't measure long-term retention.
  2. It only views the class as a whole. So if half the class gets 100% after the lesson but the other half wasn't paying attention and fails, the wide spread in scores inflates the standard deviation, and the technique registers as a failure even though half the class learned plenty.
  3. Just because a technique is effective by this measure does not mean it is necessarily good for students.
I don't think any of those three reasons is enough to say "don't do this," but I feel it is important to keep them in mind. Numbers can be a tempting mistress to math and science teachers.

Individual effect sizes are also possible: (post-test score - pre-test score) / class standard deviation.
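Continuing the hypothetical scores from above, here is a quick sketch of that per-student version. One assumption on my part: "class standard deviation" doesn't say whether it means the pre-test or the post-test spread, so I've used the pre-test scores.

    from statistics import stdev

    def individual_effect_sizes(pre_scores, post_scores):
        # Each student's gain divided by the class standard deviation.
        # Assumption: "class standard deviation" means the pre-test SD.
        class_sd = stdev(pre_scores)
        return [(post - pre) / class_sd
                for pre, post in zip(pre_scores, post_scores)]

    pre = [55, 60, 62, 58, 65, 50]
    post = [70, 74, 78, 71, 80, 66]

    for student, es in enumerate(individual_effect_sizes(pre, post), start=1):
        print(f"Student {student}: {es:.2f}")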

From videos of John Hattie, I've learned the following:


  • Classes should have clear learning intentions
  • Classes should have challenging success criteria (for both struggling and accelerated students)
  • Provide feedback
  • Learn visibly myself
  • Provide an opportunity for students to determine their own progress
  • Classes should trust the teacher so they are comfortable saying "I don't know"
I do my best to make my class comfortable and to convey care for students, but students almost never say "I don't know" in my class. Am I alone? How can I improve this? Can I improve this?
