Evidence-based teaching has dominated the field of education since the beginning of the 21st century, when the No Child Left Behind Act (NCLB) made it part of a wide-ranging education law that applied to all schools receiving federal funding (i.e., most schools). The term itself dates back to 1991, when the Canadian physician Gordon Guyatt coined the phrase "evidence-based medicine" to indicate "best practice" medical interventions and methods supported by a rigorous scientific literature.
In the context of education, evidence-based teaching involves the use of teaching strategies that have been validated through empirical research, especially through controlled studies, in which an intervention is used with one group of students for a given period of time and the results are compared with those of a comparable group of students who have been studying the same subject without the intervention and serve as a control group. When the two groups have been randomly assigned, this is said to give the research results even greater power.
If the intervention group shows greater achievement in the subject being taught (e.g., language arts, math, or science) than the control group (as measured by a valid and reliable assessment instrument), then that particular intervention (e.g., collaborative learning, peer teaching, or small group discussion) is henceforth deemed "evidence-based." To create even more powerful evidence for a given teaching strategy or intervention, groups of such studies are often pooled together into a "meta-analysis" that benefits from the combined research findings of many educational researchers.
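To show what "pooling" studies actually amounts to, here is a minimal sketch of the standard fixed-effect (inverse-variance) way a meta-analysis combines effect sizes from several studies. The effect sizes and variances below are made-up numbers for illustration, not drawn from any real study:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect meta-analysis: weight each study's effect size by the
    inverse of its variance, so more precise studies count for more."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    standard_error = math.sqrt(1.0 / sum(weights))
    return pooled, standard_error

# Three hypothetical studies of the same intervention
effects = [0.30, 0.55, 0.20]    # each study's effect size (Cohen's d)
variances = [0.04, 0.09, 0.02]  # each study's squared standard error

d_pooled, se = pooled_effect(effects, variances)
print(round(d_pooled, 3), round(se, 3))  # 0.274 0.108
```

Note that the pooled estimate lands closest to the most precise study (the one with the smallest variance), which is exactly the point of the weighting.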
This all sounds very convincing as a way to determine which teaching strategies have the greatest promise of increasing student achievement. But there are several problems with evidence-based teaching that aren’t typically disclosed to parents or acknowledged by teachers and administrators. Here are five reasons why evidence-based teaching is not all that it is cracked up to be:
- The "evidence" collected in "evidence-based teaching" is almost always standardized test score results. I've written elsewhere about what is wrong with standardized testing. Let me just point out here that these tests purport to measure student achievement in a given academic subject, when they actually just test a student's ability to perform well on a standardized test. Some students are natural "test takers," while others don't do so well even though they may have the same level of knowledge about the subject as the good test taker. The use of the term "evidence" is actually part of the problem. In fact, there are many different types of evidence that could be collected regarding the impact of a teaching strategy: evidence of student engagement (e.g., how excited the students are to be learning that subject), student attendance (e.g., those who drop out are less likely to have been impacted by the intervention), individual student case studies (e.g., the actual attitudes and inner processes of specific students), and many other types of evidence besides. In the future, we should be more transparent about these purported "evidence-based" strategies and say that what they really do is increase a student's test score results. Of course, for some parents and educators, that is the main purpose of education. Many others, however (myself included), feel that education should be developing other, more laudable skills, capacities, qualities, and attributes in students.
- "Evidence-based teaching" takes the control of classroom teaching away from teachers and gives it to educational researchers. Teachers used to have a great deal of autonomy in how they ran their classrooms. We trusted them to make the right decisions about how to deliver the curriculum to their students. As a result of this trust, teachers generated a wide range of strategies, resources, tools, and tips to help both groups of students and individual students excel in academic learning. In fact, there used to be a kind of folklore tradition, where the best strategies would be handed down from generation to generation in the teaching profession. Not anymore. Now teachers are told not only what to teach, but how to teach it. This lack of trust in a teacher's ability to use their own tried-and-true methods contributes to teacher burnout and a teacher's lack of engagement with their students. In a Gallup survey of teacher attitudes, 46% of teachers reported high levels of stress during the school year. In the same survey, just over half of K-12 educators (56%) said they are "not engaged" in their work, meaning they are not connected emotionally with their work and are unlikely to devote much extra time to their teaching duties, while around 13% are "actively disengaged," meaning they are not satisfied with their workplace and are likely to spread negativity among their co-workers. A good part of this dissatisfaction stems from the lack of trust in teachers and the fact that we're now requiring them to teach in specified ways based upon the work of researchers who are themselves largely removed from the nitty-gritty of actual classroom life.
- Evidence-based research in education depends on who gets funded. There are perhaps thousands of different teaching strategies available for teachers to use, but only a small percentage of these are ever researched and thereby given credibility. Researchers have to apply for grants from federal, state, and private institutions to pay the salaries and provide the resources needed to conduct formal research studies. They are thus under pressure to pick educational strategies to study that have the best chance of being funded. There may be highly worthy teaching strategies out there that don't stand a chance of being funded, and thus are left out in the cold as interventions and put on the non-evidence-based list of strategies (e.g., "there is no support for the use of this method").
- Real learning in the classroom cannot be quantified. Evidence-based teaching is based upon statistics. One of the most common metrics used is "effect size." Now I'm going to get a little technical, but bear with me! As it is usually used in the field of education, an effect size is the difference between the mean scores of two groups, in this case the treatment (or intervention) group and the control group, divided by the standard deviation. The standard deviation is a measure of the amount of variation in a set of values, in this case, test scores. The way this is usually depicted is via a bell curve, with the highest point of the curve at the mean or average test score, the downward-sloping curve on the left representing those who scored below the mean, and the downward-sloping curve on the right representing those who scored above the mean. In calculating effect size, you basically superimpose the bell curve distributions of test scores for the two groups. If they coincide exactly, the effect size is 0 (i.e., the teaching strategy had no effect). If the treatment curve sits one standard deviation above the control curve, the effect size is 1. Because about 34% of scores in a normal distribution fall between the mean and one standard deviation above it, this means that someone scoring at the mean (the 50th percentile) of the intervention group did as well as someone scoring at the 84th percentile of the control group. Put another way, a control-group student who scored well above average in his own group would be doing only as well as an intervention-group student who scored just average in his. Now people throw different effect sizes around in education, and this is a big part of the problem. How do you determine what is an effective effect size? In other words, where is the line between a teaching method judged effective and one that is not? One promulgator of the effect size craze in education is New Zealand educator John Hattie.
Go to this website and you will find effect sizes for 252 teaching interventions (which he bases on meta-analyses of each intervention, or what he calls "influences"). He took the average of all these studies and found an overall effect size of .4 (measured as Cohen's d – don't ask me about that!). This, then, is what many educators have used to judge a teaching method: whether its effect size is .4 or higher. I hope you can see, at this point, that we've left the real purpose of education and real teaching behind. What if a teaching strategy has a .3 effect size but all the kids love it and ask for it again and again? What if a teaching strategy has a 1.2 effect size but students have forgotten the material two weeks after the test was given? What if the teacher in one group was a "natural born teacher" and the one in the other group was just a "by-the-book educator"? What if one group took the test after a beloved teacher in the school building died, and the students in the other group had just celebrated a student's birthday party? You can see that real life is richer and more complex than statistics. Numbers give the illusion of truth ("oh yes, we have numbers to back up our claims!"), but in reality they're not . . . well . . . reality.
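For readers who want to see the arithmetic behind all this, here is a minimal sketch of how Cohen's d is computed from two groups' test scores, along with the percentile conversion described above. The scores below are made-up numbers for illustration only; real studies use far larger samples:

```python
import math
import statistics

def cohens_d(treatment, control):
    """Effect size: difference in group means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    mean_diff = statistics.mean(treatment) - statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return mean_diff / pooled_sd

def percentile_of_control(d):
    """Assuming normally distributed scores: the percentile of the control
    distribution matched by a student at the treatment-group mean
    (so d = 1 corresponds to roughly the 84th percentile)."""
    return 0.5 * (1 + math.erf(d / math.sqrt(2)))

# Hypothetical test scores for the two groups (invented for this example)
treatment = [78, 82, 85, 90, 75]
control = [70, 74, 80, 77, 69]

print(round(cohens_d(treatment, control), 3))     # 1.512
print(round(percentile_of_control(1.0), 2))       # 0.84
```

Notice how much the result depends on the spread of the scores: the same 8-point gap in means yields a huge d when scores cluster tightly and a modest one when they vary widely, which is one more reason a single number hides a lot.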
- Evidence-based teaching ignores the activities and lives of real students. As you can see from this last point, students' lives are multifarious, complex, deep, rich, and idiosyncratic (no one student is like another). Students have hopes, dreams, aspirations, challenges, and emotions that come up and subside in the course of a day. They are influenced by events happening outside the school, by different ways of learning, by different cultural backgrounds, and by so much more! The best teachers are able to tune in to each student's uniqueness and determine that an instructional method that may work for one student would be entirely inappropriate for another student. They are able to zero in on the best times to use specific methods, and on what should precede and follow each strategy. It may be the case that a teacher uses one strategy for a particular child, while the rest of the class gets something else. It's the ability of teachers to differentiate instruction to the unique needs of kids that marks them out as effective instructors. To be asked to teach from a laundry list of "approved" methods interferes with this differentiation. In fact, evidence-based instruction proceeds as if there were no differences at all between students, which is patently absurd.
Evidence-based instruction is only one part of a pattern of homogenization of learning that has captured the heart and soul of education over the past two decades. Along with it, we've had to contend with standardized testing, value-added measures (i.e., evaluating teachers on the basis of their students' test scores), so-called "personalized" learning (where computers use algorithms to parcel out "knowledge nuggets" to students at their own "level"), and the ascent of "data" as the most important outcome of learning (as opposed to, say, a child's increased sense of self-worth, or the growth in their aspiration toward a certain career, or the twinkle in a student's eyes as they rejoice in the results of a science lab experiment). Folks, I could have saved myself the ink and just told you from the outset: real learning is "out" in our schools while artificial learning is "in." It's sad but true. We need to speak out about these kinds of injustices meted out to our children. The fact that this charade takes place under the guise of "science" and "rock solid evidence" makes it only more despicable.
For more information about the problems with evidence-based teaching and other examples of ”miseducation” in America, see my book If Einstein Ran the Schools: Revitalizing U.S. Education.
This page was brought to you by Thomas Armstrong, Ph.D. and www.institute4learning.com.
Follow me on Twitter: @Dr_Armstrong