Science versus Practice: Evidence-Based Training
A consistent theme of this blog is the battle between the scientific side and the practical side of training. As I tried to express in this article on my conflicting passions, the constant tug of war between those two sides is something I deal with frequently. If you are a long-time reader of this blog, you'll find plenty of articles steeped in scientific research, while at the same time you'll see articles blasting research for not understanding the way things work in the "real world" of training. In this post, I'd like to address the central question: what's right, science or practice?
In grad school, you are constantly taught to subscribe to what we call evidence-based practice: everything you do should have some sort of evidence to back it up. The idea sounds great on the surface, but problems arise when you restrict the word evidence to mean only research evidence. There are three central problems with relying solely on research.

First, we don't know enough. There are many unanswered questions about how the body works, and we have a long way to go before we understand even the majority of it.

Second, research deals in averages; coaches deal with individuals. Browse through any of your favorite studies and you'll find people who didn't improve, or in whom nothing changed, even if the study shows a significant improvement in a variable. This occurs outside of training studies too. For instance, in Lieberman's famous barefoot study, 2 of the Kenyan adolescents who had never worn shoes heel struck. Why? I have no idea, but it shows that for some unknown reason even runners who grew up barefoot and have never worn shoes occasionally heel strike. Maybe it's as simple as they saw a heel-striking runner and imitated him. Who knows, but the point remains: even when we have firm evidence, there are almost always a few non-responders. In coaching, we can't say, "Oh, you're a non-responder, sorry." We have to figure out how to make that person a responder.

Third, training is complex and research is limited. It's IMPOSSIBLE to isolate all the variables that go into a training program, or to know what occurs over multiple years. A study can't be done to discern every little effect.
And finally, I'd guess that most people don't understand how research is done and what it means. This is a subject for an entire post in itself, so I'll save it for another time. But correlation versus causation, the use of "soft" measurements (i.e., measuring changes in VO2max as a stand-in for what we really care about, performance), and various statistical methods all affect what a study really means. Although not related to exercise, a quote from an article in The Atlantic, which I'll return to shortly, sums things up pretty well for nutritional, drug, and some medical studies, and explains why we so often get headlines of "grapes reduce cancer risk" one week, then it's chocolate the next week, then wine, and so on. It also touches on the earlier topic of the complexity and self-regulating mechanisms of the body:
“For starters, he explains, the odds are that in any large database of many nutritional and health factors, there will be a few apparent connections that are in fact merely flukes, not real health effects—it’s a bit like combing through long, random strings of letters and claiming there’s an important message in any words that happen to turn up. But even if a study managed to highlight a genuine health connection to some nutrient, you’re unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you. Even if changing that one factor does bring on the claimed improvement, there’s still a good chance that it won’t do you much good in the long run, because these studies rarely go on long enough to track the decades-long course of disease and ultimately death. Instead, they track easily measurable health “markers” such as cholesterol levels, blood pressure, and blood-sugar levels, and meta-experts have shown that changes in these markers often don’t correlate as well with long-term health as we have been led to believe.”
With all of this being said, to simply dismiss science for the above reasons is foolish. It provides reasoning, theory, and evidence for what we do. Before exercise science was around, we pretty much guessed at what we did and evolved through trial and error. Training really evolved when coaches started to take scientific theory and apply it to training. The key here is that they didn't just copy what study X said; they took what the research said and figured out what it meant in terms of real-world training. Examples are littered throughout coaching history. The popularization of intervals came about when Woldemar Gerschler got the idea that if we stressed the heart by raising the heart rate and then let it lower incompletely during the rest, we could enhance its capacity. Thus classical interval training was born. Hans Selye's work on the General Adaptation Syndrome gave coaches a basis for how adaptation takes place. By integrating this idea, coaches could figure out how better to mix hard and recovery days, instead of relying on the old 4-5 days a week of repeated intense interval training.
The point is that good coaches take the science and don't just copy exactly what was done in a study ("a study found that doing 30-second sprints with 30 seconds rest 3x a week improves aerobic capacity, so that's what we're going to do!"). They take what the study finds, figure out what it means, and work out how it fits into real-world training.
The key is to understand how to use the science.
So what's a person to do? As I have pointed out before, I often rely on what one of my professors, Jason Winchester, called the three-legged stool test. You have research, theory, and practice. If you have all three, it's almost certainly a good idea to implement it. If you have 2 of 3, it's fairly likely that it works, depending on the strength of those 2. If you've only got 1 of 3 going for it, it probably doesn't work. The beauty of the three-legged stool test is that it blends science and practice, and complements them with theory, which is itself a blend of science and practice. The theory part is why I argue that coaches need to know the science. This can be seen in the coaching work of Renato Canova, who often uses theory based on science to develop training ideas. A couple of quick examples: his strength-endurance circuits, where he uses knowledge of lactate and muscle fibers to construct the circuit, and his use of training to improve maximal lactate steady state (MLSS) at race pace.
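For the programming-minded, the three-legged stool test can be sketched as a toy scoring function. This is purely my own illustration of the heuristic described above (the function name and verdict strings are made up, not anything from Winchester):

```python
def three_legged_stool(research: bool, theory: bool, practice: bool) -> str:
    """Toy version of the three-legged stool test: count how many
    'legs' (research evidence, theory, practical experience) support
    a training idea and return a rough verdict."""
    legs = sum([research, theory, practice])
    if legs == 3:
        return "almost certainly worth implementing"
    elif legs == 2:
        return "fairly likely to work; weigh the strength of each leg"
    else:
        return "probably doesn't work"

# Example: an idea backed by theory and practice, but not yet by research
print(three_legged_stool(research=False, theory=True, practice=True))
```

Of course, the real test is a judgment call, not arithmetic; the strength of each leg matters as much as how many you have.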
So the best coaches, in my opinion, aren't the pure exercise physiologists or the pure old-school coaches who know nothing about the science, but those who know both sides well enough to blend the old school and the new school together. A perfect example might be Claudio Berardelli, coach to many of the top East African runners in the world. He's got a PhD in exercise physiology, yet seemingly blends in the practical aspects. Here's an interesting presentation in which he uses theory to try to figure out the correct training for Kenyans at altitude:
What about doctors?
But wait, I'm not quite done, as I haven't answered the central question. Before you conclude that this problem is limited to exercise science and training, I'd like to point out some other facts.
In the medical community, the doctors who we all assume do things by the book in terms of what the research says actually don't. According to research referenced in Ben Goldacre's book Bad Science, only 13% of all treatments used by doctors have good evidence behind them, with an additional 21% of treatments being "likely" beneficial. Depending on the specialty, between 50-80% of all medical activity is evidence based. So doctors, too, don't rely entirely on evidence-based practice.
I'd like to end with a quote from an article on researcher John Ioannidis, who specializes in researching research. Specifically, he's shown that much of the research we rely on is flawed. The article itself is a fascinating read and shows what can happen when we rely entirely on "evidence" without critical thought. The quote below, however, concerns the interaction between doctors and researchers. It's interesting because the gap between researchers and coaches is just as wide, and for the most part neither group really understands the other (which is why you have researchers proclaiming low-volume, high-intensity training as the magic bullet, even though it works horribly long term in the real world for endurance athletes). Anyway, he makes some great points in acknowledging the balance that needs to exist among doctors, research, and science.
Later, Ioannidis tells me he makes a point of having several clinicians on his team. “Researchers and physicians often don’t understand each other; they speak different languages,” he says. Knowing that some of his researchers are spending more than half their time seeing patients makes him feel the team is better positioned to bridge that gap; their experience informs the team’s research with firsthand knowledge, and helps the team shape its papers in a way more likely to hit home with physicians. It’s not that he envisions doctors making all their decisions based solely on solid evidence—there’s simply too much complexity in patient treatment to pin down every situation with a great study. “Doctors need to rely on instinct and judgment to make choices,” he says. “But these choices should be as informed as possible by the evidence. And if the evidence isn’t good, doctors should know that, too. And so should patients.”
What does this all mean? You should strive for balance. We can't go to either extreme: we don't want slaves to the research, nor quacks who use pseudoscience to sell miracle cures or approaches. As shown above, even doctors can't be slaves to research. The best doctor I know is a problem solver who uses his knowledge of science and how the body works to figure out the best way to treat you. He doesn't just give you a drug because study X says it will make you mildly better. There's a reason why pure exercise physiologists are seldom great coaches. There's also a reason why people who know nothing about the science, like your typical HS football coach turned CC coach, are pretty bad at coaching. The best coaches or practitioners are those who can blend the two. How much of each? That's the question each person needs to answer. In running, it seems like most American runners and coaches are afraid of the science compared to European coaches, and especially compared to coaches in other endurance sports like cycling, speed skating, rowing, swimming, or cross-country skiing. My feeling is that the backlash in US running is most likely due to a misunderstanding of how to properly use and integrate the science. Use it properly and keep it in balance. That is the key.