Science versus Practice: Evidence-Based Training

A consistent theme of this blog is the battle between the scientific side and the practical side of training. As I tried to express in this article on my conflicting passions, the constant tug of war between the two sides is something I deal with frequently. If you are a long-time reader of this blog you’ll find plenty of articles steeped in scientific research, while at the same time you’ll see articles blasting research for not understanding the way things work in the “real world” of training. In this post, I’d like to address the central question: what’s right, science or practice?

In grad school, you are constantly taught to subscribe to what we call evidence-based practice. That means everything you do should have some sort of evidence to back it up. This idea sounds great on the surface, but problems arise when you restrict the word evidence to mean only research evidence. There are several central problems with relying solely on research. First, we don’t know enough. There are many unanswered questions about how the body works, and we have a long way to go before we understand even the majority of it. Second, research deals in averages; coaches deal with individuals. Browse through any of your favorite studies and you’ll find people who didn’t improve or in whom nothing changed, even if the study shows a significant improvement in a variable. This occurs outside of training research too. For instance, in Lieberman’s famous barefoot study, two of the Kenyan adolescents who had never worn shoes heel struck. Why? I have no idea, but it shows that for some unknown reason even runners who grew up barefoot and have never worn shoes occasionally heel strike. Maybe it’s as simple as one of them seeing a heel-striking runner and imitating him. Who knows, but the point remains that even when we have firm evidence, there are almost always a few non-responders. In coaching, we can’t say, “oh, you’re a non-responder, sorry.” We have to figure out how to make that person a responder. Finally, training is complex and research is limited. It’s IMPOSSIBLE to isolate all the variables that go into a training program and to know what occurs over multiple years. A study can’t be done to discern every little effect.

And finally, I’d guess that most people don’t understand how research is done and what it means. This is a subject for an entire post in itself, so I’ll save it for another time. But correlation versus causation, the use of “soft” measurements (i.e., measuring changes in VO2max as a proxy instead of what we really care about, performance), and various statistical methods all affect what a study really means. Although not related to exercise, a quote from an article in The Atlantic, which I’ll address shortly, sums things up pretty well in terms of nutritional, drug, and some medical studies and why we so often get headlines of “grapes reduce cancer risk,” then next week it’s chocolate, then next week it’s wine, and so on. It also touches on the earlier topic of the complexity and self-regulating mechanisms of the body:

“For starters, he explains, the odds are that in any large database of many nutritional and health factors, there will be a few apparent connections that are in fact merely flukes, not real health effects—it’s a bit like combing through long, random strings of letters and claiming there’s an important message in any words that happen to turn up. But even if a study managed to highlight a genuine health connection to some nutrient, you’re unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you. Even if changing that one factor does bring on the claimed improvement, there’s still a good chance that it won’t do you much good in the long run, because these studies rarely go on long enough to track the decades-long course of disease and ultimately death. Instead, they track easily measurable health “markers” such as cholesterol levels, blood pressure, and blood-sugar levels, and meta-experts have shown that changes in these markers often don’t correlate as well with long-term health as we have been led to believe.”

With all of this being said, to simply dismiss science for the above reasons is foolish. It provides reasoning, theory, and evidence for what we do. Before exercise science was around, we pretty much guessed at what we did and evolved through trial and error. Training really evolved when coaches started to take scientific theory and apply it to training. The key here is that they didn’t just copy what study X said; they took what the research said and figured out what that meant in terms of real-world training. Evidence is littered throughout coaching history. The popularization of intervals came about when Woldemar Gerschler got the idea that if we stressed the heart by bringing up the heart rate and then let it come down incompletely during the rest, we could enhance its ability. Thus classical interval training was born. Hans Selye’s work on the General Adaptation Syndrome provided coaches with a basis for how adaptation takes place. Upon integrating this idea, coaches could use the theory to figure out how to better mix hard and recovery days, instead of relying on the old 4-5 days a week of repeated intense interval training.
The point is that good coaches take the science and don’t just copy exactly what was done in a study (“a study found that doing 30-second sprints with 30-second rest three times a week improves aerobic capacity, so that’s what we’re going to do!”). They take what the study finds and figure out what it means and how it fits into real-world training.

The key is to understand how to use the science.

So what’s a person to do? As I have pointed out before, I often rely on what one of my professors, Jason Winchester, called the three-legged stool test. You have research, theory, and practice. If you have all three, it’s almost certainly a good idea to implement it. If you have 2 of 3, it’s fairly likely to work, depending on the strength of those 2. If you’ve only got 1 of 3 going for it, it probably doesn’t work. The beauty of the three-legged stool test is that it blends science and practice, and complements them with theory, which is itself a blend of science and practice. The theory part is why I argue that coaches need to know the science. This can be seen in the coaching work of Renato Canova, who often uses theory based on science to develop training ideas. A couple of quick examples would be his strength-endurance circuits, where he uses knowledge of lactate and muscle fibers to design the circuit, or his use of training to improve MaxLass (maximal lactate steady state) at race pace.
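As a purely illustrative aside, the three-legged stool test can be sketched as a simple decision rule. The little snippet below is just my own way of picturing the idea; the function name, thresholds, and wording of the verdicts are made up for the example, not anything Winchester formalized:

```python
def three_legged_stool(research: bool, theory: bool, practice: bool) -> str:
    """Count how many 'legs' an idea stands on and return a rough verdict.

    Purely illustrative -- in reality the strength of each leg matters,
    not just whether it exists.
    """
    legs = sum([research, theory, practice])
    if legs == 3:
        return "almost certainly worth implementing"
    if legs == 2:
        return "fairly likely to work -- weigh how strong those two legs are"
    return "probably doesn't work -- treat with skepticism"


# Example: an idea backed by theory and practice, but not yet by research
print(three_legged_stool(research=False, theory=True, practice=True))
```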

So the best coaches, in my opinion, aren’t the pure exercise physiologists or the pure old-school coaches who know nothing about the science, but those who know both sides well enough to blend the old school and the new school together. A perfect example of this might be Claudio Berardelli, who coaches many of the top East African runners in the world. He’s got a PhD in exercise physiology but blends in the practical aspects as well. Here’s an interesting presentation by him in which he uses theory to try to figure out the correct training for Kenyans at altitude:
http://www3.unitn.it/events/icms07/download/presentazioni/Rossi_H_MSH2007.pdf
What about doctors?
But wait, I’m not quite done yet as I haven’t answered the central question. Before you think that this problem is limited to exercise science and training, I’d like to point out some other facts.

In the medical community, those doctors who we all assume do things by the book in terms of what the research says actually don’t. According to research referenced in Ben Goldacre’s book Bad Science, only 13% of all treatments used by doctors have good evidence behind them, with an additional 21% of treatments “likely” to be beneficial. Depending on the specialty, between 50% and 80% of all medical activity is evidence based. So doctors, too, don’t rely entirely on evidence-based practice.

I’d like to end with a quote from an article on researcher John Ioannidis, who specializes in researching research. Specifically, he’s shown that much of the research we rely on is flawed. The article itself is a fascinating read and shows what can happen when we rely entirely on “evidence” without critical thought. The quote below, however, is about the interaction between doctors and researchers. It’s interesting because the gap between researchers and coaches is just as wide, and for the most part neither group really understands the other (which is why you have researchers proclaiming low-volume, high-intensity training as the magic bullet, even though it works horribly long term in the real world for endurance athletes). Anyway, he makes some great points in acknowledging the balance that needs to be struck between doctors, research, and science.

Later, Ioannidis tells me he makes a point of having several clinicians on his team. “Researchers and physicians often don’t understand each other; they speak different languages,” he says. Knowing that some of his researchers are spending more than half their time seeing patients makes him feel the team is better positioned to bridge that gap; their experience informs the team’s research with firsthand knowledge, and helps the team shape its papers in a way more likely to hit home with physicians. It’s not that he envisions doctors making all their decisions based solely on solid evidence—there’s simply too much complexity in patient treatment to pin down every situation with a great study. “Doctors need to rely on instinct and judgment to make choices,” he says. “But these choices should be as informed as possible by the evidence. And if the evidence isn’t good, doctors should know that, too. And so should patients.”

What does this all mean? You should strive for balance. We can’t go entirely to either extreme. We don’t want to be slaves to the research, nor quacks who use pseudoscience to sell miracle cures or approaches. As shown above, even doctors can’t be slaves to research. The best doctor I know is a problem solver who uses his knowledge of science and how the body works to figure out the best way to treat you. He doesn’t just give you a drug because study X says it will get you mildly better. There’s a reason why pure exercise physiologists are seldom great coaches. There’s also a reason why people who know nothing about the science, like your typical HS football coach turned CC coach, are pretty bad at coaching. The best coaches or practitioners are those who can blend the two. How much of each? That’s the question each person needs to answer. In running, it seems like most American runners and coaches are afraid of the science compared to European coaches, and especially coaches in other endurance sports like cycling, speed skating, rowing, swimming, or cross-country skiing. My feeling is that the backlash in US running is most likely due to a misunderstanding of how to properly use and integrate the science. Use it properly and keep it in balance. That is the key.

 


    12 Comments

    1. marathonmaiden on November 18, 2010 at 5:47 pm

great post. i've always found the best people to turn to for advice hit that balance between science and practice. that's why i go to the doctors i go to: they think things through and look at the whole picture

    2. john on November 18, 2010 at 8:31 pm

      Well said. Our team of sports physicians, researchers, biomechanists and physios have looked at the research, some case studies, the trend towards minimal trainers and the various running theories. From this collaboration we have generated a new paradigm to explain chronic exertional compartment syndrome based upon biomechanical overload. The most exciting aspects are the results we are getting with our patients with gait modification and the right footwear. The research tells us that conservative treatment doesn't work. Next year we should have the evidence to prove otherwise. In this case the research is following the practice rather than the other way round. Interesting how much resistance there has been to introducing a non-evidence based management program despite showing why. Now the pressure goes on to get the evidence.

    3. Andrew on November 18, 2010 at 10:46 pm

      Hey Steve,
      This is off topic, but I've noticed Ryan having a banner freshman season. A true freshman doing what he's doing is really something. On the high school scene I see Craig Lutz pounding out one big performance after another. Props to him but I can't help wonder if he can sustain it. Great time of the year for XC fans!

    4. stevemagness on November 18, 2010 at 10:53 pm

Thanks for the comments guys. It's an interesting balance that has to be struck.

Andrew- Coach Hayes at UT is doing a great job. Ryan's really transitioned well to college training and the 10k. Just making it to NCAA's as a freshman is a great experience. Will is doing great at UT too; he made their top 7 at conference, which is pretty dang good for a freshman on a team that qualified for nationals.

    5. RH on November 19, 2010 at 4:38 pm

      Well said.

My own take on it is that most science is statistical evidence. Hence, by definition, it only says something about a whole (study) population, but nothing about one particular person. It is a bit like science in a courtroom. Knowing that eyewitnesses identify the wrong person in 30% of cases doesn't tell a judge whether a particular witness misidentified a suspect. It merely tells him to be cautious.

The most it can do is give some sort of best practice when you don't know enough about the particular case. So, great if you have to make an educated guess or write a one-size-fits-all 'improve your performance in just six weeks' schedule, but of much more limited value in coaching world-class athletes, who are by definition far from the average case.

On the other hand, the problem with practical experience is that it is often difficult to assess whether it does anything at all. There is often no clear feedback.

The prime example is what may be considered the birth of evidence-based medicine. For some 2,000 years the standard therapy for pneumonia and other diseases was bloodletting. Experience with bloodletting was compiled in a vast body of medical literature, and the most celebrated physicians boasted thousands of bloodlettings a year. In 1836 a man called Pierre Louis started actually counting whether pneumonia patients lived longer after bloodletting. Well, actually, since it would have been unethical to send patients home without a bloodletting, he recorded mortality among patients who were bled in the first days after they fell ill and patients who were bled days later. If a therapy is any good, early treatment should give better results than later treatment, he reasoned.

Unsurprisingly to us, he found that earlier treatment didn't exactly improve the chances of survival. Of those who were bled in the first four days, 18 out of 41 died, whereas among those who were bled later, 9 out of 36 died. He concluded cautiously that "the influence of bleeding, when performed within the first two days of the disease, is less than it seems at first sight, and that in general its power is very limited."

      So, indeed, one has to find the right mix.

    6. Anonymous on November 24, 2010 at 8:59 pm

      I've been told coaching isn't rocket science. I agree. It's also physiology, biochemistry, physics, history, observation, sound decision making, relationships, and art.
      And why include rocket science? Because I hope my athletes run REALLY fast.

      j

    7. Mark E. on November 26, 2010 at 11:29 pm

      Steve,
I was wondering if you could write something up about the effectiveness of supplementing with BCAAs and/or beta-alanine for runners.

      Thanks.

    8. jj on November 29, 2010 at 8:10 pm

      Hey Steve,
I recently read Running: Biomechanics and Exercise Physiology Applied in Practice by Bosch, and it made me re-think some of the common models in running.
I'd be interested to see your views on different periodization models: Matveyev vs. Verkhoshansky vs. Tschiene's models, for example. It was claimed that Matveyev's model "is not based on biological laws, but merely describes general plans for training successful athletes." I've also read (if I'm correct) that Vern Gambetta is a proponent of working on speed and coordination, for example, at the beginning of the year when the athletes are fresh. What do you think about this? I've read that you believe in doing speed year round (if I'm again correct), be it short sprints or whatnot. Do you know examples of successful middle-distance and distance athletes who use various models (that stray from Matveyev's model)?
      Thanks!

    9. Anonymous on December 6, 2010 at 1:57 pm

Hey, just a question kinda not related to this post, but I didn't know where to ask.

In your strength endurance hill repeats video, would you consider the 4 repeats a moderate workout day, as in your recent Running Times article on moderate workouts vs. hard workouts? Or would you do that in connection with a workout on the track and make it a hard day altogether?

I was thinking this would be a moderate day: run 3-4 miles to the hill, do the repeats, then on the way back do some hard 60m sprints with full recovery or about 300m steady sprints with full recovery.

    10. Michael Snijders on December 7, 2010 at 8:37 am

I followed an advanced coaching course given by Bosch while he was writing the book. During that course, we (the students) even proposed that Matveyev's model is not a preplanned model at all, but more of a descriptive model of what European coaches usually did at the time. And what they did was mainly dictated by the seasonal weather of continental Europe, so training intensity and volume were adapted to the weather. That many successful athletes used the model was simply because they had no other choice. The weather is what it is, the facilities are nearly always outside, so December-January is always going to be high volume, mid intensity. Less volume and you'll freeze; higher intensity and you'd need longer breaks (and freeze) or you'll just injure yourself (ever tried doing speedwork in midwinter?). Most European coaches still use the model, not because they think it is the best, but mainly because they have little other choice.

    11. jj on December 10, 2010 at 12:39 am

      Something by Vern:
Frankly I had never heard the term until two weeks ago, when I was lecturing at the English Institute of Sport. I am not sure if this is reverse periodization, but here is a model of the way I do it and the way many people today who are producing results are doing it:
      Get them Strong
      Build Foundational and Basic Strength
      Get them Fast
      This is the application of the strength, more plyo’s and explosive work
      Get them Fit
      Use the strength and speed as a base to develop fitness that is appropriate for the sport or event
      It is obviously a bit more complicated than this, but this is the substance of it.
The antiquated concept of building a huge aerobic base and then gradually getting more intense is very flawed. It worked in the days when competitions were few and concentrated in a "competition" phase.

    12. Anonymous on February 23, 2011 at 8:31 pm

      Great blog. Thanks Steve. More a bike man myself, I still find lots of interesting stuff to read. Running's the mother of endurance I guess. Won't get me running though, ain't got the legs for it. Keep 'm coming!
