Research likes dealing with the average. If you fall far outside the average, you might be in trouble. For years, only the average received any attention. For instance, if you looked at an intervention study and the average group improvement was a significant 30 seconds, then whatever the intervention was, it worked, despite the fact that many such studies include an outlier or two who saw no improvement. No one asked why certain individuals improved more or less. Who cares about the individual?
Recently though, more attention has been paid to those who didn't see any improvement or change. The scientific community's buzzword is "non-responders." A couple of years ago you heard the term used in relation to altitude training. No matter how well designed the study was, almost every altitude-related research study had a group of non-responders who showed no changes. Flash forward to the present day, and in exercise research the word is popping up again. This time, though, it's used for those who don't show changes in strength or endurance following a standard training program. It's a widespread phenomenon that spans several parameters, from strength to endurance to health. According to Timmons (2011), for most variables about 10% of the study population are non-responders, while for some variables, such as changes in insulin sensitivity, up to 20% are non-responders.
For instance, in recent strength training studies, despite everyone following the same program, increases in muscle size varied from no change at all to a 60% increase! The same thing can be seen in changes in VO2max, mitochondrial density, and so on. So what's the problem? Studies are built for the average, ignoring those who fall outside that realm. In coaching, it'd be like taking a 7-man cross country team, training them all the same, and not caring that a couple don't improve as long as the majority (4) did. That seems kind of unfair to the few who didn't improve at all! Timmons et al. put it best when talking about the non-responder phenomenon:
“It is also an observation that is largely ignored by the majority of researchers interested in the health benefits of exercise training, presumably because the focus has been on the “average” health benefits within a population and the desire to have a simple health promotion message.”
The last portion is particularly telling. In essence, it's the desire to have a one-size-fits-all recommendation, or in training terms, a magic training plan that works for all...
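To make the averaging problem concrete, here's a minimal sketch (with invented numbers, not data from any actual study) of how a group's mean improvement can look impressive while the non-responders disappear into it:

```python
# Hypothetical 5k improvements (in seconds) for a 7-man team after the
# same training program. The numbers are made up for illustration only.
improvements_sec = [45, 50, 38, 42, 0, 55, -2]

# The "study result": a healthy average improvement.
mean_improvement = sum(improvements_sec) / len(improvements_sec)

# The part the average hides: runners who didn't improve at all.
non_responders = [imp for imp in improvements_sec if imp <= 0]

print(f"Mean improvement: {mean_improvement:.1f} s")
print(f"Non-responders: {len(non_responders)} of {len(improvements_sec)}")
```

The mean comes out to roughly 33 seconds of improvement, which reads as "the program worked," even though two of the seven runners got nothing out of it.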
Let's look at why we might see non-responders...
As a coach, the main answer to why people are non-responders seems pretty obvious. If you've coached at any level, you know that despite giving the same or similar training, the results can be drastically different. Fortunately, as coaches we have quickly realized the need for individualization. Two runners who might have the exact same PRs can and do respond completely differently to training. For instance, if we have two milers with the same PRs, one might need more threshold-type work to improve aerobically, while the other might need more Igloi-style aerobic intervals to get the same adaptation. Or an even simpler example: two runners, one who needs 80 miles per week to see aerobic benefits, while the other needs 110. If we did a study and gave both guys 80 miles per week, one would be a responder while the other (the guy who needs 110) would be a non-responder.
In these examples, and in many cases, the problem isn’t the person being a non-responder, it’s that the stimulus is wrong. In the real world, you see it as a runner’s race times not improving, while in the research world you might see some variable not changing after a training intervention.
It's just my gut feeling, but a lot of the "non-responders" seen in the research literature are probably due to the wrong stimulus. Of course people have a highly individual response to training, but the goal should be to figure out what kind of training each individual responds best to.
How much of the non-response in studies is due to the wrong stimulus and how much is due to an actual non-response is unknown...
Another factor to consider is whether the measurement actually measures what people think it does.
We know the ultimate goal is performance changes and performance changes are hard to measure accurately (any coach will tell you that…just look at how difficult it is to get 5 guys all on the same CC team performing well on a specific day…). A lot of times, research uses physiological parameters like lactate threshold, VO2max, running economy, anaerobic capacity, etc.
These are all useful parameters to a degree, but the problem is that they are often used as surrogate markers for performance. For instance, many studies rely on VO2max and make the leap that if VO2max is improved, performance is improved. Or they make the link that if VO2max is improved, aerobic abilities are improved... which is not the case. Again, we'll go to a quote from Timmons et al. (2011) looking at the molecular basis for these claims:
“Thus, based on the available human data, aerobic capacity is an important predictor of human health (6, 7, 40, 55, 69); improvements in aerobic capacity can be predicted from the expression level of a group of non-exercise-responsive genes (in muscle) and that the molecular processes stimulated in the high responders (for aerobic capacity) involve calcium signaling, extracellular matrix signaling, and promotion of angiogenesis (91, 92). In contrast, improvements in aerobic performance relate more to alterations in muscle energy metabolism (100) and it would be expected that the genes that control the variable training-induced improvements in performance will be distinct from those that control the health-related gains in aerobic capacity.”
To summarize: aerobic capacity and aerobic performance are different!! And one more quote to get the point across:
"That is, it is a mistake to assume both of these parameters are always directly coupled."
As I always like to give practical examples, one is the interpretation of the now-famous Tabata workout. It's a repeated high-intensity interval workout with claimed benefits of improving both aerobic and anaerobic abilities. The original study found that in relatively untrained people, VO2max increased after doing repeated short maximal-intensity intervals. That's not unexpected, for a lot of reasons. The problem comes when people interpret this as saying aerobic performance is best attained by doing short sprints (cough Crossfit cough). That would be the wrong interpretation... (For clarification, another interpretation, using Noakes' Central Governor Model, is that maximal sprints increasing VO2max is not surprising, as VO2max is directly related to muscle fiber recruitment. There are other interpretations too, which I'll go into in detail another time.)
Another quick example: when researchers looked at Kenyan and European runners back in the '90s, they used aerobic capacity as the sole marker for performance. This led to some suspect conclusions, as researchers consistently stated that VO2max didn't change or was similar between groups. For instance, in a study by Larsen and Saltin looking at the trainability of Kenyan versus Danish boys, they relied on VO2max as a measure of "initial fitness" to conclude that both groups started at the same fitness level; after 12 weeks, the Kenyan boys ran 10% faster. The problem is that VO2max does not equate to initial fitness. Who's to say the Kenyan boys weren't 10%, 5%, or 15% faster when they started...
The bottom line is that markers and performance aren't always directly correlated or related.
What’s the practical impact?
One of my grad school professors was fond of evidence-based practice, which makes complete sense. The problem, though, is that when all the research is aimed at the average, as Timmons points out, what do you do when you get the outlier? You've got to deviate from the research, sometimes by a whole lot. So what do you do as a coach reading the research?
1. Use it as a guide, not as something set in stone.
2. Don’t try and fit your individual into the research.
3. Pay attention to how your individual athlete responds to every kind of stimulus.
I can't tell you how many times I've talked to fellow coaches who continue to use some sort of training method despite seeing no practical benefits from it. When asked why they keep doing it, the answer is that the research says it works. Well, that may be true, but the research didn't test your exact athlete, who may be doing different workloads and volumes, and may have a different individual physiology (fiber type, for example) than those in the research. It doesn't mean the research is wrong; it just means that if it's not working, don't hold onto it because it's "research based." Instead, try to adjust the stimulus to get the adaptation you are looking for.
A practical example would be with lactate threshold. I've talked about this before, but in general an athlete more fast-twitch oriented for his event will need slightly different work to get the same increase in threshold than a slow-twitch oriented athlete. They need a slightly different stimulus. An ST athlete might need more traditional steady threshold work, while an FT athlete might need more progressive work, where he works above and below the threshold and then adds in more medium intervals at faster than LT pace.
In a recent Running Times article, coach and exercise physiologist Pete Pfitzinger demonstrates this wonderfully. He writes about what he's changed since his RT column days. In one section he describes how in the past he subscribed to a very defined window in which tempo/threshold work should be done. Now, he feels there is a much wider range of paces in which tempo work should be done. This is similar to another coach, Jack Daniels, who originally had a very narrow threshold "zone" and said stuff done much slower was essentially a black hole of training. In his recent book, he's adjusted that to a wider range too, though not as wide as Pfitzinger's. The point is that these guys based their views on the research, which said to improve your threshold, run at this exact speed... Well, that worked for many, but not all. And it was quickly realized that things needed to be adjusted.
The point of the example is not to become a slave to the research. Not because the research is wrong, but because we still have a lot to learn and figure out, especially about the individual response to training, and studies can't currently discern this.
The Bottom Line:
I’ll end this with another quote from the Timmons et al. (2011) paper:
“Probably the single most important philosophical question to raise at this point is why, given our apparent recent heritage as an “active” hunter gatherer (18, 56), do we have a significant number of humans unable to mount a strong physiological adaptive response to physical activity? Is it the case that for some subjects we provide an inappropriate pattern of stimulus for their particular genotype? We are far away from a scientific basis for tailored exercise prescription for the general public…”
There's a lot left to learn, and it's about time it was realized that there is a large individual response to training that has largely been ignored in the research world. It's probably one of the reasons why coaches' training methods differ from purely research-based methods...
It's not the research that's bad. It's whether we as researchers ask the right questions and interpret the results correctly... which is a hard thing to do.