Put up your hands and step away from the journals.
It may be no surprise to anyone who frequents this blog that I am an information addict. As such, I scour blogs all over the place to find useful thoughts and ideas relating to training, nutrition, and overall health. However, over the past few months I’ve noticed something I’ve never really noticed before. Everyone seems to be quoting studies regarding training, nutrition, and supplements!
Excellent…or maybe not?
As a complete science geek you might think I’d be happy about this, but truthfully, it all has me a little annoyed. I can’t tell you how many times I’ve pulled the study mentioned in an article or blog post only to find that the study design was completely inappropriate or the statistics used by the authors were a hot mess.
While I appreciate the intentions of well-meaning coaches and trainers, I’m concerned that the information being presented is often incomplete. With free access to places like PubMed, finding scientific abstracts is easy. But reading the actual studies and understanding them is a different matter.
Here are a few things you should note when reading a journal:
1. Impact Factor
Impact factor is basically a measure of how frequently the average article in a journal is cited over a given period of time. Higher-impact journals tend to be the most reliable in terms of quality of research.
Journals like Nature, Science, and Cell are very high impact journals. The journal of “How Much Hyooge Muscles I Can Get by Takin’ Dem Supplements” is probably not a journal I’d bother reading.
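For the curious, the standard two-year impact factor is just a ratio: citations received this year to a journal's papers from the previous two years, divided by the number of citable items it published in those two years. Here's a quick sketch with made-up numbers (Python used purely for illustration):

```python
# Hypothetical journal, computing its 2023 two-year impact factor.
# All numbers are invented for illustration.
citations_in_2023 = 1200  # citations received in 2023 to papers from 2021-2022
citable_items = 400       # articles and reviews published in 2021-2022

impact_factor = citations_in_2023 / citable_items
print(impact_factor)  # → 3.0
```

In other words, an impact factor of 3.0 means the average recent paper in that journal picked up about three citations in a year.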
2. Alpha Level
Although I’m simplifying a bit here, alpha is a statistical threshold that researchers set to determine whether their results count as significant or not. (Stats experts, please don’t crucify me here. I’m trying to simplify without losing the message.)
If alpha is set at 0.05 and the researcher confirms his or her hypothesis that training program X is better than training program Y, it means a difference that large would show up less than 5% of the time if the two programs were actually equal. This is good news.
Unfortunately, that also means there is up to a 5% chance that the “difference” was really just random influences or other sources of variation. When that happens it is called a Type 1 error, and journals tend to hate it. This is bad news.
If alpha is set lower (at 0.01), a fluke result is far less likely to slip through, so the finding is less likely to be due to random influences. If a study has reached significance at a lower alpha level, you can be more confident that a Type 1 error has been avoided. Good news.
On the downside, because you’re being so careful to avoid calling a false effect real, you might miss a training effect that is actually there. This is called a Type 2 error. This is bad news.
Generally speaking, journals tend to expect a certain level to be met, so the authors sometimes have no say in where alpha is set. The same data can come out significant at one alpha level and non-significant at another.
If a study says that a certain protocol wasn’t significant, check whether alpha was set at 0.01, because the authors may have missed an effect they would have deemed significant had they set it at 0.05. Sometimes you’ll even see a researcher sneak in an alpha value of 0.06 just to push a borderline result into significance.
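To make the alpha trade-off concrete, here’s a quick simulation sketch (hypothetical numbers, Python purely for illustration). It runs many fake studies where the two “training programs” are secretly identical, so every significant result is a Type 1 error by construction, then counts how often each alpha level cries significance anyway:

```python
import random
import statistics

def fake_experiment(n=100):
    # Both "programs" draw from the SAME distribution, so any
    # significant difference here is a Type 1 error by construction.
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]
    # Two-sample z statistic (normal approximation is fine at n=100)
    se = (statistics.variance(x) / n + statistics.variance(y) / n) ** 0.5
    return abs((statistics.mean(x) - statistics.mean(y)) / se)

random.seed(42)
trials = 2000
zs = [fake_experiment() for _ in range(trials)]

# Two-sided critical values: 1.96 for alpha = 0.05, 2.576 for alpha = 0.01
fp_05 = sum(z > 1.96 for z in zs) / trials
fp_01 = sum(z > 2.576 for z in zs) / trials
print(f"false positives at alpha=0.05: {fp_05:.3f}")  # roughly 0.05
print(f"false positives at alpha=0.01: {fp_01:.3f}")  # roughly 0.01
```

About 1 in 20 of these null studies comes up “significant” at 0.05, versus about 1 in 100 at 0.01, which is exactly the Type 1 error rate alpha promises; the price of the stricter level is that real effects become harder to detect (Type 2 errors).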
3. Study Design
Generally speaking, this is probably one of the most important things to consider, and it is where I think many studies fall short. A flaw in a nutrition study can be as simple as not having subjects record their food intake and instead relying on subjects reporting that their intake has remained the same.
A bigger flaw (mentioned recently in Alan Aragon’s Research Review) was that a study examining the effects of branched chain amino acid supplementation used a control group who got 64 grams of protein for the day and a BCAA group who got 109 grams of protein for the day (and an additional 90 grams of carbs). Obviously they found that the BCAA group was superior, but if you didn’t look closely you’d probably assume that this is because of the BCAAs instead of just total protein intake or carb intake.
Even things seemingly as trivial as whether you use the same subjects for both protocols are important. For example, if you were looking at the differences between one type of contraction and another on hypertrophy, you might be inclined to assign one group to train with concentric contractions and another group to do purely eccentric contractions.
However, when you go to do your statistics (you’d probably run something called an ANOVA), you might miss something, because there is usually more variability between two people than the training protocol itself introduces. Your study would have more power to detect a difference in training type if each subject did one type of contraction with one arm and the other type with the other arm.
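Here’s a rough simulation of that point (all numbers hypothetical): twenty subjects whose baseline strength varies a lot from person to person, plus a small real advantage for one contraction type. Comparing the arms as if they were two independent groups buries the effect in person-to-person noise, while the within-subject (arm-to-arm) comparison finds it easily:

```python
import random
import statistics

random.seed(7)
n = 20
# Hypothetical numbers: strength varies a lot person to person (SD ~15),
# while the assumed real training advantage is small (+3 units).
baseline = [random.gauss(100, 15) for _ in range(n)]
true_effect = 3.0
arm_con = [b + random.gauss(0, 2) for b in baseline]                # concentric arm
arm_ecc = [b + true_effect + random.gauss(0, 2) for b in baseline]  # eccentric arm

# Between-subject comparison: person-to-person variability dominates
# the standard error, so the small effect gets lost.
se_between = (statistics.variance(arm_con) / n + statistics.variance(arm_ecc) / n) ** 0.5
z_between = (statistics.mean(arm_ecc) - statistics.mean(arm_con)) / se_between

# Within-subject comparison: each subject is their own control, so the
# big baseline differences cancel out in the arm-to-arm difference.
diffs = [ecc - con for con, ecc in zip(arm_con, arm_ecc)]
se_within = (statistics.variance(diffs) / n) ** 0.5
z_within = statistics.mean(diffs) / se_within

print(f"between-subject z: {z_between:.2f}")  # small: effect lost in noise
print(f"within-subject z:  {z_within:.2f}")   # much larger: effect detected
```

Same subjects, same data, same true effect; the within-subject design simply strips out the variability that isn’t caused by the training.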
What’s that? You think it would be hard to get trained people to train each arm differently? You bet your ass it is! And that is part of the reason why you don’t see a lot of studies being done on trained individuals. Well…that and the fact that most training study participants are university students who are living on Kraft Dinner and beer because they’re the only ones who will let you do 16 muscle biopsies and train each arm differently for 12 weeks for $300.
There are so many possible issues here that I can’t even come close to touching on all of them, but I think you get the idea.
4. Funding Source
Personally, I think there is a lot less to make of this than most people suggest, but it is still worth mentioning. In my experience, when a researcher is seeking private funding for a study, they’ll design the study first, create a hypothesis, AND THEN contact a company for sponsorship.
Obviously the researcher is going to reach out to a company that stands to benefit if the hypothesis proves correct (which it usually does if the researcher has done his or her reading of the previous literature and knows what outcome is most likely). Of course, a company is also only likely to sponsor a study it thinks will benefit it. Why on earth would it sponsor something that could disprove its product?
However, all of this has little to do with the study itself, which is usually run by a grad student in pursuit of a Master’s or Ph.D. who has little dealing with the funding agency in the first place. Grad students are not “hired” by the sponsor to produce outcomes.
In the end, the researcher expects a certain outcome, they ask a company for money to demonstrate this, and then they publish it. I’m sure there are some shady dealings out there, but I don’t think they’re as common as you might expect.
Ultimately, I think what I’m trying to say is that unless you understand things like post-hoc analyses, calculation of statistical power, and the specific protocols used during the studies (and their assumptions), please leave the summarizing of studies to those who do.
Thoughts? Comments? Questions? Please post them below.