Every day we hear about studies proving that one food causes cancer, another causes heart disease, and some magic supplement will help you live forever. The media loves to report sensational stories about the dangers and benefits of everything, and we can hardly blame them: these are the stories that capture eyeballs and likes on Facebook. Unfortunately, they are also the stories most easily misrepresented and misinterpreted.
That is why it is worth having a basic understanding of how to interpret fitness and nutrition research. What are the main types of studies? What are their advantages and disadvantages? Which statistics are worth looking at?
With this knowledge and a little practice, you will be able to interpret the research yourself and reach your own conclusions.

1. Observational studies

Observational studies (also called epidemiological studies) are the type generally discussed in the news. There are several different kinds of observational studies, but we will focus on so-called cohort studies.
1.1 Influence of cohort studies on fitness and nutrition research
Cohort studies follow groups of people, called populations, over a period of time. While tracking these populations, researchers try to determine how certain factors, such as foods, affect certain outcomes, such as cancer.
There are two types of cohort studies: prospective and retrospective. Prospective studies are carried out from the present into the future: the subjects are monitored over a period of time while the data is collected. Retrospective studies, on the other hand, look at historical data, from a point in the past up to the present. Prospective studies are generally considered the more valuable of the two.
Unfortunately, observational studies can only measure correlation, not causation. In fact, one of the basic principles of statistics is that “correlation does not imply causation”. This is an elegant way of saying that just because two things happen at the same time (correlation), it does not mean that one caused the other (causation). For example: are people healthy because they take multivitamins, or do people who are already healthy simply tend to take multivitamins? Think about it.
By design, observational studies simply cannot provide evidence of cause and effect. This means you should not use these studies alone to change your lifestyle and behavior; that is what randomized controlled trials are for.
2. Randomized controlled trials
The randomized controlled trial (RCT) is used to test the effectiveness of a treatment or intervention in a population. Unlike observational studies, which can only find correlations, these trials can establish causality.
Randomized controlled trials start with a hypothesis and a group of people (or, sometimes, mice). The subjects are divided randomly into groups (hence, randomized). One group receives a treatment, such as a supplement, a diet, or an exercise protocol, while another group acts as a “control”. The control group does not receive any real treatment; it serves as an objective comparison, to check whether the treatment really had an effect on the treated group.
Industry-funded research is notorious for producing remarkably positive and impressive results. Studies run by pharmaceutical companies and supplement manufacturers can be ingeniously designed to obtain favorable results, while failed trials often go unpublished.
We advise you to be skeptical of industry-funded results and to always look for independent research to verify any claim. In other words, check the “conflicts of interest” section first, before jumping into the details of the paper.
Statistical significance appears in both observational studies and RCTs. It is difficult to calculate unless you have taken some statistics courses, but in a nutshell: if the results are statistically significant, you can be reasonably sure they were not due to chance. However, something very important to keep in mind is that just because something is statistically significant does not mean it is clinically significant. That is, statistical significance says nothing about the magnitude of the results.
Clinical significance (also known as practical significance), on the other hand, tells you whether the magnitude of the results is large enough for the treatment to be worthwhile. There is no established threshold for it; it is really a judgment call based on experience. For example, suppose you are testing a new weight loss pill. You give the pill to 1,000 obese people and all 1,000 lose 1 kg. These subjects started the study weighing 135 kg and finished at a svelte 134 kg.
You can be pretty sure that these results are statistically significant: the pill almost definitely caused weight loss. However, are these results clinically significant? Probably not.
Even if a study shows statistically significant results, it means little if the treatment is not clinically significant. If the pill had helped each person lose 45 kg, that would be clinically significant.
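One way to see the gap between the two kinds of significance is a quick back-of-the-envelope test. The sketch below reuses the hypothetical pill study from above; the 2 kg standard deviation is an assumption added purely for illustration, not a figure from the text:

```python
import math

n = 1000          # hypothetical subjects from the example above
mean_loss = 1.0   # average weight lost, in kg
sd = 2.0          # assumed standard deviation of individual weight changes

# One-sample z-test: is the average loss distinguishable from zero?
z = mean_loss / (sd / math.sqrt(n))
p_value = math.erfc(z / math.sqrt(2))  # two-sided p-value

print(f"z = {z:.1f}, p = {p_value:.1e}")  # a vanishingly small p-value
print(f"...but the effect is only {mean_loss} kg on a ~135 kg person")
```

The p-value is astronomically small (statistically significant), yet a 1 kg loss on a 135 kg person is trivial (clinically insignificant), which is exactly the distinction the text is drawing.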
Deceptive statistics in fitness and nutrition research
The most misleading research statistic presented by the media is relative risk. This statistic is used in both observational studies and RCTs, and it is a measure of how harmful or beneficial something can be.
When you hear on the news that eating red meat causes a 50% increase in the risk of heart attack, what are you supposed to do? Obviously you’re supposed to throw out all the meat in your refrigerator, become a strict vegetarian, and congratulate yourself on having discovered how to live forever, right?
The 50% figure cited in the news is almost always a number called relative risk: the rate of some outcome in the intervention group relative to the rate of that outcome in another group. Take the red meat example.
Researchers wanted to see whether people who eat meat have more fatal heart attacks than people who do not, so they organized a large observational study following 200,000 meat eaters and non-meat eaters over 10 years. In that study they found that 6 out of every 1,000 people (0.6%) in the meat group died of a heart attack, while 4 out of every 1,000 people (0.4%) in the meatless group did. 6 deaths is a 50% increase over 4 deaths.
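The arithmetic behind that headline figure can be sketched in a few lines, using the hypothetical counts from the study above:

```python
def relative_risk(events_a, total_a, events_b, total_b):
    """Rate of the outcome in group A relative to group B."""
    rate_a = events_a / total_a   # e.g. heart-attack deaths among meat eaters
    rate_b = events_b / total_b   # e.g. heart-attack deaths among non-meat eaters
    return rate_a / rate_b

# 6 deaths per 1,000 meat eaters vs 4 per 1,000 non-meat eaters
rr = relative_risk(6, 1000, 4, 1000)
print(f"{rr:.1f}")          # 1.5
print(f"{(rr - 1):.0%}")    # 50% — the "increase" the headlines report
```

Note that the headline number depends only on the ratio of the two rates, never on how small the rates themselves are, which is precisely why it can mislead.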
However, despite the generally misleading nature of relative risk, there are cases where it is informative. For example, one study showed that the relative risk of lung cancer for male smokers versus non-smokers is 2,300%. That means male smokers are 23 times more likely to develop lung cancer. Once the relative risk climbs past a few hundred percent, it may be worth paying attention.
There is a statistic that is much more important than relative risk: absolute risk, the difference in the rates of an outcome between the intervention group and another group. This metric is also used in both observational studies and RCTs.
In the hypothetical red meat study, 0.6% of people who ate meat died of a heart attack, while 0.4% of people who did not eat meat did. That is an absolute risk increase of 0.2% (0.6% − 0.4% = 0.2%). In other words, eating meat does not give you a 50% increase in the risk of dying from a heart attack; in reality it only gives you a 0.2% increase. Does 0.2% seem clinically significant? Probably not.
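The subtraction described above is equally easy to sketch, again using the hypothetical counts:

```python
def absolute_risk_increase(events_a, total_a, events_b, total_b):
    """Difference in outcome rates between group A and group B."""
    return events_a / total_a - events_b / total_b

# Same hypothetical data: 6/1,000 meat eaters vs 4/1,000 non-meat eaters
ari = absolute_risk_increase(6, 1000, 4, 1000)
print(f"{ari:.1%}")   # 0.2% — far less alarming than the 50% relative figure
```

Putting both numbers side by side is the whole point: the same data yields a scary 50% relative risk and a negligible 0.2% absolute risk.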
A good reference point for clinical significance might be 5–10%. If 8% of the meat group had died while only 0.4% of the meatless group died, then people who eat meat might really have something to worry about.