If nutrition research is flawed, then why should I trust an evidence-based dietitian?


As I was scanning my Twitter feed a couple of weeks ago, I came across an article that caught my attention and prompted the writing of this blog post. It said: 

Almost 40% of peer-reviewed dietary research turns out to be wrong. Here’s why. 

WOW! 40% completely wrong! I immediately had one of those out-of-body experiences where I saw the past 15 years of my life flash before my eyes. To think that 40% of the articles I had read for papers I had written, presentations I had prepared, blogs I had researched and patients' nutrition issues I had investigated were wrong made me feel sick. I had been a phoney for the past 15 years. Ugh. I hate fakers. 

But then I took a deep breath, read the article (I recommend that you do the same) and realized that I knew precisely what the author was talking about. In fact, maybe my six years of post-secondary education did teach me something! I may not have known the “40%” statistic, but I knew the reasons why and it served as a good reminder. 

Although this post contains more academic content than usual (including statistics, research methodologies and other perhaps yawn-worthy material), I hope that you will take five minutes to read it. In doing so, you will save yourself a great deal of heartache and many unnecessary dietary maneuvers that are based on, well, crappy research, as you are about to find out.



Alright, so why is nutrition research flawed? 

This is a big can of worms to open and, admittedly, I don't have my PhD and I am not a statistician, so I am perhaps not the most qualified person to be writing this piece. Nonetheless, here are some reasons (two explored in the article, and others that are well established in the research world) why many nutrition research papers may turn out to be flawed. 

The use (or misuse) of null hypothesis testing.

Geez! Why did I pick the most difficult one to explain first? Long story short, studies aren't designed to directly prove what they want to prove, for example, hypothetically, that "eating beans cures cancer". Instead, researchers start from the opposite assumption, called the null hypothesis: that there is NO connection between eating beans and curing cancer, and that any apparent connection is just a matter of random chance. They then collect data and ask how surprising their results would be if that assumption were true. If the results would be very unlikely under the null hypothesis, the researchers reject it and declare the connection "statistically significant". I know, that's a lot of double negatives and backwards logic... As you can see, it isn't always the most straightforward process, and there are things called variables that affect the results too. 
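For the fellow nerds out there, here is a tiny computer simulation (with entirely made-up numbers) of what "random chance" looks like when the null hypothesis is true. Two groups of people are drawn from the exact same population, so there is no bean effect whatsoever, yet the groups still differ a little every single time just by luck:

```python
import random
import statistics

random.seed(1)

# Toy simulation, made-up numbers: the null hypothesis is TRUE here.
# Both "bean eaters" and "non-bean eaters" get outcome scores from the
# exact same distribution, so any group difference is pure random chance.

def group_mean(n):
    # Average outcome score for a group of n participants
    return statistics.mean(random.gauss(50, 10) for _ in range(n))

trials = 10_000
big_diffs = sum(abs(group_mean(30) - group_mean(30)) > 5 for _ in range(trials))
print(f"Runs where chance alone produced a >5-point gap: {big_diffs / trials:.1%}")
```

Even with zero real effect, a noticeable gap between groups pops up in a small slice of the simulated studies, which is exactly the kind of fluke that hypothesis testing is trying to rule out.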

The inability to control for all potentially confounding variables.

Confounding variables are factors that can impact the result of a study and may cause a researcher to find a correlation between two variables where there actually is none. Common variables that are controlled in studies include the participants' age, sex, ethnicity, weight, socioeconomic status, etc. Other variables that are controlled for are specific to the research study. For instance, in our example above, the researchers would likely exclude people who are allergic or intolerant to beans as part of their exclusion criteria.  

Nonetheless, it is downright impossible to control for every potential confounding variable, as sometimes researchers don't even know what those variables might be. This is one of the many reasons why a lot of nutrition research is conducted in lab rats. It is far easier to control what rats do in their cages than to attempt to lock a group of humans in a cage. (P.S. You would go to jail if you tried to do that in a study!)  
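To see how a lurking confounder can manufacture a connection out of thin air, here is another toy simulation (entirely made-up numbers). In it, age drives both bean eating and a "health score", while beans themselves do absolutely nothing, yet beans and health still end up strongly correlated:

```python
import random

random.seed(0)

# Hypothetical sketch: a hidden confounder (age) drives BOTH bean intake
# and the health outcome. Beans have ZERO direct effect in this simulation,
# yet they will appear correlated with health purely through age.

n = 5000
ages = [random.uniform(20, 80) for _ in range(n)]
beans = [0.1 * age + random.gauss(0, 1) for age in ages]    # older people eat more beans
health = [0.5 * age + random.gauss(0, 5) for age in ages]   # older people score higher; beans play no role

def corr(xs, ys):
    # Pearson correlation coefficient, computed from scratch
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = corr(beans, health)
print(f"Beans vs health correlation: {r:.2f}")  # clearly positive, entirely via age
```

A naive researcher looking only at beans and health would "discover" a strong link; controlling for age would make it vanish.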

The use (or misuse) of the P-value.

Many studies are considered to have "significant," news-worthy results when, in fact, the results are frankly only "suggestive" at best. This involves what statisticians call the P-value (or probability value) that is calculated when analyzing study results. The current cut-off that defines "significant" results allows a great deal of information to be labelled significant when it really isn't. Setting a stricter cut-off (as suggested in Clinton's article) would eliminate the hype around results that are not really that strong. 
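Here is a quick sketch (hypothetical numbers again) of what that cut-off means in practice: run thousands of simulated studies where there is truly NO effect, and count how many clear the usual 0.05 bar versus the stricter 0.005 bar anyway.

```python
import random
import statistics

random.seed(42)

# Made-up simulation: every "study" below compares two groups drawn from
# IDENTICAL distributions, so the null is true and any "significant"
# result is a false alarm.

def z_statistic(n=50):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return abs(statistics.mean(a) - statistics.mean(b)) / se

studies = 2000
zs = [z_statistic() for _ in range(studies)]
loose = sum(z > 1.96 for z in zs)    # roughly the p < 0.05 cut-off
strict = sum(z > 2.81 for z in zs)   # roughly the p < 0.005 cut-off
print(f'"Significant" at 0.05: {loose} of {studies}; at 0.005: {strict}')
```

Even with no real effects anywhere, roughly one study in twenty squeaks past the 0.05 bar by chance; the stricter 0.005 bar lets far fewer flukes through.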

The small size and short duration of studies.

The clear majority of research studies involve a small number of participants and run for a limited time because it takes big money to run large-scale trials that last for decades. A small study doesn't give us the same powerful data that a larger study provides. Even without using statistics to prove my point, it's easy to see that a study conducted on a group of 5000 people for 5 years might be more convincing than a study using 5 people for 5 days. 
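If you'd like to see that intuition in numbers, here is one more toy simulation (made-up values): estimating a true average intake of 1000 mg of calcium per day from a 5-person study versus a 5000-person study, each repeated many times.

```python
import random
import statistics

random.seed(7)

# Hypothetical numbers: individual daily calcium intakes average 1000 mg
# with lots of person-to-person variation. A tiny study's estimate of that
# average swings wildly from run to run; a large study's barely moves.

def study_estimate(n):
    return statistics.mean(random.gauss(1000, 300) for _ in range(n))

small = [study_estimate(5) for _ in range(1000)]
large = [study_estimate(5000) for _ in range(1000)]
print(f"n=5:    estimates span about {max(small) - min(small):.0f} mg")
print(f"n=5000: estimates span about {max(large) - min(large):.0f} mg")
```

The 5-person studies scatter across hundreds of milligrams, so any single one of them could easily be far off the truth, while the 5000-person studies all land close to 1000 mg.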

On a personal note, don't be afraid to enrol in a research study. They are always looking for participants. I did back in my Kinesiology undergraduate days (pictured below) at McMaster University and received free high intensity wingate training sessions with Dr. Martin Gibala's research group and three muscle biopsy scars as permanent proof of my commitment to science! More importantly, I developed a greater appreciation for the work that research groups do and the multitude of steps involved before a paper is published. 

The flaws with how studies are designed.

One of the best clinical research designs is the double-blinded, randomized, placebo-controlled trial. This is where neither the researchers nor the participants know which group the participants have been assigned to (assignment is done in a random manner, not cherry-picked) and some of the participants are given a placebo. A big problem, though, is that nutrition research is hard to "blind" and/or give a "placebo," especially when it comes to studies that involve real food. Many people can tell if they are, for example, drinking cow's milk or soy milk or rice milk … it's hard to blind that. This means that lower-quality study designs must be employed and, consequently, the strength of the results is weaker. Different research questions call for different study designs, but if you ask any researcher, the design that they WANT to employ is typically impractical, unethical or too expensive. 

The inaccuracy of dietary records.    

Most nutrition research studies use data collection tools like food records and food frequency questionnaires to estimate dietary intake. These tools rely on our human brains to remember what we have eaten over a short or long period of time. Have you ever lied about what you ate? Cue nodding of your head right now. I have lied (or, shall I say, not shared everything) about what I have eaten too … and so have millions of Americans over the past 40 years. Want proof of this? Check out this great article. If you're a tech-savvy entrepreneur, please consider developing a new way of recording a person's dietary intake that does not impact or interfere with their consumption of food.  


The ever-present influence of bias.

Bias is prejudice in favor of (or against) someone or something compared with another. Where is bias in research? Everywhere. There is the bias of the researchers, the bias of the participants, the bias of the funding source, the bias of the undergraduate student interpreting your food diary, etc. There are ways to control for these biases, for example, in the study design, where both the researchers and participants are blinded to the treatment arm in which the participants are placed. In the end, there will always be some sort of bias present in all research, whether stated or unstated. We are, in fact, imperfect humans. 

So, as a dietitian who claims to be evidence-based, why should I trust you if the evidence is wrong?

Fantastic question. The above points illustrate very nicely why you should trust me (or another evidence-based registered dietitian) over any other non-regulated nutrition advisor, magazine ad or TV doctor (ahem, Dr. Oz). 

I understand how to interpret and read research properly.

Part of our training as registered dietitians is to take statistics and research methods courses to obtain a solid understanding of the flaws that are inherent in the research that we use to guide our recommendations. It was here that I learned about P-values, null hypotheses, confounding factors in research and many other fascinating topics. Yes, I’m a nerd. By the way, I recommend that you work with a dietitian who is a nerd because then you know that they can critically analyze information and won’t give you garbage advice to drink apple cider vinegar to lose weight when the quality of the evidence is poor/non-existent.  

I read. Every. Single. Day.  

My recommendations are based on evidence from studies that I have READ, analyzed and interpreted through a critical lens. So, when a headline comes out saying that "coffee causes cancer," I don't dump my java or send an urgent email telling my clients to flush their coffee pots at work down the toilet. No, I find the study, read it from cover to cover, draw my own educated conclusions, and seek out the opinions of other academics whom I trust in their respective fields. A fun afternoon for me is searching PubMed for the results of a new meta-analysis or attending a conference. How about you?! 

I tell my clients when there is no decent answer to their question. 

Hopefully most of us would agree that honesty is the best policy. I am happy to say that I am often heard telling my clients, "I am not sure/we just don't know." Sometimes there isn't good-quality evidence available to give a black-and-white answer. So, when a client asks a question like "does baby-led weaning lead to decreased picky eating at 3 years of age?" and I tell them "we don't know" or "we have only very low-quality evidence," I'm okay with that. They might not like that answer, but it's the truth and I'm all about the truth. You can thank my parents for that.

I try to limit my biases as much as possible, but realize that, I too will be biased. 

We are all biased. I bring my own inherent personal biases into my practice. However, I know what these biases are and remind myself of them daily so that I can do my best to limit the impact that they have on my advice and how I interpret nutrition research. Acknowledging that you have biases as a health care provider is important and yet many providers fail to admit their own prejudices. 


So where do we go from here? 

I think that the author of the article that inspired this blog issued a great challenge, one that I am happy to pass along to you as well. 

“Your job here, should you choose to accept it, is to ignore a huge percentage of the food research you read about. You want to send the message that journalists need to find better cheap, cute stories to lure you in, that university press departments need to find better subjects for press releases, and that you’re not going to put up with any hanky-panky from food bloggers and TV doctors. Until you see that 0.005 (strong P-value), you need to be a cold, immovable lump of stone. Sure it sounds harsh, but it’s for science. Are you with me?”


Take it one bite at a time,