Nutritional Research Found Full of Empty Calories

A huge scandal has broken across the nutrition world: Brian Wansink, one of the most prominent researchers in the field, has had virtually his entire body of research called into question.

You’ve probably come across Wansink’s ideas at some point. He researches how subtle changes in the environment can affect people’s eating behavior, and his findings have made a mark on popular diet wisdom. Perhaps you’ve adopted the tip to use smaller plates to trick yourself into eating less, moved your unhealthy snacks into a hard-to-reach place, or placed your fruit bowl prominently on your kitchen counter. Maybe you’ve scoffed at the “health halo” marketing of a decidedly unhealthy food, or chosen 100-calorie snack packs to control your intake.

What happened? Well, basically, Wansink started opening up about some of his methodology on his personal blog.

Things began to go bad late last year when Wansink posted some advice for grad students on his blog. The post, which has subsequently been removed (although a cached copy is available), described a grad student who, on Wansink’s instruction, had delved into a data set to look for interesting results.

If you've ever studied statistics or the social sciences, that last sentence should have gotten your attention fast. One of the central tenets of good research is that you do not go hunting through data for surprising correlations; doing exactly that is called p-hacking. If you're curious why it's such a problem, the comic XKCD, as always, does a terrific job of showing how poorly this strategy works in real life.
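To see how easy it is to "discover" something in noise, here's a small sketch of my own (not anything from Wansink's data or the article): generate a completely random outcome and fifty completely random predictors, then go fishing for correlations. At the usual p < 0.05 cutoff, a couple of them will look "significant" purely by chance.

```python
# A minimal p-hacking demo: pure noise, many comparisons, spurious "findings".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 100
n_variables = 50  # e.g. fifty unrelated survey questions

# Everything here is independent random noise: there is no real effect to find.
outcome = rng.normal(size=n_subjects)
predictors = rng.normal(size=(n_subjects, n_variables))

false_positives = []
for i in range(n_variables):
    r, p = stats.pearsonr(predictors[:, i], outcome)
    if p < 0.05:
        false_positives.append((i, r, p))

print(f"{len(false_positives)} of {n_variables} pure-noise variables "
      f"look 'significant' at p < 0.05")
for i, r, p in false_positives:
    print(f"  variable {i}: r = {r:+.2f}, p = {p:.3f}")
```

Run it a few times with different seeds and the "significant" variables change every time, which is exactly what you'd expect from noise.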

Wansink later added a note to the post, arguing that “P-hacking shouldn’t be confused with deep data dives—with figuring out why our results don’t look as perfect as we want... Cool data contains cool discoveries.”

Using existing data to answer new questions is fine—plenty of excellent research does this with census data, for example. But if you don’t define your question before you go leaping in, you could come out latching onto any significant p-value and calling it real. “It doesn’t matter how ‘cool’ your data are,” writes statistician Andrew Gelman on his blog. “If the noise is much higher than the signal, forget about it.”
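If you really do want to mine an existing data set for many possible effects, one standard safeguard is to correct the significance threshold for the number of comparisons you're making. Here's a rough sketch of the Bonferroni correction applied to the same noise-fishing setup as above (again my own illustration, not something from the article):

```python
# Multiple-comparisons correction: with m tests, demand p < alpha / m.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects, n_variables = 100, 50

# Same setup as the sketch above: pure noise, many candidate predictors.
outcome = rng.normal(size=n_subjects)
predictors = rng.normal(size=(n_subjects, n_variables))
p_values = np.array([stats.pearsonr(predictors[:, i], outcome)[1]
                     for i in range(n_variables)])

alpha = 0.05
print("uncorrected 'findings':", int((p_values < alpha).sum()))
print("Bonferroni 'findings': ", int((p_values < alpha / n_variables).sum()))
```

Corrections like this are blunt instruments (Bonferroni in particular is conservative), but the point stands: the more questions you ask of the same data, the stronger the evidence needs to be before you call anything a discovery.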

Honestly, no part of me is the least bit surprised that one of the fathers of nutrition science fundamentally misunderstands the math. I have long argued that nutrition is one of the least rigorous sciences I have ever come across. Still, while not all of his work was peer reviewed, much of it was, so why didn't reviewers catch some of the flagrant violations unearthed in the article?

“Historically, we have not asked peer reviewers to check the statistics,” Brown says in the article. “Perhaps if they were [expected to], they’d be asking for the data set more often.” In fact, without open data—something that’s historically been hit-or-miss—it would be impossible for peer reviewers to validate the numbers at all.

This needs to change, fast.