How many people reading this know what an I-squared value means when they see it in a clinical research paper? It's a statistical measure that offers an idea of whether the variation in results in a meta-analysis (a study that collects a number of papers investigating similar outcomes, then pools their findings to give one overall result) is due to chance, or due to the interventions or methodology being too different between the studies (heterogeneity). In other words, it gives a measure of the reliability of the pooled result. If there's too much heterogeneity, the result is not quite as strong, because there is a suggestion that not every study included in the analysis is looking at the same thing.

At one of our seminars last week, one of the speakers - Pamela Dyson, a research dietitian at the University of Oxford - discussed the relevance of the I-squared value in a meta-analysis investigating low carb diets in diabetes. The paper found a moderate effect of low carb diets on weight loss in diabetes patients, and of course that was the headline result the mainstream media latched on to. But the I-squared value showed that this finding wasn't particularly reliable: there was too much variation between the studies, so the analysis couldn't conclude with certainty that low carb diets have an effect in diabetes patients over and above any other diet.

Of course, the I-squared value wasn't reported in the coverage, either because those reporting on the research had no idea what it meant, or because they did and chose to ignore it. So the reporting of one study has led thousands to think low carb diets are the way to go in diabetes, even though the quoted result isn't as strong as it sounds. This is where reporting on clinical research presents an issue: those writing about it have a duty to report the results that will affect a reader's interpretation, but they also have to understand the stats in the first place.
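For readers who want to see the mechanics: I-squared comes from Cochran's Q statistic, which measures how much the individual study results scatter around the pooled estimate, weighted by each study's precision. The formula is I² = max(0, (Q − df) / Q) × 100%, where df is one less than the number of studies. Here is a minimal sketch in Python; the effect sizes and variances are entirely made up for illustration, not taken from the low carb meta-analysis discussed above.

```python
def i_squared(effects, variances):
    """Compute Cochran's Q and I-squared for a fixed-effect meta-analysis.

    effects:   per-study effect estimates (e.g. mean weight change)
    variances: per-study variances of those estimates
    """
    # Inverse-variance weights: more precise studies count for more.
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

    # Cochran's Q: weighted squared deviations from the pooled estimate.
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1

    # I-squared: proportion of variation beyond chance, floored at 0%.
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2


# Hypothetical studies that broadly agree: I-squared is 0%.
_, low_het = i_squared([0.50, 0.45, 0.55], [0.01, 0.01, 0.01])

# Hypothetical studies that disagree wildly: I-squared is high (~94%),
# so the pooled result should be treated with caution.
_, high_het = i_squared([0.10, 0.90, 0.50], [0.01, 0.01, 0.01])
```

A common rule of thumb is that I² above roughly 50% signals substantial heterogeneity, which is exactly the situation where a single headline number should not be taken at face value.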
The majority of healthcare professionals can, and will, critically appraise a paper. Even without digging into the statistics, they can usually smell when something is not quite right, although even they are not infallible. The lay public doesn't have the same skills, or the same desire to investigate further: they'll take what they're given at face value. So don't mislead them - it's not easy to reverse a mindset driven by 'reporting' of the research.