My fellow SBer Craig Hilberth at the Cheerful Oncologist writes about a meta-analysis that purports to show a positive effect of intercessory prayer. Neither Craig nor I have access to the full paper, but what we do know is the claim: the meta-analysis shows a result of g=-0.171, p=0.015.
This really ticks me off. Why? Because g=-0.17 is not significant. In meta-analysis, |g|=0.20 is generally considered the minimum cutoff for statistical significance.
Briefly, what is meta-analysis? Suppose you've got a bunch of studies of the same topic. Meta-analysis lets you take the data from all of the studies in the group and attempt to combine them: you can compute aggregate means and standard deviations, along with measures of the significance and reliability of those aggregate figures.
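To make that concrete, here's a minimal sketch of the combining step, using made-up per-study summaries (the numbers are purely illustrative, not from the prayer study):

```python
import math

# Hypothetical per-study summaries: (sample size, mean, standard deviation).
# These numbers are invented purely for illustration.
studies = [
    (30, 5.2, 1.1),
    (45, 4.9, 1.3),
    (25, 5.5, 0.9),
]

# Aggregate mean: weight each study's mean by its sample size.
total_n = sum(n for n, _, _ in studies)
agg_mean = sum(n * m for n, m, _ in studies) / total_n

# Pooled standard deviation, built from the per-study variances.
pooled_var = sum((n - 1) * s**2 for n, _, s in studies) / (total_n - len(studies))
pooled_sd = math.sqrt(pooled_var)

print(round(agg_mean, 3), round(pooled_sd, 3))
```

Real meta-analysis packages weight studies more carefully (typically by inverse variance rather than raw sample size), but the basic move is exactly this: collapse many studies into one aggregate mean and one pooled spread.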
Meta-analysis is a useful technique, but it's very prone to a number of errors. It's very easy to manipulate a meta-analysis to make it say whatever you want; and even if you're being scrupulously honest, it's prone to sampling bias. After all, since meta-analysis is based on combining the results of multiple published studies, the sample is only drawn from the studies that were published. And one thing that we know is that in most fields, it's much harder to publish negative results than positive ones. So the published data that's used as input to meta-analysis tends to incorporate a positive bias. There are techniques to try to work around that, but it's hard to accurately correct for bias in data when you have no actual measurements to tell you how biased your data is.
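You can see the publication-bias problem in a toy simulation. Here I simulate a pile of small studies of an effect whose true size is exactly zero, then "publish" only the ones that happened to come out positive; the filtering alone manufactures an apparent effect (the setup is hypothetical, chosen just to illustrate the mechanism):

```python
import random
import statistics

random.seed(1)

# Simulate 200 small studies of an effect whose true size is zero.
# Each "study" reports the mean of 20 noisy observations.
all_results = []
for _ in range(200):
    sample = [random.gauss(0.0, 1.0) for _ in range(20)]
    all_results.append(statistics.mean(sample))

# Publication bias: suppose only studies that found a positive
# result make it into print.
published = [r for r in all_results if r > 0]

print(round(statistics.mean(all_results), 3))  # close to zero: the true effect
print(round(statistics.mean(published), 3))    # clearly positive: pure bias
```

A meta-analysis fed only the "published" list would confidently report a positive effect that doesn't exist, which is exactly why the input sample matters so much.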
So getting back to the meta-analysis results that they cited, what's g? g, also called "Hedges' g", is a measure of how much the overall combined data set differs between groups: a measure of the size of any aggregate effect found by combining the studies. The idea is, you've got a bunch of studies, each of which has a control group and a study group. You compute aggregate means for both the study and control groups, take the difference, and divide it by the aggregate standard deviation. That's g. Along with g, you compute a p-value, which essentially measures the reliability of the g figure computed from the aggregate data.
Assuming a fixed-effects model - that is, that the studies are essentially compatible, all measuring the same basic effect - the minimum level at which g is considered significant is |g|=0.2, with a p-value of at most 0.05.
This meta-analysis has |g|=0.171, with a p-value of 0.015. So they're well below the minimum level of statistical significance for a fixed-effects meta-analysis, despite a p-value less than a third of the 0.05 level at which a |g|=0.2 would be considered significant.
So, what it comes down to is this: they did a meta-analysis which produced no meaningful result, and they're trying to spin it as a "small but statistically significant result". In other words, they're misrepresenting their results to try to claim that they say what they wanted them to say.