John Hinderaker points out how the latest study of the relationship between gun control laws and homicide rates uses its statistics to make a certain policy point, and how those same statistics could just as easily be used to refute that point:
…[W]hat jumps out at you when you read Fleegler’s article is that the decrease in fatalities that he documents relates almost exclusively to suicides. What his study really shows is that strict gun laws have little or no impact on gun homicides…
If you do the math, the ten “top” states, i.e., those with the most controls on guns, averaged 3.2 gun homicides per 100,000 population, while the ten “bottom” states averaged 3.5 gun homicides per 100,000. So the rate was slightly higher in the least regulated states. But that is only because Louisiana is an outlier: it has the highest homicide rate of any state, while it also has relatively few gun statutes… You can see what an anomaly Louisiana is. If you take Louisiana out of the equation, the remaining nine lowest-regulation states have an average gun homicide rate of 2.8 per 100,000, which is 12.5% less than the average of the ten states with the strictest gun control laws…
But there is more: note that Fleegler’s study covers all 50 states, but leaves out the District of Columbia. Why do you suppose he chose to do that? Because the District has 1) some of the nation’s most draconian gun laws, and 2) the highest murder rate in the country, higher even than Louisiana’s.
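Hinderaker’s arithmetic can be checked directly from the three averages he quotes; a minimal sketch (the per-100,000 figures are his, and Louisiana’s implied rate is simply recovered from the two bottom-ten averages):

```python
# Averages quoted by Hinderaker, gun homicides per 100,000 population.
top_avg = 3.2              # ten states with the most gun controls
bottom_avg = 3.5           # ten states with the fewest gun controls
bottom_avg_minus_la = 2.8  # the same bottom group with Louisiana excluded

# Louisiana's implied rate, recovered from the two bottom-ten averages:
# ten-state total minus the nine-state total leaves Louisiana alone.
louisiana = 10 * bottom_avg - 9 * bottom_avg_minus_la
print(round(louisiana, 1))  # 9.8 -- far above either group's average

# How much lower the nine remaining low-regulation states sit
# relative to the ten strictest states:
pct_lower = (top_avg - bottom_avg_minus_la) / top_avg * 100
print(round(pct_lower, 1))  # 12.5
```

The recovered figure of 9.8 per 100,000 for Louisiana, roughly triple either group average, is what makes it such an outlier that a single state flips the comparison’s direction.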
Back when I was in graduate school, even though I was in a clinical master’s program and not an experimental one, we were required to take statistics at the graduate level, significantly more difficult than the undergraduate statistics course I’d already plowed through. What’s more, we had to learn how to design our own research and to closely read and critique that of others. These studies were, of course, in social science fields; we weren’t critiquing physics (nor were we equipped to). But I developed the ability to skim a social science study—in psychology or sociology, primarily, or even medicine, which has some of the same characteristics—and the flaws would rather quickly jump out at me.
Those flaws were always present, and they were not trivial. Sometimes I would suspect that the author(s) had succumbed to bias, and sometimes the flaws arose merely from the inherent difficulty of designing such studies for human subjects, with all their built-in limitations: controls and confounding variables that were either inadequately dealt with or not taken into consideration at all. I learned that one has to be very, very wary, and that such scholarship can be put to almost any purpose.
But used, it will be. Despite the work Hinderaker has done to expose the flaws in this one, his message will not reach nearly as many people as the message the CNN headline draws from the study.