2.13.2014

cherry-picked data and undisclosed bias: the failure of freakonomics

Allan came home from one of his used-book sale sprees with copies of both Freakonomics and Super Freakonomics. I had read so many excerpts from, and reviews of, these books over the years, and their appearance was a reminder to actually read them myself.

You're probably familiar with the general premise of Freakonomics. Steven D. Levitt is an economics professor at the University of Chicago, and Stephen J. Dubner is a well-known writer and editor. The two teamed up to write an unusual mix of story, statistics, and surprise for a popular audience, using research data to draw counterintuitive conclusions. Freakonomics' stories challenge conventional wisdom and seek to demonstrate how we often ask the wrong questions, and thereby draw the wrong conclusions.

Freakonomics is easy to read, and I found the stories entertaining and interesting enough, but every so often, an inaccurate word or phrase would jump out at me - a broad assumption would be asserted, without evidence - a bias would be exposed, but not stated. At first I thought I was nitpicking, but as these trouble-spots added up, I came to doubt the validity of the authors' work altogether.

Correlation versus causality in the unconventional wisdom

Early on, Levitt and Dubner remind us of the difference between correlation and causality.
... just because two things are correlated does not mean that one causes the other. A correlation simply means that a relationship exists between two factors - let's call them X and Y - but it tells you nothing about the direction of that relationship. It's possible that X causes Y; it's also possible that Y causes X; and it may be that X and Y are both being caused by some other factor, Z.
Levitt and Dubner say that conventional wisdom often confuses correlation with causality, or assumes causality where none may be present. I agree. Unfortunately, their proofs often do exactly the same thing.
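The confounder case - X and Y both driven by some hidden Z - is easy to see in a minimal simulation. This is only a sketch: the variables and effect sizes below are invented for illustration, not drawn from the book.

```python
import random

random.seed(0)

# Z is a hidden common cause. X and Y each depend on Z plus independent
# noise; neither causes the other.
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [zi + random.gauss(0, 1) for zi in z]

def corr(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

print(round(corr(x, y), 2))  # a substantial correlation, near 0.5
```

X and Y move together only because both track Z; the sizable correlation reflects no causal link in either direction.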

You may recall the Freakonomics highlight that created a huge amount of buzz: the authors revealed a correlation between the precipitous drop in violent crime in the US in the mid-to-late 1990s, and the legalization of abortion in 1973. According to their analysis, the conventional explanations for the decrease in crime - better policing methods, tougher sentencing laws, and so on - were merely coincidental. The real reason for the drop in crime was that fewer unwanted babies were born.

After showing us many statistics about abortion rates and about crime rates, they write:
What sort of woman was most likely to take advantage of Roe v. Wade? Very often she was unmarried or in her teens or poor, and sometimes all three. What sort of future might her child have had? One study has shown that the typical child who was unborn in the earliest years of legalized abortion would have been 50 percent more likely than average to live in poverty; he would have also been 60 percent more likely to grow up with just one parent. These two factors - childhood poverty and a single-parent household - are among the strongest predictors that a child will have a criminal future. Growing up in a single-parent home roughly doubles a child's propensity to commit crime. So does having a teenage mother. Another study has shown that low maternal education is the single most powerful factor leading to criminality.

In other words, the very factors that drove millions of American women to have an abortion also seemed to predict that their children, had they been born, would have led unhappy and possibly criminal lives.
The authors then note in passing that legalized abortion brought about many social consequences, and they list some, including a sharp drop in the number of white, American-born babies available for adoption. (This, in turn, gave rise to an increase in international adoptions.)

Here's what Levitt and Dubner do not say. Since the legalization of abortion in the US coincided exactly with a marked decrease in (white) babies available for adoption, it is highly likely that many of the women who chose to terminate pregnancies (after abortion was legalized) would have surrendered their babies to adoption (before legal abortion). Therefore, those children would not have been raised in poor or single-parent families, since adoptive families are highly unlikely to be either. This is still true today, but was even more true in the 1970s.

This important qualifier was omitted from the Freakonomics equation. In other words, the authors demonstrate a correlation between legalized abortion in 1973 and a drop in crime in the mid-1990s, but in trying to prove causation, they cherry-pick the evidence. Is it possible that the big bombshell revealed in this book, a correlation between legalized abortion and crime, is not causal after all?

The more I read Freakonomics, the more I had the nagging feeling that Levitt and Dubner do exactly what they tell us the wrong-headed, knee-jerk, and short-sighted among us do. They don't flat-out assume causation, but neither do they examine factors that disprove their theory. Instead, they posit a question that challenges the status quo, then find the evidence they need to prove it.

Language matters... and so does full disclosure

Word choices troubled me. Rhetorical questions angered me. And undisclosed conflicts of interest call integrity into question. Here are four examples: on abortion, public schools, white-collar crime, and sexual assault.

Abortion. When writing about abortion, Levitt and Dubner use the expressions "pro-choice" and "pro-life". As you know, I believe the term "pro-life" has no place in good journalism, except in the name of organizations or in a quote. It is one of the most successful pieces of propaganda of all time, and a journalist who uses the expression has agreed to be manipulated.

Perhaps the authors felt that if they used the term "anti-abortion," they would be forced to also write "pro-abortion," which of course is not the same thing as pro-choice. Or perhaps I am being overly generous: the section on the link between abortion and crime contains many unattributed, "some experts feel" statements about the "violence" and "death rate" of abortion, although their general conclusion is that government should let women decide what to do with their own pregnancies.

Whatever their bias, the language solution is simple. It requires the addition of only one word: pro-abortion-rights, anti-abortion-rights. Or pro-legalized-abortion, anti-legalized-abortion. In other words, word choice that accurately describes, rather than adopts, a position.

Public schools. In a segment examining cheating on standardized tests, the authors claim to prove that some Chicago public-school teachers helped students cheat, and insinuate that such cheating is supported by teachers' unions.

The mention of unions seemed so strangely out of place - a completely gratuitous shot - that I searched online to see if there was a connection. I quickly found it. Levitt was involved in the drive to privatize the Chicago school system, which of course includes union-busting.

The very question Freakonomics asks, "Do school teachers cheat on standardized testing?", is itself biased: it is a weapon wielded by the movement to discredit public schools. The discredited public schools are then replaced by so-called "charter schools" - schools run by private, for-profit companies. (Test scores at these private schools are often higher, because students who can't keep up are simply expelled - more cherry-picked statistics.) This push to privatization is itself linked to Levitt's Chicago-school, neoliberal economics.

Levitt tells us that the data gleaned from standardized test scores proves that some teachers were cheating. But he doesn't tell us that the reason he examined the data in the first place was to find (or manufacture) evidence against public schools and teachers' unions, in support of privatization.

A professional writer like Stephen Dubner knows that this connection must be disclosed. But he does not disclose it.

White-collar crime. In a paragraph about white-collar crime, Levitt and Dubner ask:
A street crime has a victim, who typically reports the crime to the police, who generate data, which in turn generates thousands of academic papers by criminologists, sociologists, and economists. But white-collar crime presents no obvious victim. From whom exactly did the masters of Enron steal?
Really, Steven Levitt, free-market economist? You really don't know from whom the Enron execs stole? Perhaps you should consult Wikipedia. Emphasis mine.
Enron's shareholders lost $74 billion in the four years before the company's bankruptcy ($40 to $45 billion was attributed to fraud). As Enron had nearly $67 billion that it owed creditors, employees and shareholders received limited, if any, assistance aside from severance from Enron. To pay its creditors, Enron held auctions to sell assets including art, photographs, logo signs, and its pipelines.

In May 2004, more than 20,000 of Enron's former employees won a suit of $85 million for compensation of $2 billion that was lost from their pensions.
In the opening paragraphs of this post, I refrained from identifying Levitt's affiliation with the Chicago School of Economics, the free-market-worshipping, public-sector-hating cabal whose political cronies have caused untold suffering around the globe. I wanted to give him the benefit of the doubt. Benefit hereby withdrawn.

Sexual assault. In a paragraph about statistics that become accepted as common knowledge, but which have no basis in fact, Levitt and Dubner write:
Women's rights advocates, for instance, have hyped the incidence of sexual assault, claiming that one in three American women will in her lifetime be a victim of rape or attempted rape. The actual figure is more like one in eight - but the advocates know it would take a callous person to publicly dispute their claims.
Let's pause here while we imagine my eyes popping, my teeth gritting, as I force myself to put down the book and breathe deeply...

Fact: the one-in-eight figure is an FBI statistic. It counts rape and attempted rapes that are reported to a municipal police department. How many sexual assaults are not reported to the police? Estimates range from 50% to 70%. Most reported rapes are those perpetrated by strangers; most so-called date or acquaintance rapes are not reported. How likely is a girl or woman raped by someone she knows - a date, an acquaintance, an ex-husband - to go to the police? Estimates range from a high of 30% to a low of 5%.

The FBI's one-in-eight figure does not include violent sexual assault where no intercourse or attempted intercourse occurred. The one-in-eight figure does not include rape-murders. If a woman is raped and murdered, the crime is entered into the Uniform Crime Reporting figures as a murder, only. Statistically, the rape does not exist. You see where I'm going here.

In truth, I cannot say where the one-in-three or one-in-four figure originated. I believe they are based on many different data-collections over a long period of time, and an extrapolation about unreported rapes. But I can tell you this: the FBI's one-in-eight is merely a piece of the picture. Levitt and Dubner write, "...but the advocates know [that no one will challenge the statistics]". How, may I ask, do they come to this conclusion? Did someone in the anti-violence movement actually tell them, "I know these figures are false, but who's going to challenge me?" Not likely. It's much more likely they are making an unfounded assumption.
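A back-of-envelope calculation shows how the FBI's one-in-eight could coexist with a much higher true rate. Assuming, purely for illustration, that only 30% to 50% of assaults are reported - the flip side of the 50% to 70% unreported range cited above:

```python
# Adjust the FBI's one-in-eight figure for unreported assaults.
# The reporting fractions here (50% and 30%) are illustrative assumptions,
# not official statistics.
reported_rate = 1 / 8

for reporting_fraction in (0.5, 0.3):
    # If only this fraction of assaults is reported, scale the rate up.
    true_rate = reported_rate / reporting_fraction
    print(f"if {reporting_fraction:.0%} are reported: "
          f"about one in {1 / true_rate:.1f} women")
```

That lands somewhere between one-in-four and one-in-two-and-a-half - in the same neighborhood as the advocates' figures, before even counting the categories of assault the FBI number excludes.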

Ignoring their own central premise. Economics, Levitt and Dubner tell us, is based on this premise, asserted as fact: "Incentives are the cornerstone of modern life." Thus they look for hidden incentives as the key to solving various riddles. Yet when they show, for example, that most people (about 87%) don't steal and don't cheat, even when they are highly unlikely to get caught, they never explain what well-hidden incentive causes this cheery result. Because you know what? There just might be human behaviour that is not attributable to economics.

I have other examples, too, but this post is long enough. I love books that make complex ideas accessible, but not at the expense of accuracy. I used to write nonfiction for children, and I know it can be difficult to avoid reductionism. But in a book that claims the conventional wisdom is often wrong, and that better decisions can be made by looking at better statistical evidence, the authors must follow their own mandate, and be both thorough and precise. Levitt and Dubner challenge what they say is wrong-headed conventional wisdom, then they create their own wrong-headed conclusions, using whatever statistics get them there.

Criticisms from their own field

Looking online for criticism of Freakonomics, I found a series of heated exchanges - and an onslaught of posts repeating bits of those exchanges, out of context - that occurred a few years ago. (I was in graduate school at the time, ignoring much of what went on in the world.) This is not newsworthy, but it can't hurt to revisit an internet brouhaha long after the dust has settled.

Writing in American Scientist in 2012, Andrew Gelman and Kaiser Fung gave a long, detailed critique of what they saw as sloppy, reductionist thinking from Levitt and Dubner, who by that time were the pilots of a high-flying media brand. In Freakonomics: What Went Wrong, Gelman and Fung write:
As the authors of statistics-themed books for general audiences, we can attest that Levitt and Dubner’s success is not easily attained. And as teachers of statistics, we recognize the challenge of creating interest in the subject without resorting to clichéd examples such as baseball averages, movie grosses and political polls. The other side of this challenge, though, is presenting ideas in interesting ways without oversimplifying them or misleading readers. We and others have noted a discouraging tendency in the Freakonomics body of work to present speculative or even erroneous claims with an air of certainty. Considering such problems yields useful lessons for those who wish to popularize statistical ideas.
They then offer numerous examples, and say they have many more. Their story was picked up by many blogs and other outlets (although that echo would be dwarfed by a later controversy). The Freakonomics authors responded with Freakonomics: What Went Right, a long, rambling piece that lumps together both founded and unfounded criticism, and never really responds to Gelman and Fung's central concerns.

Gelman then responded on his own blog, with A kaleidoscope of responses to Dubner’s criticisms of our criticisms of Freakonomics, a thoughtful meta-type piece. He writes, in part:
Dubner lives in different worlds than those of Kaiser and me. (Levitt is in between, with one foot in the publishing/media world and the other in academia.) To the millions of readers of his books and blogs, Levitt and Dubner are the kings (and rightly so, they've done some great stuff), and Kaiser and I have the status of moderately-annoying gnats.

But I suspect Dubner realizes that, outside of his circle, he and Levitt have some credibility problems. They have fans but a lot of non-fans too. As I wrote a couple months ago:
About a year ago, I gave my talk, "Of Beauty, Sex, and Power," at the meeting of the National Association of Science Writers. At one point I mentioned Freakonomics and the audience groaned. Steve Levitt is not a popular guy with this crowd. And that's the typical reaction I get: "Freakonomics" is a byword for sloppy science reporting, it's a word you throw out there if you want an easy laugh. Even some defenders of Freakonomics nowadays will say I shouldn't be so hard on it, it's just entertainment.
Now go back a few years. In 2005, Freakonomics was taken seriously. It was a sensation. Entertaining, sure, but not just entertainment—rather, the book represented an exciting new way of looking at the world. There was talk of the government hiring Levitt to apply his Freakonomics tools to catch terrorists.

That's what Kaiser and I meant when we asked "What went wrong?" Freakonomics was once a forum for a playful discussion of serious, important ideas; now it's more of a grab-bag of unfounded arguments. There's some good stuff there but seemingly no filter.
This is what I'm talking about. When a roomful of science reporters treats you like a punch line, the problem isn't with statisticians Gelman and Fung, or with economists Ariel Rubinstein and John DiNardo, or with bloggers Felix Salmon and Daniel Davies (to name several people who have published serious criticisms of Freakonomics). There are deeper problems, some clue of which might be found by reading all these critiques with an eye to learning rather than mere rebuttal. Don't get distracted by your fans on the blog—consider that room full of science writers! Try to recover the respect of Felix Salmon and Daniel Davies; that would be a worthy goal.
And then there's climate change

In Super Freakonomics, the 2009 follow-up to the original book, Levitt and Dubner take what they call "a cool, hard look at global warming". In that segment, they acknowledge the widespread scientific consensus that the earth is getting warmer. They bemoan the difficulty of persuading humans to act in sufficient numbers when the incentives for change are abstract and in the future. And they agree that humans should stop consuming and polluting so flagrantly, and should live more sustainably. They do all those things.

However, they challenge some of the accepted wisdom of how we can best achieve that worthy goal.
That is all.

Yet this small and reasonable poke at conventional wisdom was, apparently, picked up by an environmental blogger and translated into: "OMG Freakonomics authors deny climate change!!1!!".

A shitstorm of posts and tweets ensued. Levitt and Dubner were branded Enemies of the People. And the wingnuts, the anti-environment climate change deniers, in turn, had a field day. Look how the tree-huggers react when you question their orthodoxy! All hail Freakonomics, who have dispelled the myth of climate change! Which, of course, they did not do.

Which leads me to my second, less important Freakonomics-related post: what's wrong with the internet.

About Freakonomics itself, I'd say that fuzzy thinking, imprecise language, undisclosed conflict of interest, and especially the use of statistics without explanation or context (as in the sexual assault example) call into question both the seriousness and validity of this book.

14 comments:

James Redekop said...

I haven't read Freakonomics, but this pretty much confirms the wariness I've felt every time I've read about the book's claims. Unfortunately, it's not an uncommon kind of popular scienceish writing. I get the same feeling from Malcolm Gladwell.

laura k said...

Malcolm Gladwell! I do not understand his popularity and his position as a cultural opinionator. His subject material seems incredibly small, intuitive, and relatively useless.

Although at least he appears to do no harm. These Freakonomics guys, if taken as Truth, can.

James Redekop said...

Gladwell's not entirely harmless. He's very fond of "If you do X, then Y will happen" pronouncements, which leads to people doing X in order to get Y -- even if there's no real reason to think X will produce Y except it's written in The Tipping Point. And then Y doesn't happen, even though they've wasted a whole lot of effort on X.

One of his questionable hypotheses which hits close to home is his claim that dyslexia is good and advantageous because it makes people successful. Evidence: there are successful people who are dyslexic.

Never mind that there are not only many more unsuccessful people who are dyslexic, many of those people are unsuccessful specifically because dyslexia has hampered their ability to learn -- compounded by going through an educational system which, until recently, dismissed dyslexia as "being slow" and still has problems dealing with it.

Salon put him tenth on their 2013 Hack List.

(Naturally, I misspelled dyslexia when doing the search for "Gladwell dyslexia" to make sure I remembered his argument correctly.)

M@ said...

Yep, sounds exactly like Gladwell to me too. I have stayed away from Freakonomics as well.

There has been a lot of talk in recent years about why good science journalism is so unsuccessful, and bad science journalism so prevalent; this is the other side of that coin, pseudo-science journalism that is insanely popular. I wonder if the current Upworthy/Buzzfeed/Viralnova/etc clickbait trend is the next step from this: "everything you know is wrong!" is a pretty compelling frame for shitty, mundane ideas, it seems.

By the bye, some people have been gunning for Malcolm Gladwell for some time, and see him as doing a lot of harm. (I'm not as convinced as they are, but I'm on their side.) And oh look, Levitt has a profile with them too.

laura k said...

Thanks for the info on Gladwell! I should not have assumed he was harmless. Hell, I never imagined Freakonomics was harmful, either.

Those click-baits - "Wait til you see what this mother of 3 learned at the grocery store! The answer will shock you!" - are SO irritating! If you won't tell me what you are, I'm not clicking on you. Life's too short.

James Redekop said...

The sad thing is, there's a lot of great science writing out there. Neil DeGrasse Tyson's just the most famous of many authors writing excellent stuff these days. Simon Singh, Mary Roach, Phil Plait, Ben Goldacre, Jon Ronson, Maggie Koerth-Baker, etc, etc. And those are just names with multiple titles in my ebook library.

laura k said...

Your statement applies to every genre. There is excellent writing of every type, and it is rarely - if ever - the most popular in its genre.

Mass popularity usually requires a sacrifice of quality and depth. Not always, but usually.

And not just in writing, of course. In every art.

James Redekop said...

Sturgeon's Law in action

laura k said...

Absolutely.

Allan and I have been saying that at least as long as we've known each other, and I imagine many other people who value quality have been saying the same thing, for decades, for centuries, never imagining that one day someone would attach a name to it - as if Sturgeon's thought was original.

I probably said words to this effect the last time you brought up this Sturgeon! :)

M@ said...

Of course there's plenty of good science writing out there. I'm talking about science journalism. In valid science, when you take any given statement, the one thing you can bet on is that it's a lot more complicated than that. Whatever the statement, whatever the science, things are more complicated than you can cover in a sixty-second bit on the CBC or CNN, or in a 150-word box in the newspaper.

But science journalism won't and can't concern itself with that. So a study is published, the journalist boils down all that complexity into an essentially clickbait title. ("We're one step closer to cold fusion." "Left-handed people are more prone to strokes." "Eight cups of coffee a day can prevent colon cancer.") The journalism is a necessary evil for the scientists -- public interest equals funding -- but the public is actually less informed in the process.

And on clickbait, I'm thinking of starting to reply on Facebook posts with affirmation -- "You won't believe what this turtle did!" and I reply "I DO NOT BELIEVE WHAT THAT TURTLE DID." It would amuse me but it would probably lose me a lot of friends. Well, maybe it's for the best.

laura k said...

So true, and good examples.

It may have something to do with the fact that people with the brains, talent, training (whether self-taught or classroom-taught), and drive to do quality journalism want to be paid for their efforts. And very very few venues value those services enough to pay for them.

Good journalism takes time, too. It's faster and cheaper to republish a press release.

I'm sure there are other factors, but the almost total absence of well-paid journalism jobs must figure prominently.

James Redekop said...

Of course, one of the biggest problems with science journalism these days is that it is rarely done by science journalists. Most news outlets have gotten rid of all their dedicated science reporters and just hand science stories off to their Lifestyle correspondents & the like.

laura k said...

True. And even worse, most journalism is being done by interns or inexperienced stringers, because the jobs don't pay. Inexperienced is fine for simple reporting, that's how you learn, but for complex issues, it does not work.

James Redekop said...

More Gladwell problems: the 10000 hours number he loves to cite isn't supported by more recent research.