Friday, October 22, 2010

Reviewing the Evidence

I know I’ve complained more than once (say, here or here) about problems with how medical research is conducted and paid for. The problems I’ve occasionally pointed to are known, if not always discussed. The consequences could be quite serious. You have to wonder who’s paying attention.

One answer is John Ioannidis and those who study with him. Dr. Ioannidis of the University of Ioannina in his native Greece is a meta-researcher: one whose expertise and interest lie in critiquing the work of other researchers. He is profiled by David Freedman in the latest issue of The Atlantic as one of its "Brave Thinkers," and the article is well worth reading.

Dr. Ioannidis has combined his expertise in medicine with a talent for math to look closely at what some of his peers have presented and gotten published. When he looked, he had legitimate concerns about what he found. In one analysis published in JAMA, the Journal of the American Medical Association, he examined 49 well-regarded studies, 45 of which claimed to have demonstrated effective therapies. He found that 34 of those 45 studies had subsequently been retested, and of those 34, 14 (roughly 41 percent) were either “wrong or significantly exaggerated.”

Beyond the standard concern that funding for research biases the questions, and therefore the results, Dr. Ioannidis suggests another problem. As Freedman describes it,

Imagine, though, that five different research teams test an interesting theory that’s making the rounds, and four of the groups correctly prove the idea false, while the one less cautious group incorrectly “proves” it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal, and you end up hearing about on the evening news? Researchers can sometimes win attention by refuting a prominent finding, which can help to at least raise doubts about results, but in general it is far more rewarding to add a new insight or exciting-sounding twist to existing research than to retest its basic premises—after all, simply re-proving someone else’s results is unlikely to get you published, and attempting to undermine the work of respected colleagues can have ugly professional repercussions.

We might consider this the “headline bias”: as is commonly said in the news business, “if it bleeds, it leads,” and the sensational is more likely to get attention, even, apparently, in peer-reviewed journals. Caught between the pressure to publish on the one hand and the need to fit in with one’s professional peers on the other, both researcher and reviewer can get caught up in highlighting the new and different, even when there is better information to be had in replicating earlier studies, whether to affirm or refute them.
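The arithmetic behind Freedman’s five-teams scenario is easy to check with a toy simulation. The sketch below (in Python) counts how often at least one of five teams “confirms” a theory that is actually false; that positive result is the one most likely to make headlines. Every number in it is an illustrative assumption on my part, not a figure from the article: a 5 percent false-positive rate for the four careful teams and a much higher one for the single less cautious team.

# A toy simulation of the selection effect Freedman describes: several
# teams test the same false theory, and only "positive" findings get
# publicized. All parameters below are illustrative assumptions, not
# figures from the article.
import random

random.seed(42)

N_TEAMS = 5            # teams independently testing the same (false) theory
FALSE_POSITIVE = 0.05  # chance a careful test wrongly "confirms" it
SLOPPY_BOOST = 0.50    # extra false-positive risk for the less cautious team
N_WORLDS = 10_000      # number of simulated "literatures"

headline_wrong = 0
for _ in range(N_WORLDS):
    # Four careful teams plus one less cautious one, per the scenario.
    results = [random.random() < FALSE_POSITIVE for _ in range(N_TEAMS - 1)]
    results.append(random.random() < FALSE_POSITIVE + SLOPPY_BOOST)
    # The headline goes to a positive finding if any team produced one.
    if any(results):
        headline_wrong += 1

print(f"Share of runs where the headline is a false positive: "
      f"{headline_wrong / N_WORLDS:.2f}")

Under these made-up numbers, a false theory picks up at least one “confirmation” in well over half of the simulated runs. That is the selection effect in miniature: readers see the one confirmation, not the four quiet refutations.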

The profile of Dr. Ioannidis is an interesting article, and it makes explicit the importance of this kind of meta-research. We claim early and often that we want to make our decisions based on evidence. The assumption, of course, is that the information we have is accurate and dependable – that it is, in fact, evidence. If we find it isn’t – when we find it isn’t – we need to take one of the most difficult steps in human experience: we need to step back and think again.
