Richard Harris has covered science, medicine and the environment for National Public Radio since 1986. He has traveled the world, from the South Pole and the Great Barrier Reef to the Arctic Ocean, reporting on climate change. The American Geophysical Union honored him with a Presidential Citation for Science and Society. In 2014, he turned his attention back to biomedical research and came to realize how the field was suffering. Too many scientists were chasing too little funding. That led him to take a year-long sabbatical at Arizona State University’s Consortium for Science, Policy & Outcomes to research and write Rigor Mortis.
Full Transcript
Rosemary Pennington: Reproducibility is an important part of the scientific process. Find something interesting, or compelling, or groundbreaking once, great, but for the work to have legs, other scholars have to be able to reproduce your work. If they can’t, then there’s a problem. The problem of reproducibility is the focus of this episode of Stats and Stories, where we explore the statistics behind the stories and the stories behind the statistics. I’m Rosemary Pennington. Stats and Stories is a production of Miami University’s Departments of Statistics and Media, Journalism, and Film, as well as the American Statistical Association. Joining me in the studio are regular panelists, John Bailer, Chair of Miami’s Statistics Department, and Richard Campbell, former and Founding Chair of Media, Journalism, and Film. Our guest today is journalist Richard Harris. Harris is an award-winning correspondent on NPR’s Science Desk, and has covered everything from SARS, to climate change, to the aftermath of the 2011 Japanese tsunami. In 2017, Harris published the book, now out in paperback, Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions. On that uplifting note, thank you so much for being here this morning, Richard.
Harris: It’s my pleasure. Good morning.
Pennington: Just to start us off, generally, what made you decide to write the book?
Harris: Well, I was interested in, sort of, what was going on in the world of biomedical research. I hadn’t been covering that particular topic for about a decade when I was asked to go back and do it again, and when I started noodling around, of course, being a journalist, one thing I decided to do is follow the money. And the money told me kind of an alarming story about the support for biomedical research in this country, which was that, between 1998 and 2003, or thereabouts, the amount of money for the National Institutes of Health, which is the major funder of biomedical research, that amount of money from the federal government doubled. And then Congress said, oh, we’ve done our job; we don’t have to worry about it anymore. And they basically flatlined the budget. And for the next 10 years, the numbers stayed flat, but of course, research got more and more expensive. So, in spending power, the amount of money that was available for biomedical research declined by 20 percent. So, after this huge sugar rush of having all this new money, when people were building labs all over the place, I thought, this can’t be good, and so I started reporting for NPR about what the consequences of that were, and that led me into this bubbling controversy – the bubbling issue that is sometimes called “the reproducibility crisis”. I don’t particularly like that term, but it was out there and floating around. And so, I realized after doing some stories for NPR, there was a book in this. So, I took a year off to write the book.
Bailer: So, you don’t like the term reproducibility crisis? So, I’d like to give you an opportunity to rename it.
Harris: OK, yeah, that’s a good question. It’s easier to complain about something, of course, but the – I don’t like the word crisis, because in fact, it’s an awakening. You could call it a reproducibility awakening, that basically, these issues are not at all new to science. We’ve become aware of it, and I actually think that’s a good thing, despite the scary sounding title of my book, which by the way, I actually wanted to call, Science Friction, because I think this is slowing down science, but not stopping science, and is not a crisis. I think what we really are in a position to do now, is recognize that these are problems that can be addressed, that if we pay attention, we can do better, and we can accelerate the progress of science, instead of having these problems slow it down.
Campbell: You talked – I watched a Google Talk where you talk about science education and some of the problems with it, and could you talk about ways that that might be improved, and what some of those problems are, and maybe here is where you could talk about some of the problems we have with the pressure for academics to publish as well.
Harris: Right, well that’s a – multiple big questions, but yeah, I think part of what happens is, scientists go into the field – particularly, my focus here is on an area of science called preclinical biomedical research. So, this is biomedical research that is not, by and large, involving human beings; it’s mostly involving laboratory results, but these laboratory results provide all of the ideas for potential new drugs, new treatments, new ways of understanding disease. So, it’s an important part of research, but it’s this area that has really been clobbered by this funding problem I mentioned earlier. What it does, is it basically creates a very hyper-competitive environment for people. And the reality is, the education of postdocs and graduate students is not completely up to snuff. Biologists, I think – in the old days, biologists went into biology because they didn’t have to do math. There’s an old saying that, if you have to do statistics, think of a better experiment. So, that was back in the day. I know that must warm your heart. [LAUGHTER] But the reality is that biologists don’t get a really solid grounding in math. It’s not part of the curriculum for most scientists, at least in this field, and nor do they really get great experience in learning methodology, other than from the scientists in whose labs they work. And of course, half of all biologists are below average, right, just by definition. So, this is an issue, that the academic education of young scientists isn’t really robust enough. And the NIH actually said, oh, let’s take the best training out there for teaching methodology and stuff like that and replicate it across the country. So, they put out a request for information: who’s got the best courses in this stuff? And they basically got crickets back. Nobody really had a course that they could identify as a really good, reliable class on scientific methodology. People do things like, you ask, how many mice did you use in your study? They say, I used six. And you ask, why did you use six? They say, well, that’s what everybody uses. And it’s not based on some math that you could do to figure out what would be an appropriate sample size, considering what effect you’re looking for, and so on. So, that’s the educational piece of this.
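To make the sample-size point concrete, here is a minimal sketch – not from the interview, and assuming Python with the statsmodels library – of the kind of power calculation that would justify a choice of animals per group, rather than defaulting to six because "that's what everybody uses":

```python
# Minimal power-analysis sketch (illustrative; effect size and thresholds are assumptions).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Suppose we expect a large effect (Cohen's d = 1.0) and want 80% power to detect it
# at the conventional 5% significance level in a two-group comparison.
n_per_group = analysis.solve_power(effect_size=1.0, power=0.8, alpha=0.05)

print(f"Mice needed per group: {n_per_group:.1f}")  # roughly 17 per group, not 6
```

With a smaller expected effect, the required group size grows quickly, which is exactly why an unexamined default of six animals is a risky habit.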
Bailer: I’d like to just rewind us back a little bit to talk about, kind of, the – you’re talking about this issue, and I don’t know that we’ve ever really defined it yet, this idea of this reproducibility awakening. See, now I’m using your expression [LAUGHTER].
Harris: Good. All right.
Bailer: So, we’re rebranding as we speak here, Richard. So, as you think about this reproducibility issue and awakening, you identify some of the causes, including funding, which you’ve already mentioned, but I’d like you to just take a minute and give us a quick definition of what you mean by this issue and then some of the reasons why it’s emerged.
Harris: Right, well, as Rosemary mentioned at the outset, science needs to be reproducible. You want to be able to have somebody else do your experiment, to make sure that your results are real, and this is not, unfortunately, rewarded very much in the world of science. You get rewarded for the big flashy results and not necessarily for trudging along behind and doing equally important work, one could argue, which is validating whether somebody else’s work actually holds up. People really want that shiny new result as a way of getting career advancement, funding for future projects, and so on. And, unfortunately, this really important part of science is not very well rewarded. And I will say that, just because a paper is not reproduced, or even if somebody tries and can’t reproduce it, it’s not, at that point, evident, necessarily, who’s right and who’s wrong, right? So, maybe the person who’s tried to reproduce it is not correct. Science is an ongoing process, right? And in the long run, we eventually figure out what’s closer to the truth. But in the short run, there’s a bit of a fog of war, if you will, about information. What’s really right? What’s really not? People cling firmly to their own ideas and theories, and hopes and aspirations, and so on, and of course, those are very human emotions that all of us share, no matter what field we’re in, but they can actually deceive the scientist inadvertently into believing something – basically to see something that they want to see, but that is not necessarily there. So, the scientific process is not without ups and downs, and backs and forths, and two steps forward, one step back, etcetera, and we’re seeing this play out here in reproducibility. And it’s hard to know exactly how much of this is going on, but there are a lot of suggestions that, in this area of biomedical research, about half of everything that’s published, actually, is not going to stand the test of time. It’s a pretty big number.
Campbell: So, in the academic journal field in this area, which is not my area, is there resistance to journals publishing reproducibility studies, because there’s not glamor in it? So, if you want to reproduce somebody else’s results, try to get that result, are the journals accepting of this, or is this an area where the journals have some culpability as well?
Harris: Yeah, it’s a mixed bag. I think scientists think that it’s much harder to publish this stuff than it is, and I think they may have given up trying, because the major journals, Science, and Nature, and Cell, which are sort of the marquee journals in this world of biology, those journals really don’t want to publish this kind of stuff, because they gauge their success based on something called an impact factor. And so, they average the impact of any given article, how many people cite it and so on, and they know that these kinds of studies do not get a lot of citations. So even if it’s an important study, they figure it’s not going to get many citations; it’s not going to help our impact factor. And that’s actually a systemic problem that’s pretty egregious, and really, it’s because they’re for-profit entities. They’re not here to serve the purpose of science, they’re here to make money, and they make lots of money. There are other journals, though, that do publish these things. There’s a whole suite of journals, called the PLOS journals, which are not-for-profit, and they do, actually, value this kind of stuff. And so, there clearly are places where you can get it published and get it out in the literature and hope people can find it. So, it’s not a black and white issue, but it clearly is – it’s been a complication, and it’s been a driving factor in this as well.
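For readers unfamiliar with the term, the standard two-year impact factor is just an average citation rate: citations this year to what a journal published in the previous two years, divided by how many citable items it published in those two years. A minimal sketch with hypothetical numbers:

```python
# Illustrative impact-factor arithmetic (the counts below are made up).
citations_this_year_to_prior_two_years = 12_000   # e.g., 2023 citations to 2021-2022 papers
citable_items_in_prior_two_years = 2_400          # papers the journal published in 2021-2022

impact_factor = citations_this_year_to_prior_two_years / citable_items_in_prior_two_years
print(f"Impact factor: {impact_factor:.1f}")  # 5.0
```

Because it is an average, a class of papers that predictably draws few citations, such as replication studies, pulls the number down, which is the incentive Harris is describing.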
Bailer: You know, it seems like some of the papers that have come out, whether it’s in psychology or in the sciences, where there’s been a failure to reproduce published work, have had a big splash.
Harris: That’s true. Some of those have had a big splash, and actually, probably the most notable paper in psychology, which was an attempt to reproduce about a hundred popular papers and was only really able to replicate about a third of them, was actually published in Science. So, if you get a flashy enough result, even if it’s about reproducibility, you can get it in a big journal, but that was a pretty exceptional study. It wasn’t just, sort of, a one-on-one, let’s see if we can reproduce an individual study.
Bailer: You mentioned some of the marquee journals – and whether they’re the biology marquee journals that you mentioned, or some of the medical journals, this idea of embargoing individual study results and then releasing them with a lot of fanfare seems to really say that, boy, it’s better to be first to the finish line, and it really reinforces that.
Harris: Absolutely, it does, and that’s unfortunate. In an ideal world, you might want scientists to be more cooperative and less competitive, but on the other hand, that’s not the way we fund science. That’s not the way we grant tenure. That’s not the way that the system works. So, yeah, a lot of these issues are not because scientists are behaving badly. I think, by and large, scientists are not behaving badly, but I think the system sets us up, and it’s a culture that, unfortunately, creates a lot of these problems, and the solutions ultimately involve trying to find a way to correct this culture. That’s a tall order, but people are working on it.
Pennington: You’re listening to Stats and Stories, and today we’re talking with NPR Science Correspondent, Richard Harris.
Campbell: I’m kind of – this is a show, Richard, that’s about the relationship between stats and stories, and you’re a journalist, and you talk about scientists building stories that approximate truth. I think you’re a critic – you believe that there’s no such thing as absolute truth, and I talk about the same thing when I teach journalism students about objectivity and its limitations, and what we ultimately have as journalists is storytelling, and how do you tell the, sort of, best story you can. Can you talk about how you think about storytelling in relationship to telling the general public and your NPR audience about science and the work that scientists do?
Harris: Sure. I mean, science is ultimately a human endeavor. It’s a quest for knowledge. It’s exciting. It’s interesting. It’s often, or it can be, useful, with medical advances and things like that, but as you say, truth with a capital T is a fraught topic, because science is always improving on our knowledge but is never reaching some idealized point of complete understanding, right? But that’s not to say it’s not useful, or that conclusions should all be doubted. I mean, for example, I spent a lot of my career writing about climate change, and even though those are projections – so you could never know for sure what the future’s going to be like – there are many lines of evidence that all point in the same direction, whether you’re looking at the history, or our understanding of how atmospheric chemistry works, or up and down the line. You can build a very strong case that this is something that we should be worried about, and we should be taking steps to deal with. So, I want to distinguish that from people who would say, well, if nothing is knowable, we don’t have to do anything. That’s clearly not the case, and I just remind people that if they go too far down that road, well, they certainly shouldn’t get on an airplane, because they would have no reason to really believe that physics will actually hold the airplane up for its entire flight. [LAUGHTER] So, yeah, but I do approach storytelling as a quest for the truth, as best I can, and I ask people to tell their own stories about how they got interested in an idea, how they’re pursuing it, and I talk to other people about why they may not believe it, or what questions they have, and so on. And of course, I strive never to say, well, this is the final, final, final word on anything, but some studies are stronger than others, and it’s part of my job to assess that as well, and say this is really pretty good evidence for this idea, but I certainly don’t use the word proof.
Campbell: So, tell us your story about how you got into this, and –
Harris: Well, let’s see, this – now you’re talking ancient history [LAUGHTER]
Campbell: Well, there aren’t enough Richard Harrises in the world doing this kind of work. Those of us in journalism know some of the problems that journalists have in telling these kinds of stories. So, I’m interested in that journey, how you got to do this.
Harris: Sure. Well, I was an undergraduate biology major, that’s my degree, and I love thinking about science, but I didn’t particularly like working in the lab, and even back then, this was in the late 1970’s, the idea of the rat race to get grants and so on already seemed a little bit less fun than actually just being able to spend most of my time just thinking about science. And science journalism is great, because you’re also not stuck in one particular idea or category. So, I’ve covered everything in science over these many years. After I graduated from college, I went off and got a succession of jobs – first at a little newspaper, then at a medium-size newspaper – and I’ve been at NPR now since 1986. So, that’s why I say this is ancient history, but yeah, basically taking the sensibility of science with me, and understanding how scientists think about science, really helps me tell these stories.
Bailer: So, now I’m curious to go from ancient history to the present. How has reporting on science changed over the course of your career?
Harris: Well, the internet was invented; that helped a lot [LAUGHTER]. Yeah, I remember back in the day when we were covering the announcement of the Nobel Prize, we would go to a book called Who’s Who and look people up in the library to see who they were – that was step one, figuring out who these people were. But yeah, I mean, there’s just so much more availability of information, though some of the fundamentals remain the same. For example, when I’m looking at a study that’s been published and trying to decide whether it’s worth following up on, one thing I do is look in the references and see what other studies have been cited, and so on. Now, of course, it’s much easier to track down those individuals – we’ll even read many of those papers and figure out what was said – but the idea that’s still durable is that science builds on the shoulders of other people who’ve done work, and sometimes it involves people who disagree. So, it’s always nice to get the reference if it’s like, some people say my work is junk, you know, reference number six, and then I think, oh, maybe I’ll call reference six and see what the backstory is. Although, of course, people don’t ever put it in that pointed a term. It’s more like, there’s been some discrepancy about the interpretation of these findings. Still, that’s reference six; I’m going to it. So, some of the tools have changed, but the basic reporting, I would say, has not.
Bailer: Yeah, certainly science has changed a fair amount in terms of things that have emerged too. [LAUGHTER] Yeah, thanks for the internet note. [LAUGHTER] Yeah, I had noticed something like that too. Hey, I wanted to ask you, sort of as a follow-up on some of the earlier conversation, related to the idea of talking about science and presenting science, and the idea that there’s uncertainty, and that there might still be some things that remain to be known. My frustration is often that I think results are reported with a false sense of certainty and precision, when in fact, there’s often more noise in the system, and I just wonder what you think about that, or what are some of the ways that you challenge yourself in terms of, how am I going to convey that this number isn’t known with exact precision, that there’s some noise in it, some --
Harris: Yeah, let me take two swipes at this. The first of which is that editors always say, well, what should we tell people to do? And that pushes you, particularly for health studies, to say, well, this means you should eat more oatmeal, or this means you should eat less oatmeal, or whatever the finding of the day is, without recognizing that there’s so much noise, particularly in nutrition studies, that any single study really should not be the basis for behavior change, with extraordinarily few exceptions. The other thing I do is, I really look at the strength of what’s being reported, and often scientists will like to couch their results in a way that makes their results seem bigger than they are. They will often say, this doubled the risk of this event, and so we should take serious consideration of it, but I will unpack that and say, instead of looking at the relative risk, I will look at the absolute risk and say, it doubled the risk from one in a million to two in a million. If that’s the case, it’s like, I’m still not going to be concerned about this. So, digging into the numbers is something I do myself, and I encourage all of my colleagues to do, to really make sure that you’re characterizing things not in a way that just makes it seem as big and impressive as possible, which is often a temptation in a newsroom or anywhere else, including in a scientific lab where they want their results to seem as impressive as possible. I also think that’s not the best way for people to actually be able to embrace and understand the magnitude of what they’re talking about.
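A small worked example, with hypothetical rates chosen only to mirror Harris’s one-in-a-million figure, shows why the relative-risk framing sounds so much more dramatic than the absolute one:

```python
# Relative vs. absolute risk with made-up rates matching the example in the interview.
baseline_risk = 1 / 1_000_000   # risk without the exposure
exposed_risk = 2 / 1_000_000    # risk with the exposure

relative_risk = exposed_risk / baseline_risk       # 2.0 -> "doubles the risk"
absolute_increase = exposed_risk - baseline_risk   # 0.000001 -> one extra case per million

print(f"Relative risk: {relative_risk:.1f}x")
print(f"Absolute increase: {absolute_increase * 1_000_000:.0f} extra case per million people")
```

The same finding is a "100 percent increase" in relative terms and one additional case per million people in absolute terms, which is why Harris pushes reporters to dig out the absolute numbers.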
Campbell: In spite of the title of your book, Rigor Mortis, you’re very – you seem hopeful. You talk about a new generation of young scientists that are trained differently. There’s a movement – an open science movement – out there. Even, I think, the work that you do in the, sort of, public arena has brought to light some problems that could open this up a little bit more. Talk a little bit about that.
Harris: Yeah, I think this gets back to the reproducibility awakening. Once people are aware of something like this, they can then ask, well, what tools can we use to fix this? And some of the tools are very simple. For example, I talk a bit in my book about contaminated cell lines. People think that they’re working with cells in their Petri dishes that are from, say, a breast cancer, when in fact it’s a completely different cell. It may be a melanoma cell. Back in the day, when cell culture studies started, people couldn’t really tell these things apart, but now there are powerful and not very expensive tools to actually go ahead and figure out the identity of those cells. And so, there’s now an expectation, actually, from the NIH that scientists should validate their cell lines and make sure that they’re working with the cells that they think they’re working with. So, that’s a simple step to help reduce the amount of noise that’s generated, the accidental information that’s coming out. Again, not malicious on the part of the scientists, just unaware. And people are thinking about this in terms of all sorts of other things, whether it’s thinking about better ingredients, or thinking more carefully about what statistical methods they should be using, or experimental designs, to make sure that they are setting up an experiment to get a credible and meaningful result. And that’s actually one thing that encouraged me as I was doing my reporting, to realize that there was actually a fairly similar crisis to this back in the 1980’s and into the 1990’s around clinical medicine. These are studies involving human beings, and many of these studies were too small. Even if they were finished, they didn’t produce results that were credible, because the experiments weren’t designed well, and so on. And sometimes people would change their target halfway through. They would say, I’m studying this, and if it turned out that that wasn’t coming out well for them, they’d say, oh, well, I meant to say I’m studying something else, and they would publish the studies. But there’s been a lot of progress over the years to address those issues in clinical research, and while those problems still persist, they are much less present than they were before. These studies are designed carefully. Most importantly, or one of the most important things, is that scientists often have to register their studies in a database called ClinicalTrials.gov, and this basically says, up front, here’s what my design is, here’s what I’m looking for, for my endpoints, so if I change my endpoint, you will know that I have been monkeying around with the scientific process here, perhaps inappropriately, and ultimately, here are my results. Although many people actually ignore the dictum that they must report their results, even if they’re negative results, they’re supposed to put them into ClinicalTrials.gov, so people can say, well, here was a failed experiment; even if researchers don’t feel like ever publishing their results, they are accessible to people who care to dig for them. So, that’s an idea that has proven quite useful in clinical medicine, and people are experimenting with that in other areas – in the preclinical area of biomedical research, and in the social sciences, and so on – saying, hey, this is an idea that we should be adopting as well, and that’s starting to happen.
Pennington: What advice would you give for a young journalist who wants to be a science reporter? We, particularly here in our journalism program, sometimes have students who are scared of statistics – that’s the nice way, I think, to say it. So, what advice would you give to a reporter who wants to cover science and cover it well, about how to do that?
Harris: Yeah, well, I think it helps a lot to really understand how science works. So, I think that having an undergraduate degree, at least, in an area of science is a valuable idea. We are increasingly seeing in this field people with PhD’s, because they go through the entire process I described earlier in academic science and they realize, the rat race is too much, or I can’t find a job in academia after this, and so then they say, well, what other career could I have, and some people choose science journalism. So, we actually see some PhD’s now in this area, including a couple of my colleagues at NPR who have PhD’s. So, I think, obviously, if you’re really afraid of numbers, science is probably not a really good choice for you as a coverage area, or business, I would add, but also know that many people have had long and successful careers even though they may have struggled to figure out how to calculate a percentage. So, it’s not an ironclad thing, but I think it’s an incredibly powerful, useful tool, and I think it makes people better journalists if they can do that. It doesn’t necessarily mean you have to get a degree in biostatistics or something like that, but you should be comfortable enough not to be scared off by looking at the way science is published, which includes looking at the statistics that are in a paper and knowing how to, for example, figure out what the absolute risk is, as opposed to just the relative risk, which, as I mentioned earlier, I think is a very important way of characterizing the strength and the importance of findings.
Bailer: You know, I’m delighted to talk to someone who titles a piece they worked on, Statisticians' Call to Arms: Reject Significance and Embrace Uncertainty! I’ll join you in that parade. One thing I wanted to ask you about, though, as a sort of follow-up on telling stories about pretty complicated methods and situations, was a piece you did on artificial intelligence, predictive modeling, and health screening. I’m thinking that you’ve got some real challenges to try to weave that together, and I was trying to think about preparing, maybe, a stats student to tell this story, or a journalism student to tell this story. So, what are some of your recommendations on how you approach such an interesting and challenging story?
Harris: Yeah, well, I guess the first step is to find the story, right? I mean, we know that artificial intelligence, as a topic, is mushrooming, and there’s no end of people – I probably get three press releases a day from people saying, oh, we have a new AI device that does X, Y, Z. So, there’s no shortage of claims and hype, I will say, around artificial intelligence, but my challenge is to find human beings who are engaged in this, either scientists who are doing interesting work, or individual human beings who are being screened by a device that is, basically, an artificial intelligence agent instead of a doctor, and dig into it. And then, you know, look around, find the people who are excited, but also find the people who will raise notes of caution about these things, and then of course the goal is to balance that, to make sure that both are captured and given their due weight. It doesn’t mean that equal weight is necessary, but that’s part of the judgement of the journalist – figuring out whether you believe the skeptics more than you believe the advocates, or the other way around, and acting accordingly.
Pennington: Richard, that’s all the time we have for this episode of Stats and Stories. Thank you so much for being here.
Harris: It was my pleasure.
Pennington: Stats and Stories is a partnership between Miami University’s departments of Statistics and Media, Journalism, and Film, and the American Statistical Association. You can follow us on Twitter, Apple Podcasts, or other places you find podcasts. If you’d like to share your thoughts on the program, send your email to StatsAndStories@MiamiOH.edu or check us out at StatsAndStories.net, and be sure to listen for future editions of Stats and Stories, where we explore the statistics behind the stories and the stories behind the statistics.