Let’s be pragmatic about statistical mistakes
Tom Siegfried opened his talk with, “It’s time to stir things up.” Balance, he said, is a bad idea: it’s a journalist’s responsibility to weigh the evidence and draw a conclusion. Merely citing sources on both sides implicitly equates good science with bad science or pseudoscience. Science writers should strive for reliable information above all else, and that necessarily requires a deep knowledge of the field.
Which raises the question:
Whose responsibility is it to make sure that the public gets reliable information?
There certainly should be accountability at every level for mistakes made. But if the public at large, undergrad psychology majors, and in many cases the scientists themselves can’t grasp statistics, how can journalists be expected to catch everyone else’s oversights?
Practically speaking, the battle to educate science writers to evaluate all the evidence themselves may not be worth fighting. Asking journalists to know more than scientists about their topics is certainly unreasonable. In some sense, they have to be able to trust their sources, or at least the consensus among many of their sources.
Adam Frank noted that it must be difficult for journalists to know which scientists to trust. It takes scientists themselves a while to learn who rushes to publication and whose work is consistently careful and good.
Alexandra Witze rejoined that there are at least a few areas (climate change, stem cell research) in which journalists can and ought to cultivate trusted sources.
Charles Petit pointed out that these problems with properly weighing the evidence provide journalists with a chance to emphasize the self-corrective nature of science. Science journalists should embrace botched articles as a chance to tell people what science is all about: finding and fixing mistakes.
Posted in Science journalism