Covering Science in Cyberspace

March 12, 2007

What is science?

Talk about getting back to the basics ... the first session of our gathering took on nothing less than defining what science, and writing about science, are all about. There’ll be a comprehensive report provided later by the good folks from the Annenberg School, but this is just a quick rundown - aimed as much at testing the blog tool and providing a couple of links as anything else.

Don Kennedy, editor-in-chief of the journal Science, started out by noting the “terrible public confusion” over how science works. He ran through the main steps of the process - hypothesis, experimental testing of that hypothesis, efforts to falsify the hypothesis based on the facts, etc. He also provided a rundown of the types of research that Science finds most intriguing: work that confirms a hypothesis that’s important but hadn’t been confirmed before (say, global warming?), counterintuitive breakthroughs (say, the Chicxulub asteroid) and explorations of new territory that the editors “jump out of our shoes about.” Kennedy said the “secondary market [that is, journalists] is pretty good at picking out what matters and what doesn’t matter.”

Among the links to follow up, particularly if you want to address the ever-popular “theory vs. fact” debate, is the National Academies publication “Teaching About Evolution and the Nature of Science.”

Alex Witze, senior news and features editor at Nature, weighed in with her outsider-turned-insider view of the scientific publishing process. “Journalists in general do not recognize how sloppy science can be,” she noted. Peer review, for example, is “no inoculation against stupidity.” But there are ways to fight the madness.

She recommended getting up to speed on statistics, citing the classic work “News and Numbers,” and she recommended using a diverse range of sources as your truth squad for scientific claims (rather than always turning to, say, John Pike, Keith Cowing and Art Caplan).

She also put in a plug for Connotea, a social recommendation site for scientists a la Del.icio.us.

Michael Lemonick, a veteran of Time magazine, took on the “iconoclast” role: He cited the example of an astrophysicist who thought he had discovered planets circling a pulsar - then, weeks before giving a big presentation, found a flaw that led him to reverse his views. He was persuaded to give the presentation anyway, and won a standing ovation for it.

Similarly, journalists should have a strong ethic of questioning what is thought to be known. “I don’t think we question ourselves enough,” he said. And in keeping with the iconoclastic point of view, Lemonick questioned whether having more science-friendly editors would solve the problem. “The proposition that ‘if only we could do it more, things would be better’ ... I think we’re kidding ourselves.”

In the discussion part of the session, we talked about the implications of a world in which embargo times were more fluid, or really hardly existed at all ... as well as some of the cautionary tales about going off half-cocked in science reporting. Some examples and links:

- The arXiv site for physics papers, which is eroding the embargo paradigm.

- A controversial study about the origins of corn.

- Pyramids made out of concrete?

- The Bosnian pyramid.

By the way, all these links were picked out by me: some were recommended by presenters, and some were not.

March 12, 2007

Science is Messy

The question of the morning is “What is Science?”

Don Kennedy, editor of the journal Science, says “It’s a process.” 

“It’s a dirty process,” clarifies Alexandra Witze, news editor at Nature. “Science is quite messy.” 

As a graduate student in Neuroscience, pretending for the next few days to be a journalist, I can tell you firsthand that Witze is right.  Performing a scientific experiment often seems akin to making a soufflé without a recipe. It usually doesn’t work.  You make a hypothesis, you figure out how you might test it, and you try it.  It doesn’t work.  Not the first time, not the second time, not the third time.  You re-assess your experiment. You try it again, and again… and again. Perhaps, if you are lucky, you get a result. Perhaps it supports your hypothesis. Perhaps not. 

Someday, after several years of work, you might have enough data to publish a paper on a tiny piece of a puzzle, a small question within the world of science, which might elucidate something that has some small relation to some bigger question.  As a scientist, this is how I see science.

In journalism, it’s what’s new that matters.  An article that was published in Nature yesterday is something that scientists “just discovered.”  In fact, those scientists were probably working on that discovery for years, and have known for months the conclusion that today is “news.” 

Yet even when scientists have amassed enough evidence to support their hypothesis and publish in a major journal like Science or Nature, their findings still may be proven wrong. 

As Kennedy said, “Anything published in Science is ready for reversal.”

Or, as Witze put it, “You can publish in Nature or Science and it can be total crap.”

This idea of “science as process” creates a dilemma for those who need to report “science news.” They must share new findings with the general public. They must understand the process behind those findings. They must help convey that process in their writing, fitting a new finding into the context of the history of the field. And they must accomplish all this in 400 words. 


March 12, 2007

Science Reporting—the Messy, Warty Truth

Science is a road, not a destination.  Although the morning session was intended to define science (a challenge, to be sure), many participants stayed away from hard and fast definitions—probably because none exist.  Many alluded to the idea that science is a process: a messy, complicated, slow process.  To set the tone of the session, Donald Kennedy invoked Karl Popper’s theory of falsification.  Good science hinges on Popper’s ideas.  In other words, hypotheses can only be proven wrong, never proven right.  Unexpected results and setbacks mean that it’s working.

Alexandra Witze said that science is “messy and full of warts.”  This dirty, complicated business is a far cry from the pristine, monumental breakthroughs so often reported in the media.  How, then, does the true nature of scientific progress affect the way its stories are told?  Have readers’ expectations been modeled on traditional news subjects, where there is always a clean punchline?  By compressing the grueling, convoluted path to scientific results into parcels of easily accessible blurbs, are we doing a disservice to the public? 

Scientists design questions to disprove their hypotheses.  However painful it may be, the experiments that prove their ideas wrong are necessary.  The ability to doubt themselves and continually question their ideas is what drives scientific progress.  Michael Lemonick implored this group of journalists to lose some of their self-assuredness and take on the insecurities that scientists know so well.  Science journalism cannot afford to be fat and happy—things are changing, and we must adapt.  The destination, if one exists, is far away, and unlike anything people have imagined.  But thinking and talking about these issues—science on the internet, the merits of the embargo system, shrinking reporting space—is an important stop on the road. 

March 12, 2007

Lemonick’s challenge

What are we after, here?

We want to engender trust in scientific evidence, through an appreciation for the process that produces that evidence.  We want people to know why they believe what they believe. Many participants emphasized (or implied) that a focus on the scientific process is the way to achieve this, and I believe they’re right.

However, Michael Lemonick pointed out that there has been a great deal of excellent science writing that has done exactly this. Many writers have pointed out the fallibility/reversibility of science, stressed process over facts, and highlighted the difference between common and scientific usages of terms like “theory.” Many of the attendees have written stories conveying the passion and drama of science. Even with all of these great pieces of writing, half the population still disbelieves evolution and trusts astrology.

The proposition that things would be different if only we produced more articles about the scientific process, Lemonick said, amounts to kidding ourselves. People have read these stories, and the world hasn’t changed. Maybe more stories would have more effect - but what reason do we have to believe that editors will include more stories now, hearing the same arguments they always have?

KC Cole pointed out that there is such a thing as a tipping point. Continued emphasis can have an effect on editors and on the public consciousness. New media also provide a unique opportunity to collect thorough statistics on readership. Every click on a page can be counted, in a way that every page read by newspaper or magazine subscribers could not. These statistics could prove convincing to editors (and advertisers) that there ought to be more science in the media.

There seemed to be a general consensus on the types of stories that ought to be told. The questions that remain are how and where to tell those stories, and how to convince editors to run them. And Mr. Lemonick’s challenge should be kept in mind: how do we know we’re not kidding ourselves, telling good stories in the same old ways? How do we know that what we’re doing is truly New?

ABOUT THIS BLOG

This blog was written by prominent science journalists and science communicators who attended the Knight Digital Media Center Best Practices: Covering Science in Cyberspace seminar.
