It’s publish or perish, Jim – but not as we know it

Or to put it another way: The scientist’s dilemma . . . Where to publish?

Let me explain.

It’s autumn 1982. And just over a year since I joined the faculty of The University of Birmingham. Our department had a new Mason Professor of Botany, someone with a very different academic background and interests from my own.

At one departmental coffee break several of us were sitting around discussing various issues when the topic of academic publishing came up.

“In which journals do you publish, Mike?” the new head of department asked me. I told him that I’d published several papers in the journal Euphytica, an international journal covering the theoretical and applied aspects of plant breeding. It’s now part of the Springer stable, but I’m not sure who the publisher was then.

His next question surprised me. It’s not an exaggeration to say that I was gob-smacked. “Is that a refereed journal?” he asked, and went on to explain that he’d never even heard of Euphytica. In my field, Euphytica was then considered an excellent choice for papers on genetic resources. In a sense, he was valuing my academic output based on his ‘blinkered’ view of our shared discipline, botany, which is after all a broad church.

Springer now has its own in-house genetic resources journal, Genetic Resources and Crop Evolution (I’m a member of the editorial board), but there are others such as Plant Genetic Resources – Characterization and Utilization (published by Cambridge University Press). Nowadays there are more journals to choose from dealing with disciplines like seed physiology, molecular systematics and ecology, among others, in which papers on genetic resources can find a home.

But in the 1970s and 80s and beyond, I’d always thought about the visibility of my research to others working in the same or allied fields. My research would be of little or no interest to researchers beyond genetic resources or plant breeding, for example. So my choice of journal was predicated very much on this basis. Today, with online searches putting the world’s voluminous scientific publishing a mouse click away, it’s perhaps less important exactly where you publish.

Back in the day we had to seek out a hard copy of a journal that interested us, or use something like Current Contents (I’m surprised that’s still going, even in hard copy) to check, on a regular basis, what was being published in various journals. And then contact the author for a reprint (before the days of email).

I can remember, way back in the mid-1980s when I had to write a review of true potato seed, having to pay for a special literature search through the university library. Now everyone can do it themselves, from their own desk. Nowadays you just search for a journal online, or tap in a few key words, and Hey Presto! there’s a list of relevant papers, complete journal contents lists, abstracts, and even full papers if your institute has a subscription to the journal or the article itself is Open Access.

So the dynamics of scientific publishing have changed since the days when I first began. In some respects, then, scientific publishing has never been easier; then again, it has never been more challenging. Not only are scientists publishing more, they are expected to publish more. Sink or swim!

About a year ago, I was ‘invited’ to join ResearchGate, a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. Since then I have received almost daily (if not more frequent) stats about my science publications and who is citing them. It’s obviously quite gratifying to know that many of the papers I published over the decades still have scientific traction, so to speak. And ResearchGate gives me a score indicating how much my papers are being cited (currently 32.10; is this good? I have no idea). There’s obviously no metric that determines the quality of these papers, nor whether they are being cited for good reasons or bad.

In the 1980s there was some discussion of the value of citation indices. I remember reading an interesting article in an internal University of Birmingham newsletter, Teaching News I think it was called, that was distributed to all staff. In this article the author warned against the indiscriminate use of citation indices, pointing out that an excellent piece of scholarship on depopulation in rural Wales would receive far fewer citations than, say, a lower-quality paper on the rise of fascism, simply because the former represented a much narrower field of academic pursuit.

Today there are many more metrics, journal impact factors and the like, that are taken into account when assessing the quality of science. And for many young researchers these metrics play an important role, for good or bad, in the progression of their careers. Frankly, I don’t understand all of these, and I’m glad I didn’t have to worry about them when I was a young researcher.
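That said, the arithmetic behind two of the commonest metrics is simple enough. Below is a minimal sketch, in Python, of how an h-index and the classic two-year journal impact factor are calculated; the citation counts and figures are entirely made up for illustration, and this is not any official tool or database query.

```python
# Illustrative only: how an h-index and a classic two-year journal impact
# factor are calculated. All numbers below are hypothetical.

def h_index(citations):
    """h = the largest n such that n papers each have at least n citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def impact_factor(citations_this_year, citable_items_previous_two_years):
    """Citations received this year to articles published in the previous two
    years, divided by the number of citable items published in those years."""
    return citations_this_year / citable_items_previous_two_years

# Hypothetical citation counts for nine papers by one author
my_citations = [120, 45, 33, 20, 11, 9, 4, 2, 0]
print(h_index(my_citations))    # 6 (six papers each cited at least six times)
print(impact_factor(850, 400))  # 2.125
```

Note how neither calculation says anything about the quality of the work being cited, which is precisely the point made above.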

[Photo: Prof. David Colquhoun, FRS]

And there are many pitfalls. I came across this interesting article on the blog of Professor David Colquhoun, FRS (formerly professor of pharmacology at University College London) about the use (and misuse) of metrics to assess research performance. There was one very interesting comment that I think sums up many of the concerns about the indiscriminate use of publication metrics:

. . . in six of the ten years leading up to the 1991 Nobel prize, Bert Sakmann failed to meet the metrics-based publication target set by Imperial College London, and these failures included the years in which the original single channel paper was published and also the year, 1985, when he published a paper that was subsequently named as a classic in the field. In two of these ten years he had no publications whatsoever.

Application of metrics in the way that it’s been done at Imperial and also at Queen Mary College London, would result in firing of the most original minds.

We seem obsessed by metrics. And whenever there is a request for publication metrics for whatever purpose, there are always perverse incentives and opportunities to game the system, as I discovered to IRRI’s cost during the CGIAR annual performance exercise in the late ‘Noughties’. And when the submitted data are scrutinized by someone who really does not understand the nature of scientific publishing, then you’re on a slippery slope to accepting scientific mediocrity.

You are welcome to comment on this post . . .
