The Evolution of Scientific Impact

In science, much significance is placed on peer-reviewed publication, and for good reason. Peer review, in principle, guarantees a minimum level of confidence in the validity of the research, allowing future work to build upon it. Typically, a paper (the current accepted unit of scientific knowledge) is vetted by independent colleagues who have the expertise to evaluate both the correctness of the methods and, perhaps, the importance of the work. If the paper passes the peer-review bar of a journal, it is published.

Measuring scientific impact

For many years, publications in peer-reviewed journals have been the most important measure of a scientist's worth. The more publications, the better. As journals proliferated, however, it became clear that not all journals were created equal: some had higher standards of peer review, some placed greater importance on the perceived significance of the work. The "impact factor" was thus born out of a need to evaluate the quality of the journals themselves. Now it didn't just matter how many publications you had; it also mattered where they appeared.

But, as many argue, the impact factor is flawed. Calculated as the average number of citations per "eligible" article over a specific time period (typically the two preceding years), it is a poor summary statistic because the actual distribution of citations is heavily skewed; an editorial in Nature by Philip Campbell noted that just 25% of articles accounted for 89% of the citations. Journals can also game the system by adopting selective editorial policies to publish articles that are more likely to be cited, such as review articles. At the end of the day, the impact factor is not a good proxy for the impact of an individual article, and focusing on it may be doing science – and scientists – a disservice.
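To make that arithmetic concrete, here is a minimal sketch in Python. The citation counts are made up for illustration (not drawn from any real journal), but they show how a handful of highly cited papers can dominate the average:

```python
# A minimal sketch of the impact-factor arithmetic described above.
# The citation counts are hypothetical; real impact factors divide the
# citations a journal receives in one year by the number of "eligible"
# articles it published in the two preceding years.

# Hypothetical citations received by 20 eligible articles:
citations = [60, 40, 25, 15, 10, 5, 4, 3, 2, 2, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]

# The impact factor is simply the mean number of citations per article.
impact_factor = sum(citations) / len(citations)
print(f"Impact factor: {impact_factor:.2f}")  # 8.45

# The distribution is heavily skewed: here the top 25% of articles
# account for roughly 89% of all citations, so the mean says little
# about the typical paper.
top_quarter = sorted(citations, reverse=True)[: len(citations) // 4]
share = sum(top_quarter) / sum(citations)
print(f"Top 25% of articles account for {share:.0%} of citations")
```

The point of the sketch is simply that an average computed over a long-tailed distribution is dominated by its outliers, which is exactly the objection raised in Campbell's editorial.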

In fact, any journal-level metric will be inadequate for capturing the significance of individual papers. Few dispute that a high journal impact factor can elevate undeserving papers while a low one can unfairly punish perfectly valuable ones, yet many still feel that the impact factor – or, more generally, the journal name itself – serves as a useful, general quality-control filter. Arguments for this view typically stem from two fears: information overload and risk. With so much literature out there, how will I know what is worth reading? If this is how it's been done, why should I risk my career or invest time in trying something new?

What is clear to me is this: science and society are richer and more interconnected now than at any time in history. More people are contributing to science, in more ways, than ever before. Science is becoming broader (we know about more things) and deeper (we know more about each of those things). At the same time, print publishing is fading, content is exploding, and technology makes it possible to present, share, and analyze information faster and more powerfully.

For these reasons, I believe (as many others do) that the traditional model of peer-reviewed journals should and will necessarily change significantly over the next decade or so.

_____________________________________________________________________________________________
Shirley Wu completed her graduate studies this summer and will soon be entering the real world as a scientific content curator at 23andMe. Since finishing, she's whiled away the days learning about real-world things like mortgages; catching up on reading, blogging, and cooking; and reliving her graduate experiences via the 3rd and 4th books in the Ph.D. comics series. She likes long walks on the beach, openness, and imagining a future where science is shared, publishing is dynamic, and contributions are recognized. You can find the entire post, as well as her other writings, on her blog.


4 comments so far

  1. chemist99

    wrote on August 19, 2009 at 8:54 pm

    The impact factor has come to dominate the journal landscape. It is helpful as one measure of use, but as Shirley points out, there are lots of problems with it. Journals publishing in fields with lots of practitioners (e.g., cell biology, molecular genetics) will have high impact factors, while those publishing in smaller fields (e.g., pharmacology) will have lower impact factors. This is independent of the quality of what the journals actually publish. Also, journals do indeed game the system – check out the number of reviews a given journal publishes as a percentage of its total articles. How many of them are published early in the year so they'll have a full year to be cited, compared to an article published in December? I recently heard from a young colleague who got a letter from a journal editor about a review article he submitted, in which the editor suggested he solicit a well-known colleague to serve as senior author to make the article "higher impact." This was in spite of the fact that the review article was not in the senior colleague's field. That's one of the worst I've heard and borders on just plain unethical.

    Students, postdocs, and young faculty obsess about the impact factor and most can tell you the values for all the journals in their fields. There's nothing wrong with that because we should always try to get as much visibility for our articles as possible. But for some people, it's publish in Science or Nature or don't publish. This is especially dangerous for young faculty who spend years doing huge numbers of experiments to construct the perfect paper only to see it rejected by that "high visibility" journal. They'd have been better off hitting a few singles and doubles while lining up a home run because study sections and tenure committees look at the number of papers as well as their quality as markers of productivity. And there are many other excellent journals with slightly lower impact factors that will get the article widely read.

    As Shirley points out, most journals are available in every research laboratory in the world because of electronic publishing. Most publishers have download statistics for all their journals, which are excellent indicators of the value of an article. Many publishers list the most downloaded articles on a regular basis, which is especially helpful for articles in smaller fields or in fields where a significant proportion of the practitioners don't publish a lot (e.g., the pharmaceutical or chemical industries). We may be stuck with the impact factor for a while, but I agree with Shirley that other measures will be considered to evaluate the quality of publications. Of course, one of the best parameters is what YOU think of the article.

  2. Postdoc Risk-Reward: How Valuable is a Top-Tier Publication? | BenchFly Blog

    wrote on August 2, 2010 at 11:32 am

    […] support Shirley Wu and chemist99 in questioning the relevance of journal impact factors and their relation to why we still publish scientific papers.  While the field is experimenting […]

  3. Five Ways Scientists Waste Time | BenchFly Blog

    wrote on August 16, 2010 at 8:38 pm

    […] think about science.  We’re not saying to quit, but as chemist99 put it in the comments of The Evolution of Scientific Impact, “hit a few singles and doubles while lining up a home […]

  4. H-Index: What It Is and How to Find Yours | BenchFly Blog

    wrote on October 20, 2010 at 1:57 pm

    […] factors support the former.  It’s this frustration that has led many to argue that the evolution of scientific impact will move away from this metric over the coming […]
