How do we gauge scientific productivity?

Last week, I was fortunate enough to have lunch with the scientific director of intramural research at NHLBI, Dr. Robert Balaban.  At one point, he asked our group of about 10-15 postdocs and postbacs to raise our hands if we wanted to continue on the academic track… and I was shocked to see that only ONE person in our group raised their hand.

But then again, I wasn’t that shocked…  At CauseScience, we have posted several times on the current crisis facing the biomedical research enterprise and how difficult it is to pursue a career in science.  There are several flaws in the current system, including too many trained scientists for too few academic positions.  Another flaw that has grown out of this hyper-competitive atmosphere is how we acknowledge the productivity of scientists.  Currently, there is an unrealistic expectation that one must publish in a high impact journal (what does the “impact factor” mean?) in order to obtain a tenure-track position: the assumption being that a high impact publication signifies high quality research that is superior to publications in other journals.  But this method of evaluating research is broken and detrimental to everyone in science (not to mention the other side effects that stem from it, such as fraud and publishing costs).  In order to navigate our way out of the current unsustainable biomedical research system, we must change the way in which we gauge scientific productivity.  Several scientists have come together and signed the Declaration on Research Assessment (DORA), supporting the notion that a journal impact factor should not be the judge of one’s scientific contributions.  That being said, how, then, do we gauge scientific achievements and productivity?

One idea is to gauge productivity not by the impact factor of the journal the work is published in, but by its actual impact.  Independent of the journal it is published in, is the scientific work novel? Does it contribute to the field? Is it well done? Many agree that these are the questions to ask when determining the value of one’s scientific research, but how are these questions converted into a tangible metric for evaluating research?  One approach is to examine how often a finding is cited relative to the impact factor of the journal it’s published in.  For example, if a research finding is published in a low impact factor journal but goes on to be cited numerous times, far more than the journal’s impact factor would suggest, then the actual value and contribution of the work is much higher than the journal label implies.  Conversely, if one publishes in a high impact journal but the finding is rarely cited, that should also be noted. This way, one measures actual impact. The NHLBI has begun to adopt some new methods to evaluate scientific productivity, and Dr. Balaban discusses these in the Journal of General Physiology.
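
To make this idea concrete, here is a minimal sketch in Python of the kind of comparison described above. The `citation_ratio` helper and all of the numbers are invented purely for illustration; this is not a metric endorsed by NHLBI, DORA, or anyone else.

```python
# Minimal sketch (illustrative only): compare an article's actual citation
# count with what its journal's impact factor would predict.

def citation_ratio(article_citations, journal_impact_factor, years_since_publication):
    """Return the ratio of actual citations to the citations implied by the
    journal's impact factor. A ratio above 1 suggests the work out-performed
    its venue; below 1, the journal label overstates the article's impact."""
    expected = journal_impact_factor * years_since_publication
    if expected == 0:
        return float("inf") if article_citations else 0.0
    return article_citations / expected

# A paper in a low-IF journal (IF = 2) cited 60 times over 5 years:
print(citation_ratio(60, 2.0, 5))   # 6.0 -> far more impact than the venue suggests

# A paper in a high-IF journal (IF = 30) cited 15 times over 5 years:
print(citation_ratio(15, 30.0, 5))  # 0.1 -> less impact than the venue suggests
```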

Dr. David Drubin, at UC Berkeley, also discusses ideas on how to measure scientific productivity without using the impact factor scale.  For example, the NIH has been taking steps to change the format of the CV or “biosketch” in grant applications.  To discourage grant reviewers from focusing on the journal in which previous research was published, the NIH has inserted a short section into the biosketch where the applicant concisely describes their most significant scientific accomplishments.

Furthermore, Dr. Sandra Schmid, at the University of Texas Southwestern Medical Center, has conducted a search for new faculty by asking applicants to submit responses to a set of questions about their key contributions at different stages of their careers, rather than a traditional CV with a list of publications.

While there is still work to be done to implement these types of metrics for evaluating productivity on a larger scale, it’s refreshing to see that steps are being taken to address this problem and potentially fix it. 

David Drubin summarizes a HUGE problem with science for ‘The Conversation’ #impactfactor

Time to discard the metric that decides how science is rated

By David Drubin, University of California, Berkeley

Scientists, like other professionals, need ways to evaluate themselves and their colleagues. These evaluations are necessary for better everyday management: hiring, promotions, awarding grants and so on. One evaluation metric has dominated these decisions, and that is doing more harm than good.

This metric, called the journal impact factor or simply the impact factor, is released annually and counts the average number of times a particular journal’s articles are cited by other scientists in subsequent publications over a certain period of time. The upshot is that it creates a hierarchy among journals, and scientists vie to get their research published in a journal with a higher impact factor, in the hope of advancing their careers.
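
For readers unfamiliar with how the number is actually produced, here is a rough sketch of the standard two-year calculation. The function name and the figures are made up for illustration only.

```python
# Sketch of the standard two-year journal impact factor calculation.
# Figures are invented, purely for illustration.

def journal_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """A journal's impact factor for year Y is the number of citations received
    in year Y to items it published in years Y-1 and Y-2, divided by the number
    of citable items it published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# A journal that published 400 citable articles over the previous two years and
# received 1,200 citations to them this year has an impact factor of 3.0.
print(journal_impact_factor(1200, 400))  # 3.0
```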

The trouble is that the impact factor of the journals where researchers publish their work is a poor surrogate for measuring an individual researcher’s accomplishments. Because the range of citations to articles in a journal is so large, the impact factor of a journal is not a good predictor of the number of citations to any individual article. The flaws in this metric have been acknowledged widely – it lacks transparency and, most of all, it has unintended effects on how science gets done.

A recent study that attempted to quantify the extent to which publication in high-impact-factor journals correlates with academic career progression highlights just how embedded the impact factor is. While other variables also correlate with the likelihood of getting to the top of the academic ladder, the study shows that impact factors and academic pedigree are rewarded over and above the quality of publications. The study also finds evidence of gender bias against women in career progression and emphasises the urgent need for reform in research assessment.

Judging scientists by their ability to publish in the journals with the highest impact factors means that scientists waste valuable time and are encouraged to hype up their work, or worse, only in an effort to secure a space in these prized journals. They also get no credit for sharing data, software and resources, which are vital to progress in science.

This is why, since its release a year ago, more than 10,000 individuals across the scholarly community have signed the San Francisco Declaration on Research Assessment (DORA), which aims to free science from the obsession with the impact factor. The hope is to promote the use of alternative and better methods of research assessment, which will benefit not just the scientific community but society as a whole.

The DORA signatories originate from across the world, and represent just about all constituencies that have a stake in science’s complex ecosystem – including funders, research institutions, publishers, policymakers, professional organisations, technologists and, of course, individual researchers. DORA is an attempt to turn these expressions of criticism into real reform of research assessment, so that hiring, promotion and funding decisions are conducted rigorously and based on scientific judgements.

We can also take heart from real progress in several areas. One of the most influential organisations making positive steps towards improved assessment practices is the US National Institutes of Health. The specific changes that have come into play at the NIH concern the format of the CV or “biosketch” in grant applications. To discourage grant reviewers from focusing on the journal in which previous research was published, the NIH has inserted a short section into the biosketch where the applicant concisely describes their most significant scientific accomplishments.

At the other end of the spectrum, it is just as important to find individuals who are adopting new tools and approaches to show their own contributions to science. One such example is Steven Pettifer, a computer scientist at the University of Manchester, who gathers metrics and indicators, combining citations in scholarly journals with coverage in social media about his individual articles to provide a richer picture of the reach and influence of his work.

Another example, as reported in the journal Science, comes from one of the DORA authors, Sandra Schmid at the University of Texas Southwestern Medical Center. She conducted a search for new faculty positions in the department that she leads by asking applicants to submit responses to a set of questions about their key contributions at the different stages in their career, rather than submitting a traditional CV with a list of publications. A similar approach was also taken for the selection of the recipients for a prestigious prize recognising graduate student research, the Kaluza Prize.

These examples highlight that reform of research assessment is possible right now by anyone or any organisation with a stake in the progress of science.

One common feature among funding agencies with newer approaches to research assessment is that applicants are often asked to restrict the evidence that supports their application to a limited number of research contributions. This emphasises quality over quantity. With fewer research papers to consider, there is a greater chance that the evaluators can focus on the science, rather than the journal in which it is published. It would be encouraging if more of these policies also explicitly considered outputs beyond publications, such as major datasets, resources and software, a move made by the US National Science Foundation in January 2013. After all, the accomplishments of scientists cannot be measured in journal articles alone.

There have been at least two initiatives that focus on metrics and indicators at the article level, from the US standards agency NISO and the UK’s higher education body HEFCE. Moves towards a major reliance on such metrics and indicators in research assessment are premature, and the notion of an “article impact factor” is fraught with difficulty. But with the development of standards, transparency and improved understanding, these metrics will become valuable sources of evidence of the reach of individual research outputs, as well as tools to support new ways to navigate the literature.

As more and more examples appear of practices that don’t rely on impact factors and journal names, scientists will realise that they might not be as trapped by a single metric as they think. Reform will help researchers by enabling them to focus on their research and help society by improving the return on the public investment in science.

This article was contributed by the authors of the San Francisco Declaration on Research Assessment: David Drubin (University of California, Berkeley; Molecular Biology of the Cell), Stefano Bertuzzi (American Society for Cell Biology), Michael Marks (Children’s Hospital of Philadelphia; Traffic), Tom Misteli (National Cancer Institute; The Journal of Cell Biology), Mark Patterson (eLife), Bernd Pulverer (EMBO Press), Sandra Schmid (University of Texas Southwestern Medical Center).

This article was originally published on The Conversation.
Read the original article.

Want to be a Principal Investigator?

Now you can predict your chances of becoming one! Predictor here. This report is not surprising and fits with previous reports that academia is biasing science by focusing heavily on the number of high-profile publications. It is sad that gender played a role, but again, that is not surprising either.

This is based on a recent publication (Publication metrics and success on the academic job market) that analyzed what it takes to become a Principal Investigator. Report here.

 

“We show that success in academia is predictable. It depends on the number of publications, the impact factor (IF) of the journals in which those papers are published, and the number of papers that receive more citations than average for the journal in which they were published (citations/IF). However, both the scientist’s gender and the rank of their university are also of importance, suggesting that non-publication features play a statistically significant role in the academic hiring process.”
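
Purely as an illustration of how such a predictor could be wired up, here is a toy logistic-regression sketch in Python built on the kinds of features the authors describe (publication count, journal impact factor, papers cited above their journal’s average, university rank). Every number below is invented, and this does not reproduce the published model, its data, or its coefficients.

```python
# Toy sketch only: a logistic-regression "PI predictor" using invented data.
# It does not reproduce the model or results from the cited publication.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [n_publications, mean_journal_IF, n_papers_cited_above_journal_avg, university_rank]
X = np.array([
    [25, 12.0,  8,  1],
    [ 6,  3.5,  1, 40],
    [15,  8.0,  5, 10],
    [ 4,  2.0,  0, 80],
    [30, 15.0, 12,  2],
    [ 9,  4.0,  2, 55],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = became a PI, 0 = did not (made-up labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Predicted probability that a hypothetical candidate becomes a PI
candidate = np.array([[12, 5.0, 3, 25]])
print(model.predict_proba(candidate)[0, 1])
```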