Last week, I was fortunate enough to have lunch with the scientific director of intramural research at NHLBI, Dr. Robert Balaban. At one point, he asked our group of about 10-15 postdocs and postbacs to raise our hands if we wanted to continue on the academic track… and I was shocked to see that only ONE person in our group raised their hand.
But then again, I wasn’t that shocked… At CauseScience, we have posted several times on the current crisis facing the biomedical research enterprise and how difficult it is to pursue a career in science. The current system has several flaws, including too many trained scientists for too few academic positions. Another flaw spawned by this hyper-competitive atmosphere is how we acknowledge the productivity of scientists. Currently, there is an unrealistic expectation that one must publish in a high-impact journal (what does the “impact factor” mean?) in order to obtain a tenure-track position: that a high-impact publication signifies high-quality research superior to publications in other journals. But this method of evaluating research is broken and detrimental to everyone in science (not to mention other side effects that stem from it, such as fraud and publishing costs). To navigate our way out of the current unsustainable biomedical research system, we must change the way we gauge scientific productivity. Several scientists have come together and signed the Declaration on Research Assessment (DORA), supporting the notion that a journal’s impact factor should not be the judge of one’s scientific contributions. That being said, how then do we gauge scientific achievements and productivity?
One idea is to gauge productivity not by the impact factor of the journal the work is published in, but by its actual impact. Independent of the journal, is the scientific work novel? Does it contribute to the field? Is it well done? Many agree that these are the questions to ask when determining the value of one’s scientific research, but how are they converted into a tangible metric? One approach is to examine how often a finding is cited relative to the impact factor of the journal it’s published in. For example, if one publishes a research finding in a low-impact-factor journal, but the work goes on to be cited far more often than the journal’s impact factor would suggest, the actual value and contribution of the work is much higher. Conversely, if one publishes in a high-impact journal but the finding is rarely cited, that should also be noted. This way, one measures actual impact. The NHLBI has begun to adopt some new methods to evaluate scientific productivity, and Dr. Balaban discusses these in the Journal of General Physiology.
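To make the comparison above concrete, here is a back-of-envelope sketch in Python. The function name and the ratio itself are illustrative assumptions for this post, not an established bibliometric; real impact factors average citations over a two-year window, so any such ratio is at best a rough signal.

```python
# Illustrative sketch only: compare a paper's actual citation count to the
# rough expectation implied by its journal's impact factor (roughly, average
# citations per article). A ratio above 1 suggests the work outperformed its
# journal's average; below 1, that it underperformed.

def citation_ratio(actual_citations, journal_impact_factor):
    """Hypothetical metric: actual citations divided by journal impact factor."""
    if journal_impact_factor <= 0:
        raise ValueError("impact factor must be positive")
    return actual_citations / journal_impact_factor

# A paper in a low-impact journal (IF = 2) cited 40 times far exceeds
# expectations, while a paper in a high-impact journal (IF = 30) cited
# only 15 times falls short of them.
low_if_paper = citation_ratio(40, 2)    # 20.0
high_if_paper = citation_ratio(15, 30)  # 0.5
print(low_if_paper, high_if_paper)
```

Under this toy measure, the low-impact-journal paper scores far higher than the high-impact-journal one, which is exactly the point: the journal's brand and the work's actual reach can diverge sharply.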
Dr. David Drubin at UC Berkeley also discusses ideas on how to measure scientific productivity without using the impact factor scale. For example, the NIH has been taking steps to change the format of the CV, or “biosketch,” in grant applications. To discourage reviewers from focusing on the journals in which previous research was published, the NIH inserted a short section into the biosketch where the applicant concisely describes their most significant scientific accomplishments.
Furthermore, Dr. Sandra Schmid at the University of Texas Southwestern Medical Center has conducted a search for new faculty by asking applicants to submit responses to a set of questions about their key contributions at different stages of their careers, rather than a traditional CV with a list of publications.
While there is still work to be done to implement these types of metrics for evaluating productivity on a larger scale, it’s refreshing to see that steps are being taken to address this problem and potentially fix it.