Springer Publishing retracts 64 papers due to fake peer review – Investigation please… #science

The Washington Post reports on the announcement by Springer Publishing that it is retracting 64 papers due to problems with their peer review. Namely, the peer reviewers were fake, made up, or the authors themselves.

In the latest episode of the fake peer review phenomenon, one of the world’s largest academic publishers, Springer, has retracted 64 articles from 10 of its journals after discovering that their reviews were linked to fake e-mail addresses.

The article includes some terrific commentary from Ivan Oransky of the Retraction Watch blog. Hopefully these mass retractions will make publishers pay more attention to their peer-review systems… which should be a priority for any academic/scientific publishing company. And can we see a list of who faked the emails? Investigate whether the authors were involved and punish them? Because in the meantime, scientists and science as a whole are being dragged through the mud in full public view.

How do we gauge scientific productivity?

Last week, I was fortunate enough to have lunch with the scientific director of intramural research at NHLBI, Dr. Robert Balaban.  At one point, he asked our group of about 10-15 postdocs and postbacs to raise our hands if we wanted to continue on the academic track… and I was shocked to see that only ONE person in our group raised their hand.

But then again, I wasn’t that shocked… At CauseScience, we have posted several times on the current crisis facing the biomedical research enterprise and how difficult it is to pursue a career in science. There are several flaws in the current system, including too many trained scientists for too few academic positions. Another flaw spawned by this hyper-competitive atmosphere is how we acknowledge the productivity of scientists. Currently, there is an unrealistic expectation that one must publish in a high-impact journal (what does the “impact factor” mean?) in order to obtain a tenure-track position: that a high-impact publication signifies high-quality research superior to publications in other journals. But this method of evaluating research is broken and detrimental to everyone in science (not to mention other side effects that stem from it, such as fraud and publishing costs). In order to navigate our way out of the current unsustainable biomedical research system, we must change the way in which we gauge scientific productivity. Several scientists have come together and signed the Declaration on Research Assessment (DORA), supporting the notion that a journal’s impact factor should not be the judge of one’s scientific contributions. That being said, how, then, do we gauge scientific achievements and productivity?

One idea is to gauge productivity not by the impact factor of the journal the work is published in, but by its actual impact. Independent of the journal it is published in, is the scientific work novel? Does it contribute to the field? Is it well done? Many agree that these are the questions to ask when determining the value of one’s scientific research, but how are these questions converted into a tangible metric? One idea is to examine how often a finding is cited relative to the impact factor of the journal it’s published in. For example, if one publishes a research finding in a low-impact-factor journal, but the work goes on to be cited far more often than the journal’s impact factor would suggest, the actual value/contribution of the work is much higher. Conversely, if one publishes in a high-impact journal but the finding is rarely cited, that should also be noted. This way, one measures actual impact. The NHLBI has begun to adopt some new methods to evaluate scientific productivity, and Dr. Balaban discusses these in the Journal of General Physiology.
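To make the idea concrete, here is a minimal sketch of what such a citation-versus-impact-factor ratio could look like. The function name, the per-year normalization, and the example numbers are purely illustrative assumptions on my part, not an established metric:

```python
def citation_impact_ratio(citations, journal_impact_factor, years_since_publication):
    """Illustrative metric: average citations per year relative to the
    journal's impact factor. A ratio well above 1 suggests the paper
    out-performs its venue; well below 1 suggests the reverse."""
    citations_per_year = citations / years_since_publication
    return citations_per_year / journal_impact_factor

# A paper in a low-impact journal (IF 2.0) cited 40 times over 4 years:
print(citation_impact_ratio(40, 2.0, 4))   # 5.0 -> far exceeds venue expectations

# A paper in a high-impact journal (IF 30.0) cited 12 times over 4 years:
print(citation_impact_ratio(12, 30.0, 4))  # 0.1 -> under-performs its venue
```

Any real metric would of course need to handle field-specific citation rates and citation lag, but even a toy ratio like this captures the point: the paper, not the journal, is what gets measured.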

Dr. David Drubin, at UC Berkeley, also discusses ideas on how to measure scientific productivity without using the impact factor scale.  For example, the NIH has been taking steps to change the format of the CV or “biosketch” in grant applications.  To discourage grant reviewers from focusing on the journal in which previous research was published, NIH decided to help reviewers by inserting a short section into the biosketch where the applicant concisely describes their most significant scientific accomplishments.

Furthermore, Dr. Sandra Schmid at the University of Texas Southwestern Medical Center has conducted a search for new faculty by asking applicants to submit responses to a set of questions about their key contributions at different stages of their career, rather than a traditional CV with a list of publications.

While there is still work to be done to implement these types of metrics for evaluating productivity on a larger scale, it’s refreshing to see that steps are being taken to address this problem and potentially fix it. 

Colombian Master’s student faces 8 years in prison for sharing article #DiegoGomez

As I have mentioned in a previous post, one major hurdle for researchers in countries outside the US and Europe is gaining access to published science. And remember that doing effective science requires access to what other scientists have done, which often comes at an exorbitant price.


Today I learned about Diego Gomez, a Colombian master’s student in Conservation and Wildlife Management in Costa Rica. According to an article by Maira Sutton at the Electronic Frontier Foundation, Diego Gomez is facing 4-8 years in prison for sharing an academic article (possibly just a master’s thesis?) on the internet with a group of other students and researchers. According to the Sutton article,

The author of the paper then filed a lawsuit over the “violation of [his] economic and related rights.”

I am curious who this author is… I mean, really? I wouldn’t be surprised if it were a publishing company or institution, but it seems crazy that it would be the author themselves. See a letter from Diego Gomez in English on karisma.org (the website of Fundación Karisma, which is supporting Gomez). Sadly, as you can see below, Gomez and other researchers outside the US and Europe are almost required to break laws to participate in research. Below are some excerpts from Gomez’s letter; bolding is mine.


My name is Diego Gomez, and at 26 years old I have found my great passion in life: biodiversity conservation.

Above all, I’m disconcerted that this activity I did for academic purposes may be considered a crime, turning me into a “criminal.” Today what the vast majority of the country’s researchers and conservationists are doing, despite being committed to spreading knowledge, is turning us into criminals.

Check out the conversation on Twitter (mostly in Spanish) at #CompartirNoEsDelito


Times are a changin’: Testing out double-blind peer review in scientific publishing #abouttime



Daniel Cressey has written a great article in Nature about double-blind peer review for scientific publications. In the current system of peer review (see this post for more info on peer review), most reviewers know the names of the authors, but not vice versa, which many people believe biases reviews. These biases, conscious or subconscious, can work against known authors, minorities, women, and others (e.g., younger scientists).

But last week an article in Conservation Biology revealed that the journal would be considering ‘double-blind’ peer review — in which neither the reviewer nor the reviewed knows the other’s identity. Double-blind peer review is common in the humanities and social sciences, but very few scientific journals have adopted it.

In addition to Conservation Biology, a few other journals are trying out double-blind peer review, including two from Nature Publishing Group: Nature Geoscience and Nature Climate Change. While it is too early to know how well this system will work, it is definitely exciting to see some change. Definitely check out Cressey’s article! And check out CauseScience’s many posts about peer review and what is wrong with it.

Problems in Science Part 1: the broken economics of academic publishing

For the introduction and background on this series of posts, see this post.


A great article by Bob Yirka on Phys.org summarizes a recent PNAS paper describing how universities and academic institutions are being scammed (more or less) by publishers when purchasing bundled subscriptions to academic journals (while this post was being written, Science published a similar review article by John Bohannon).

In looking at the contracts, the researchers found widely different charges to universities for the same bundles. They found for example, that the University of Michigan paid $2.16 million in 2009 for a bundle from Elsevier, while the University of Wisconsin, paid just $1.22 million the same year for the same bundle from the same company. They note that the two universities are similar in the size of their staffs and the number of PhD students, yet one school paid considerably more than the other.

That is crazy, right? The publishing companies make the universities sign confidentiality agreements so they can’t discuss how much they pay with other universities, hence the huge discrepancies. But that only scratches the surface of the craziness in academic publishing. As a taxpayer, you fund much of the research that is published in academic journals. Scientists, whose salaries and research are likely funded in part by tax dollars, usually have to pay academic journals to publish papers (using taxpayer money again). In the editing process, journals rely on a peer-review system. In almost all cases scientists, most of whom are again at least partly funded with tax dollars, do this peer review FOR FREE. The academic journals then publish the accepted articles that were paid for by taxpayers all along the way. However, many (most?) academic journals require subscriptions to view the published articles. That means scientists and the public have to pay, or be part of an institution that pays, in order to see science that they essentially funded each step of the way. This process, with profit numbers, was described in the Economist:

In 2011 Elsevier, the biggest academic-journal publisher, made a profit of £768m ($1.2 billion) on revenues of £2.1 billion. Such margins (37%, up from 36% in 2010) are possible because the journals’ content is largely provided free by researchers, and the academics who peer-review their papers are usually unpaid volunteers. The journals are then sold to the very universities that provide the free content and labour. For publicly funded research, the result is that the academics and taxpayers who were responsible for its creation have to pay to read it. This is not merely absurd and unjust; it also hampers education and research.
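The margin the Economist quotes can be verified from its own figures; a quick sanity check, using only the numbers in the passage above:

```python
# Elsevier's 2011 figures as quoted above (in £ millions)
profit = 768
revenue = 2100

margin = profit / revenue          # ~0.366
print(f"{margin:.0%}")             # prints "37%", matching the quoted margin
```

For comparison, profit margins in most publishing and media businesses run far lower, which is the Economist's point: the inputs (content and reviewing) are largely free to the publisher.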

And even though it IS absurd and unjust, there is some noise from the opposing side!! Publishing companies and academic journals disagree that the system is as bad as I have described, and granted, I may have described a worst-case scenario (though not an uncommon one). But then again, who wouldn’t disagree when your company is making those kinds of profits (see above: $1.2 billion!)? A recent Nature article does include some arguments in support of publishers and journals. For example, prestigious journals have a higher workload due to higher rates of submission, which requires more time and more staff. Not to mention the claim that these journals publish a ‘higher’ caliber of science (the topic of a later post in this series).

However, there is some hope on the horizon of fixing this broken system. In the US, the White House has mandated that taxpayer funded research must be open access… after one year. This means that taxpayer funded research that is published in any journal must be freely available after one year, which is a good start, but a lot of science happens in a year. There is also the problem of whether journals are actually following these rules and making these articles available. Furthermore, this is in the US. Turns out that other countries also do science, and many American scientists want to collaborate with international scientists. Access to scientific articles in other countries can be much more difficult and expensive, which definitely holds back science in these countries and hampers international collaboration.

There is also a trend of increasing numbers of publications in open-access journals (journals like PLOS, the Public Library of Science, and eLife). These journals are becoming more and more popular, and new ones are sprouting up every day (the Society for Neuroscience announced its new open-access journal, eNeuro, this week). They are freely available to the public, and the cost of publishing in them is often considerably lower.

More work needs to be done to address the crazy economics of academic publishing. Right now it seems like taxpayers are on the losing side of supporting science, with a lot of money going toward publication rather than actual science. Not to mention that the public has limited access to this science (see one of the many posts where I can’t link to the full article, only the abstract, or have to apologize that not everyone will have access to the article). Lastly, it is mind-boggling that universities and research institutions are allowing themselves to be taken for a ride. To me it seems that research centers make up the majority of both the supply and the demand for academic journals (they support the scientists, the science, the publications, and the peer reviewers, and also use these publications as a metric for scientific success). Doesn’t that mean they have a lot of negotiating power? Hopefully, with the report in PNAS, universities will take a closer look at how they deal with academic publishing, which will in turn benefit the economics of publicly funded science and, in turn, the science itself.