When it comes to labs, is bigger always better? #science @NatureNews

Chris Woolston has written a nice feature in Nature this week titled, ‘Group dynamics: a lab of their own.’ The article describes many things a PI can consider when picking people for a lab, and how many people to pick. In his words:

Scientists around the world are working to solve the same basic formula: what number and mix of group members makes for the most efficient and productive lab?

As a member of a relatively large lab, where adding people to the group can come with many complaints from current members, I was intrigued by the actual data on productivity associated with adding members…

Bigger is better

Two studies published last year suggest that most labs could produce more papers and make a bigger splash by — perhaps unsurprisingly — bringing more people on board. One of these, a 2015 study of nearly 400 life-sciences PIs in the United Kingdom, found that the productivity of a lab — measured by the number of publications — increased steadily, albeit modestly, with lab size (I. Cook, S. Grange & A. Eyre-Walker PeerJ http://doi.org/bcwf; 2015). In terms of sheer paper production, “it’s best for a lab to be as big as possible”, says co-author Adam Eyre-Walker, a geneticist at the University of Sussex, UK. Notably, the study found no sign that individual members become less productive or less efficient as labs grow. “Adding a team member to a large lab gives you the same return as adding one to a small lab,” Eyre-Walker says.

The second paper, a study of 119 biology laboratories from 1966 to 2000 at the Massachusetts Institute of Technology in Cambridge, found that productivity inched forward when an average-sized lab of ten members added people (A. Conti & C. C. Liu Res. Pol. 44, 1633–1644; 2015). But this study did detect limits: once lab size reached 25 people — an unusually high number achieved by very few labs — the addition of team members no longer conferred benefit. Further, a lab’s productivity tops out with 13 postdocs, the study found.

It looks as though my PI has created almost the perfect lab size. We are a bit over 25 with rotation students and technicians, but usually hover right around 13 postdocs… creepy. It turns out that there is some method, or at least some data, behind my PI’s madness.
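Just for fun, here is a minimal toy sketch (in Python) of the two patterns reported above: roughly linear returns per member in the Cook et al. study versus a plateau once a lab passes about 25 people in the Conti & Liu study. The per-member output and the cap are placeholder numbers I made up for illustration; they are not values from either paper.

```python
# Toy illustration only -- the slope and cap are invented placeholders,
# not estimates from Cook et al. (2015) or Conti & Liu (2015).

def papers_linear(members, per_member=1.5):
    """Cook et al.-style pattern: each extra member adds the same return."""
    return per_member * members

def papers_capped(members, per_member=1.5, cap=25):
    """Conti & Liu-style pattern: gains stop once the lab passes ~25 people."""
    return per_member * min(members, cap)

for size in (5, 10, 13, 25, 30):
    print(f"{size:>2} members: linear {papers_linear(size):5.1f}, "
          f"capped {papers_capped(size):5.1f}")
```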

Problems and solutions in #science education and postdoc training @NatureNews

This week Nature has a number of editorials, commentaries, and news features examining graduate education and postdoctoral training. They are all extremely interesting and make TONS of good points!

My favorite, in part because I am living it, is a piece by Jessica Polka (@jessicapolka) and Viviane Callier (@vcallier) – Fellowships are the future. I have to be honest, I could not agree more with this article… even if I had written it myself. A must READ!!

If postdocs receive greater independence, PIs will lose some control, so they may have to find other resources to conduct their research. But this could be good for science: having postdocs strike out away from the beaten path will bring fresh ideas and approaches to the table. For both of us, getting a fellowship enabled us to cut a path that was separate from the dominant research area in each of our mentors’ labs. The experience of trying to define a new scientific direction has been most useful for us, even as our paths diverge.

Next, an editorial – Make the Most of PhDs – highlights the need for graduate education reform, for the good of science and graduates.

The number of people with science doctorates is rapidly increasing, but there are not enough academic jobs for them all. Graduate programmes should be reformed to meet students’ needs.

Last, Julie Gould’s news feature – How to build a better PhD – addresses the problems in scientific graduate education and how to improve it to build better PhDs.

DIY biohackers using CRISPR for complex genome editing not as scary as it sounds #science @NatureNews

Nature has published a news article by Heidi Ledford about the use of CRISPR by recreational biohackers, people who practice do-it-yourself biology for many different reasons (art, fun, culinary). As a genome-editing technique, CRISPR definitely has interesting applications for both good and evil in terms of biohacking, and the article does a good job exploring these. However, the article implies that CRISPR is feasible for biohackers, only conceding at the end:

But Dan Wright, an environmental lawyer and DIY biohacker in Los Angeles, California, thinks that such a scenario is still beyond the ability of most amateurs. Constructing such a system would surpass the relatively simple tweaks that he and his colleagues are contemplating.

“It’s too difficult,” Wright says. “Just knocking out a gene in one plant is enough of a challenge for a biohacker space at this point.”

As a member of a lab that has used CRISPR for a number of in-depth research applications, I read the whole article with skepticism, and I can firmly say that the troubleshooting and time involved in complex uses of CRISPR are definitely beyond most biohackers. At least for the time being. Hopefully the technique will be improved and simplified in the future, for researchers and biohackers alike.

NATURE Commentary – Current #SCIENCE productivity metrics have negative social impact on scientists… and society.

Stephen Harvey highlights the negative effect that the current metrics for judging scientific productivity have on scientists’ LIVES. In a correspondence in this week’s Nature, Harvey points out that current metrics favor scientists who are willing to work crazy hours, which almost always comes with a negative social impact.

Any quantitative measure of productivity will reward people who choose to work long hours, build large research teams and minimize their commitments to teaching, review panels and university committees.

The use of such metrics can discourage people from sharing responsibilities and time with their partners or spouses, from investing in and enjoying their children’s lives, and from participating in their local communities. Researchers can feel forced to sacrifice ‘unproductive’ recreational pursuits such as holidays, sport, music, art and reading — activities that other metrics correlate highly with creativity and quality of life (see also J. Overbaugh Nature 477, 27–28; 2011).

We need a more nuanced approach to academic evaluations for hiring, promotion and tenure. The emphasis on quantitative measures of productivity places unfair burdens on scientists and their families, and it discourages some students from pursuing academic careers.

Correspondence rips apart Nature’s coverage over Tim Hunt’s remarks – in Nature #distractinglysexy

Check out the somewhat scathing correspondence from Rebecca Williams Jackson in Nature over Nature’s coverage of the sexist remarks of Nobel laureate Tim Hunt.

Whether or not Hunt was joking and whether or not he apologized satisfactorily are beside the point. Neither is it likely that such outdated and seemingly entrenched attitudes can be dispelled by practical attempts to counter gender inequality in science (see Nature 522, 255; 2015 and D. Hilton Nature 523, 7; 2015).

Conspicuous by its absence in Nature so far is this: a woman commenting on the harm done by the flippant public denigration of women in science by a prominent scientist who is male.

The pits on comets are likely made by sinkholes ejecting material!! New #science from @ESA_Rosetta!!!

[tweet https://twitter.com/ESA_Rosetta/status/616501848183296000]

ESA’s Rosetta mission is again teaching us about what happens on comets! The latest science was published in Nature this week, and shows that pits on a comet’s surface are generated by sinkholes that eject jets of material! Super cool!!

[tweet https://twitter.com/ESA_Rosetta/status/616291712152039425]

The WEAK case against double-blind peer review – highlighting why we need it!! #science @NatureNews

NATURE this week features a correspondence from Thomas E. DeCoursey reasoning against double-blind peer review. In my humble opinion his reasoning is flawed…. not unlike the current peer-review structure. To lay my cards on the table: I support either a completely open or a double-blind system for manuscript peer review. All of the peer-review models have some flaws, but these two seem infinitely better than the current system, where authors are blinded to the reviewers’ identities but not vice versa.

DeCoursey makes the somewhat legitimate point that reviewers may be able to ascertain who the authors of a manuscript are based on its citations. However, there would always be some element of doubt for the reviewer about who the authors are, and in many cases the reviewer would not be able to guess at all.

Then DeCoursey reasons that reviewers need to know who the authors are in order to judge them on their past work…. or something…. wha???

To function in our increasingly competitive research culture, in which misconduct is on the rise, researchers need to be aware of which labs can be trusted and which have a record of irreproducibility. If a highly regarded lab and one with a questionable reputation each submit reports of similar investigations, a good reviewer would be extra vigilant in assessing the less-reliable lab’s study, even though the same evaluation standards would be upheld for both.

Yes, misconduct is on the rise, but this point seems wrong to me on every level. Reviewers should be vigilant about misconduct and scientific quality on every paper, regardless of what lab the paper comes from. Plenty of ‘good’ labs have had to retract papers for many reasons, and labs with a history of misconduct have reformed and redeemed themselves with quality papers. In fact, less vigilant reviewers may be to blame when flawed papers from highly regarded labs make it through the review process with glaring mistakes. Any reviewer who is more or less vigilant reviewing a manuscript based on the authors’ names is not an impartial reviewer. PARTIALITY is bad when reviewing papers and grants… Ethics 101 – Conflict of Interest. For the same reason, most journals won’t allow scientists to review a manuscript from within the same institution.

There is a reason double-blind experimental design is the gold standard for experiments and human clinical trials. Just like a reviewer might think he knows who the authors are, a doctor might think he knows whether a patient is receiving placebo, but neither can ever really be sure. Why wouldn’t we want the same type of controls for peer review?

Double-blind peer review removes this crucial quality-control option, opening the way for mediocre and bad labs to clutter the literature with sub-standard science.

#FacePalm…

Maybe I’m jaded, but good reviewers should be screening out sub-standard science regardless of whether they know what lab a manuscript came from. This closing statement makes it sound like DeCoursey thinks only the best labs, with the biggest names and the highest-impact-factor publications, should be publishing… which I hope is not the case (maybe I’m reading too much into it). If it is the case, then that only argues more strongly for a double-blind peer-review system.

And in closing, a double-blind peer-review system would help avoid racist, sexist, or otherwise embarrassing situations like this one, where a reviewer commented that the two female authors should add a male author in order to strengthen the manuscript. Double-blind peer review erases sexism, racism, nationalism, institutionalism (?), and other discrimination from the peer-review process, which is definitely a huge plus!