The many reasons scientists are not Republicans – @salon #science

REQUIRED READING!! This Salon piece by Sean McElwee and Philip Cohen is EVERYTHING – about why scientists and Republicans are so at odds… or, more accurately, why Republicans are at war with science. We at CauseScience often post about the many times Republican politicians say or do things that are anti-science, and this article highlights the reasons why. My favorite points below:

Research placing shrimp on treadmills was lampooned by Republicans, but it is part of important research on how marine organisms react to ecosystem changes, which has important implications for food safety. But in other cases, there are less benign motivations for cutting research spending. For instance, big fossil fuel donors have an interest in ensuring the government doesn’t take action on climate change. The GOP has tried to slash the NASA budget to prevent it from researching climate change. ExxonMobil has continued to fund climate denial, even after promising not to and after evidence surfaced that it has known about the existence of global warming for nearly four decades.

The explanation is rather simple: Scientists are more broadly in line ideologically with the Democratic Party. But there are two other factors that are accelerating the trend. First, the increasing extremism of the Republican Party and its fealty to the donor class have led it to embrace positions outside the mainstream. Second, both the GOP base and legislators take an increasingly antagonistic view of science and scientists. Their work to delegitimize science raises deep concerns about the ability of academics to influence important public debates.

Francisco Azuaje – not all scientific talks are good. #science needs to #keepitreal

Check out this post written by Francisco Azuaje for United Academics Magazine about the state of scientific talks. I think Azuaje brings up a lot of important points (more on Azuaje here). In my opinion there are still plenty of average or below-average talks… so perhaps we should keep it real more often – but in a sincere and supportive manner. I also love that the title of the post is a play on the Peter, Paul, and Mary song title 😉

Edward O. Wilson, one of the world’s most admired scientists, advised young researchers that “the greatest proportion of moral decisions you will be required to make is in your relationships with other scientists”. And indeed this is a vital challenge, not only because science is above all a social networking endeavour, but also because an awareness of this reality may regrettably lead us to over-emphasize the importance of looking or sounding good to others.

And perhaps it is such an anxiety to find a cosy place in the nest of consensus among “peers” that is creating so much delusion.

We need to re-discover average, good and could-be-better. We can do it sincerely, kindly and with rational purpose. Only this way will we be able to spot the truly great.

Springer Publishing retracts 64 papers due to fake peer review – Investigation please… #science

The Washington Post reports on the announcement by Springer Publishing that it is retracting 64 papers due to problems with their peer review. Namely, the peer reviewers were fake, made up, or the authors themselves.

In the latest episode of the fake peer review phenomenon, one of the world’s largest academic publishers, Springer, has retracted 64 articles from 10 of its journals after discovering that their reviews were linked to fake e-mail addresses.

The article includes some terrific commentary from Ivan Oransky of the Retraction Watch blog. Hopefully these mass retractions will make publishers pay more attention to their peer-review systems… which should be a priority for any academic/scientific publishing company. And can we see a list of who faked the emails, investigate whether the authors were involved, and punish them? Because in the meantime, scientists and science as a whole are being dragged through the mud in full public view.

Reality Check – Academia still has an easy-to-find gender bias – @US_Conversation

Let’s face it: gender bias in academia is for real

Cynthia Leifer, Cornell University; Hadas Kress-Gazit, Cornell University; Kim Weeden, Cornell University; Marjolein C H van der Meulen, Cornell University; Paulette Clancy, Cornell University, and Sharon Sassler, Cornell University

Cornell Professor Sara Pritchard recently made the argument in The Conversation that female professors should receive bonus points on their student evaluations because of the severe negative bias students have toward their female professors.

Commentators on FOX News attempted to discredit her argument as “insane,” ridiculed the idea that gender plays a role in evaluations and repeatedly mentioned a lack of data to support her claims. But the reality is women faculty are at a disadvantage.

Unfortunately, as we well know, for many women in science, technology, engineering and mathematics (STEM), the path to academia ends long before they obtain a faculty position and are the “lucky” recipient of biased student evaluations.

We represent the success stories – women with careers at Ivy League universities. And yes, while we agree that there are more women in STEM fields today than ever before, bias still affects women in STEM, and not just in student evaluations.

Letters of recommendation and teaching evaluations

It starts right from the hiring process.

In the first stage of the hiring process, a candidate for an academic position must be selected from a pool of hundreds to give a job talk and on-site interview.

The decision of who to invite for a job talk is based on materials about the candidate including CVs, letters of recommendation from prominent figures in the field, samples of research, “buzz” about who’s a rising star and teaching evaluations.

A large body of research shows that many of these materials, and how they are evaluated by search committees, reflect bias in favor of male candidates.

Letters of recommendation, for example, tend to have a very different character for women than for men, and their tone and word choice can affect the impression that the hiring committee forms about candidates.

For example, a 2008 study of 886 letters of recommendation for faculty positions in chemistry showed that these letters tended to include descriptors of ability for male applicants, such as “standout,” but to refer to the work ethic of the women, rather than their ability, by using words such as “grindstone.”

A similar study showed that female, but not male, students applying for a research grant had letters of recommendation emphasizing the wrong skills, such as the applicants’ ability to care for an elderly parent or to balance the demands of parenting and research.

Furthermore, a 2009 analysis of 194 applicants to research faculty positions in psychology found that letters of recommendation for women used more “communal” adjectives (like helpful, kind, warm and tactful), and letters of recommendation for men used more decisive adjectives (like confident, ambitious, daring and independent), even after statistically controlling for different measures of performance.

Perhaps not surprisingly, a follow-up experiment in the same paper found that these subtle differences in the language can result in female candidates being rated as less hireable than men.
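
To unpack what “statistically controlling” means here: the gender gap in adjective use is estimated while a productivity measure is held fixed. Below is a minimal Python sketch with entirely invented numbers – an illustration of the technique, not the 2009 study’s actual data or analysis.

```python
import numpy as np

# Invented data for illustration only (not the 2009 study's data):
# regress a communal-adjective count on a gender indicator plus a
# crude productivity measure, so gender is compared at equal output.
rng = np.random.default_rng(0)
n = 200
female = rng.integers(0, 2, size=n)   # 1 = letter describes a woman
pubs = rng.poisson(5, size=n)         # stand-in "performance" measure
# Simulate letters that use more communal adjectives for women
# even at identical publication counts.
communal = 1.0 + 0.8 * female + 0.1 * pubs + rng.normal(0, 0.5, size=n)

# Ordinary least squares: intercept, gender coefficient, pubs coefficient.
X = np.column_stack([np.ones(n), female, pubs])
coef, *_ = np.linalg.lstsq(X, communal, rcond=None)
print(f"Gender gap with publications held fixed: {coef[1]:.2f}")
```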

Unfortunately, even when the same language is used to describe candidates or when the key objective criteria of productivity are used, evaluators rated female candidates lower than male candidates.

Teaching evaluations, as our colleague already pointed out, are also known to be biased.

Historian Benjamin Schmidt’s recent text analysis of 14 million rankings on the website ratemyprofessor.com showed substantial differences in the words students used to describe men and women faculty in the same field: men were more likely to be described as “knowledgeable” and “brilliant,” women as “bossy” or, if they were lucky, “helpful.”
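
For a flavor of what that kind of text analysis involves, here is a toy Python sketch comparing how often a descriptor appears per 1,000 words of reviews for men versus women. The mini-corpus is hypothetical; Schmidt’s actual analysis of 14 million ratings involved far more careful text processing.

```python
from collections import Counter

# Hypothetical toy reviews, tagged by the professor's gender.
reviews = [
    ("M", "brilliant lecturer very knowledgeable"),
    ("F", "helpful in office hours but a bit bossy"),
    ("M", "knowledgeable and funny"),
    ("F", "helpful and kind explains things well"),
]

counts = {"M": Counter(), "F": Counter()}
totals = {"M": 0, "F": 0}
for gender, text in reviews:
    words = text.lower().split()
    counts[gender].update(words)
    totals[gender] += len(words)

# Relative rate of each descriptor per 1,000 words, by gender.
for word in ("knowledgeable", "brilliant", "helpful", "bossy"):
    for gender in ("M", "F"):
        rate = 1000 * counts[gender][word] / totals[gender]
        print(f"{word!r} ({gender}): {rate:.0f} per 1,000 words")
```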

If a female candidate makes it through the “on paper” process and is invited for an interview, the bias does not end.

What makes a ‘fit’?

Once a field of candidates is narrowed down from hundreds to a handful, very little distinguishes the top candidates, male or female. Final decisions often come down to intangible qualities and “fit.”

Although “fit” can mean many things to many people, it boils down to guesses about future trajectories, judgments about which hole in a department’s research profile or curriculum is most important to fill, and assessments about whether a person is going to be a colleague who contributes to mentoring, departmental service, and congeniality.

Research in social psychology and management shows that women are seen as competent or likable, but not both. The very traits that make them competent and successful (eg, being strong leaders) violate gender stereotypes about how women are “supposed to” act. Conversely, likable women are often perceived as being less likely to succeed in stereotypically male careers.

Despite all this information, FOX News isn’t alone in its view that women candidates for academic positions are not at a disadvantage.

In fact, one of the commentators in that segment cited a study from other researchers at Cornell that concluded the employment prospects for women seeking faculty positions in STEM disciplines have never been better.

The authors of that study go so far as to blame women’s underrepresentation in the sciences on “self-handicapping and opting out” of the hiring process.

Women doing better, but not better than men

The fact is that, at the current rate of increase in women in tenure-track STEM positions, it may be 2050 before women reach parity in hiring and, worse, 2115 before women constitute 50% of STEM faculty across all ranks.

This is supported by faculty data at Cornell itself. Between 2010 and 2014, there was only a modest 3%-4% increase in women tenure-line STEM faculty.
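
A rough back-of-envelope extrapolation shows how a few percentage points per half-decade stretches into decades. The starting share below is an assumption for illustration, not Cornell’s actual figure.

```python
# Linear extrapolation with illustrative inputs (assumed, not measured):
share_2014 = 22.0           # assumed % women in tenure-line STEM faculty
points_per_year = 3.5 / 4   # ~3-4 percentage points over 2010-2014

years_to_parity = (50.0 - share_2014) / points_per_year
print(f"Parity around {2014 + years_to_parity:.0f}")  # roughly mid-century
```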

In contrast to these data, the study cited by FOX News argued women are preferred to men for tenure-track STEM academic positions. The authors of that study used an audit study, a research method common in the social sciences when true randomized experiments are impossible to carry out in real-life contexts.

In an audit study, people who make the relevant decisions, such as faculty or human resource managers, are sent information about two or more fake applicants for a position. The information is equivalent, except for a hint about the question of interest: for example, one CV may have a male name at the top, the other CV a female name.
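
To make the design concrete, here is a minimal simulation sketch in Python, with an entirely hypothetical rating function and data rather than the Cornell study’s materials: two identical applications differ only in the name at the top, so any systematic gap in average ratings would indicate bias.

```python
import random

random.seed(42)

def rate_applicant(cv_quality, name):
    """Stand-in for an evaluator's 1-10 score. This simulated rater
    ignores the name entirely; a real audit study tests whether
    human raters actually do."""
    return cv_quality + random.gauss(0, 1)

cv_quality = 7.0  # the two fake applicants are equivalent by construction
male = [rate_applicant(cv_quality, "John") for _ in range(500)]
female = [rate_applicant(cv_quality, "Jennifer") for _ in range(500)]

gap = sum(male) / len(male) - sum(female) / len(female)
print(f"Mean rating gap (male - female): {gap:.2f}")  # ~0 if unbiased
```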

Although the audit study design can be very useful, in the case of STEM faculty hiring it oversimplifies the complex hiring process, which typically involves many people, many stages and many pieces of information.

The authors sent real professors equivalent descriptions of “powerhouse” hypothetical male or female candidates applying for a hypothetical faculty opening. Among the respondents, more said that they would hire the woman than the man. However, the study in question “controlled for,” and thus eliminated, many of the sources of bias, including letters of recommendation and teaching evaluations, that disadvantage women in the hiring process.

Furthermore, only one-third of faculty who were sent packets responded. Thus, the audit study captured only some of the voices that actually make hiring decisions. It is also hard to believe that participants didn’t guess that they were part of an audit study about hiring. Even if they didn’t know the exact research question, they may have been biased by the artificial research context.

The study by our Cornell colleagues has already generated a lot of conversation, on campus and off. The authors have entered this debate, which will undoubtedly continue. That’s how science works.

Contrary to what FOX News and some of our academic colleagues think, the battle against sexism in our fields has not been won, let alone reversed in favor of women. We must continue to educate hiring faculty, and even society at large, about conscious and unconscious bias.


Paulette Clancy, Hadas Kress-Gazit, Cynthia Leifer, Marjolein van der Meulen, Sharon Sassler, and Kim Weeden are professors at Cornell University. Hadas Kress-Gazit, Cynthia Leifer and Kim Weeden are also Public Voices Fellows at The Op-Ed Project.

Cynthia Leifer is Associate Professor of Immunology, College of Veterinary Medicine at Cornell University.
Hadas Kress-Gazit is Associate Professor of Mechanical and Aerospace Engineering at Cornell University.
Kim Weeden is Professor of Sociology at Cornell University.
Marjolein C H van der Meulen is Professor of Biomedical Engineering at Cornell University.
Paulette Clancy is Professor of Chemical Engineering at Cornell University.
Sharon Sassler is Professor of Policy Analysis and Management at Cornell University.

This article was originally published on The Conversation. Read the original article.

NATURE Commentary – Current #SCIENCE productivity metrics have negative social impact on scientists… and society.

Stephen Harvey highlights the negative effect of the current metrics for judging scientific productivity on scientists’ LIVES. In a correspondence in this week’s Nature, Harvey points out that current metrics favor scientists willing to work crazy hours – a choice that almost always comes with a negative social impact.

Any quantitative measure of productivity will reward people who choose to work long hours, build large research teams and minimize their commitments to teaching, review panels and university committees.

The use of such metrics can discourage people from sharing responsibilities and time with their partners or spouses, from investing in and enjoying their children’s lives, and from participating in their local communities. Researchers can feel forced to sacrifice ‘unproductive’ recreational pursuits such as holidays, sport, music, art and reading — activities that, by other measures, correlate highly with creativity and quality of life (see also J. Overbaugh Nature 477, 27–28; 2011).

We need a more nuanced approach to academic evaluations for hiring, promotion and tenure. The emphasis on quantitative measures of productivity places unfair burdens on scientists and their families, and it discourages some students from pursuing academic careers.

Take a Second and Ask Your Senators to Join the New NIH Caucus – FIGHTING FOR MORE NIH FUNDING! #science

Do you support biomedical research? Encourage your senators to join the new bipartisan NIH Caucus to fight for more NIH funding. GO HERE and take a minute to send an email!! Thanks Society for Neuroscience Advocacy Network for making it so easy!!

Ask Your Senators to Join the New NIH Caucus

Senators Lindsey Graham (R-SC) and Dick Durbin (D-IL) have established a caucus to fight for more NIH funding in the United States Senate. The NIH Caucus, whose members meet to discuss and pursue common legislative objectives, offers the opportunity to shed light on this important agency and its role in human health. The caucus is working to recruit new Senators to join and to build appreciation for the agency’s work.

Please use the form below to email your Senator now and encourage them to join the Caucus today.  It only takes a few minutes and your email will reinforce Sens. Graham and Durbin’s message that NIH funding should be a priority.

It takes less than a minute to send the pre-written email!!!

The WEAK case against double-blind peer review – highlighting why we need it!! #science @NatureNews

NATURE this week features a correspondence from Thomas E. DeCoursey arguing against double-blind peer review. In my humble opinion his reasoning is flawed… not unlike the current peer-review structure. To air out my own laundry: I support either a completely open or a double-blind system for manuscript peer review. All peer-review models have flaws, but these two seem infinitely better than the current system, in which authors are blinded to reviewers but not vice versa.

DeCoursey makes the somewhat legitimate point that reviewers may be able to ascertain who the authors of a manuscript are from its citations. However, the reviewer would always retain some element of doubt about the authors’ identities, and in many cases even an educated guess would be impossible.

Then DeCoursey reasons that reviewers need to know who the authors are in order to judge them on their past work…. or something…. wha???

To function in our increasingly competitive research culture, in which misconduct is on the rise, researchers need to be aware of which labs can be trusted and which have a record of irreproducibility. If a highly regarded lab and one with a questionable reputation each submit reports of similar investigations, a good reviewer would be extra vigilant in assessing the less-reliable lab’s study, even though the same evaluation standards would be upheld for both.

Yes, misconduct is on the rise, but this point seems wrong to me on every level. Reviewers should be vigilant about misconduct and scientific quality on every paper, regardless of which lab it comes from. Plenty of ‘good’ labs have had to retract papers for many reasons, and labs with a history of misconduct have reformed and redeemed themselves with quality papers. In fact, less vigilant reviewers may be to blame when flawed papers from highly regarded labs make it through the review process with glaring mistakes. Any reviewer who is more or less vigilant with a manuscript based on the authors’ names is not an impartial reviewer. PARTIALITY is bad when reviewing papers and grants… Ethics 101 – Conflict of Interest. For the same reason, most journals won’t allow scientists to review a manuscript from within their own institution.

There is a reason double-blind experimental design is the gold standard for experiments and human clinical trials. Just like a reviewer might think he knows who the authors are, a doctor might think he knows whether a patient is receiving placebo, but neither can ever really be sure. Why wouldn’t we want the same type of controls for peer review?

Double-blind peer review removes this crucial quality-control option, opening the way for mediocre and bad labs to clutter the literature with sub-standard science.

#FacePalm…

Maybe I’m jaded, but good reviewers should be screening out sub-standard science regardless of whether they know which lab a manuscript comes from. This closing statement makes it sound like DeCoursey thinks only the best labs, with the biggest names and the highest-impact publications, should be publishing… which I hope is not the case (maybe I read too much into it). If it is, that only argues more strongly for a double-blind peer-review system.

And in closing, a double-blind peer-review system would help avoid racist, sexist, or otherwise embarrassing situations like this one, where a reviewer commented that the two female authors should add a male author in order to strengthen the manuscript. Double-blind peer review erases sexism, racism, nationalism, institutionalism (?), and other discrimination from the peer-review process, which is definitely a huge plus!