Francisco Azuaje – not all scientific talks are good. #science needs to #keepitreal

Check out this post written by Francisco Azuaje for United Academics Magazine about the state of scientific talks. I think Azuaje brings up a lot of important points (More on Azuaje here). In my opinion there are still plenty of average or below-average talks… so perhaps we should keep it real more often. But do it in a sincere and supportive manner. I also love that the title of the post is a play on the Peter, Paul, and Mary song title 😉

Edward O. Wilson, one of the world’s most admired scientists, advised young researchers that “the greatest proportion of moral decisions you will be required to make is in your relationships with other scientists”. And indeed this is a vital challenge, not only because science is above all a social networking endeavour, but also because an awareness of this reality may regrettably lead us to over-emphasize the importance of looking or sounding good to others.

And perhaps it is such an anxiety to find a cosy place in the nest of consensus among “peers” that is creating so much delusion.

We need to re-discover average, good and could-be-better. We can do it sincerely, kindly and with rational purpose. Only this way will we be able to spot the truly great.

Neuroscience students and postdocs! Applications open for 2016 Gordon Seminar on Neurobiology #HongKong


Are you a student or postdoc in Neuroscience? Applications are now open for the 2016 Gordon Research Seminar on Molecular and Cellular Neurobiology – Disruptive New Technologies in Studying Neural Development, Plasticity, and Diseases. This Seminar is associated with the Gordon Research Conference on Molecular and Cellular Neurobiology, but is tailored for students and postdocs. Both meetings are in Hong Kong and feature terrific speakers. Check out the links for more information, registration, and applications.


#ASAPbio is currently discussing the future of #science publication! #scicomm #starstuddedcast

Just in case you weren’t aware, ASAPbio is currently underway and is likely going to influence the future of science publication!!

Accelerating Science and Publication in Biology (ASAPbio) will be an interactive meeting to discuss the use of preprints in biology held on February 16-17, 2016. The meeting will be streamed online, and we welcome participation from all interested parties through this website and on Twitter (#ASAPbio).

For background on the issues facing science publication, especially in biomedical science and biology, check out this primer from Nature last week (Does it take too long to publish research?). We here at CauseScience think that the answer to that title is a resounding YES!! One option that ASAPbio is considering is preprints – commonly used in other science fields. Nature this week featured another article related to ASAPbio about preprints (Biologists urged to hug a preprint).

For up-to-date info on the conference, check out the twitter hashtag #ASAPbio, which thus far has included tweets from well-known scientists, and fun pictures of former NIH directors and Nobel Laureates!! Or just visit the ASAPbio website!!

Definitely exciting to see people discussing the problems of science publication, but more importantly, discussing potential solutions!!

What midlife crisis? New study shows happiness increases into midlife! #science


A new study published in Developmental Psychology suggests that the idea of a midlife crisis may be a myth. The research shows that happiness actually increases from the 20s into the 40s when you follow individuals over time in a longitudinal study. Check out a news summary here, a video summary and interview here, and a fun clip from the Today Show here!! How does that compare and fit in with past research suggesting happiness declines at midlife?

Lead researcher and psychology professor Nancy Galambos says she found the opposite – that people in her study were happier in their early 40s than when they were in their late teens and early 20s.

“I think it’s because life is more difficult for younger people than for people in middle age,” Galambos explains.

She says some young adults are depressed, have trouble finding work and sorting out their lives.

“There’s a lot of uncertainty. But by middle age, a lot of people have worked that out and are quite satisfied through the earliest child-bearing years.”

Galambos says most studies looked at groups of people of various ages. She says the U of A study surveyed the same people – 1,500 of them – over many years, and is more reliable.
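To see why that design matters, here is a minimal sketch in Python contrasting the two approaches – all numbers below are invented for illustration; nothing comes from the actual study:

```python
# Hypothetical illustration of cross-sectional vs. longitudinal designs.
# All numbers are invented; none come from the Galambos study.

def mean(xs):
    return sum(xs) / len(xs)

# Cross-sectional: different people at each age, each measured once.
# Generational (cohort) differences can masquerade as age effects.
happiness_20s_group = [5.1, 5.4, 4.9]   # one group, in their 20s
happiness_40s_group = [6.2, 6.0, 6.5]   # a *different* group, in their 40s
cross_sectional_diff = mean(happiness_40s_group) - mean(happiness_20s_group)

# Longitudinal: the *same* people measured at both ages, so each person
# serves as their own control and within-person change is observable.
same_people = [(5.1, 6.0), (5.4, 6.4), (4.9, 5.8)]  # (in 20s, in 40s)
longitudinal_diff = mean([at_40s - at_20s for at_20s, at_40s in same_people])

print(f"cross-sectional age gap:   {cross_sectional_diff:+.2f}")
print(f"mean within-person change: {longitudinal_diff:+.2f}")
```

In the cross-sectional version, any gap between the groups could reflect the generations being compared rather than aging itself; following the same 1,500 people over the years removes that ambiguity, which is why the longitudinal design is considered more reliable.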

Congrats and shoutout to lead researcher Dr. Nancy Galambos for the nice media attention – she also happens to be my Aunt!!! Based on my own experience, I couldn’t agree more with this research – aside from being an underpaid researcher, my happiness definitely increased as I aged through my 20s.

Springer Publishing retracts 64 papers due to fake peer review – Investigation please… #science

The Washington Post reports on the announcement by Springer Publishing that it is retracting 64 papers due to problems with the peer review of the papers. Namely, the peer reviewers were fake, made up, or were the authors themselves.

In the latest episode of the fake peer review phenomenon, one of the world’s largest academic publishers, Springer, has retracted 64 articles from 10 of its journals after discovering that their reviews were linked to fake e-mail addresses.

The article includes some terrific commentary from Ivan Oransky of the Retraction Watch blog. Hopefully these mass retractions will make publishers pay more attention to their peer-review systems… which should be a priority for any academic/scientific publishing company. And can we see a list of who faked the emails? Can someone investigate whether the authors were involved and punish them? Because in the meantime, scientists and science as a whole are being dragged through the mud in full public view.

Reality Check – Academia still has an easy-to-find gender bias – @US_Conversation

Let’s face it: gender bias in academia is for real

Cynthia Leifer, Cornell University; Hadas Kress-Gazit, Cornell University; Kim Weeden, Cornell University; Marjolein C H van der Meulen, Cornell University; Paulette Clancy, Cornell University, and Sharon Sassler, Cornell University

Cornell Professor Sara Pritchard recently made the argument in The Conversation that female professors should receive bonus points on their student evaluations because of the severe negative bias students have toward their female professors.

Commentators on FOX News attempted to discredit her argument as “insane,” ridiculed the idea that gender plays a role in evaluations and repeatedly mentioned a lack of data to support her claims. But the reality is women faculty are at a disadvantage.

Unfortunately, as we well know, for many women in science, technology, engineering and mathematics (STEM), the path to academia ends long before they obtain a faculty position and are the “lucky” recipient of biased student evaluations.

We represent the success stories – women with careers at Ivy League universities. And yes, while we agree that there are more women in STEM fields today than ever before, bias still affects women in STEM, and not just in student evaluations.

Letters of recommendation and teaching evaluations

It starts right from the hiring process.

In the first stage of the hiring process, a candidate for an academic position must be selected from a pool of hundreds to give a job talk and on-site interview.

The decision of who to invite for a job talk is based on materials about the candidate including CVs, letters of recommendation from prominent figures in the field, samples of research, “buzz” about who’s a rising star and teaching evaluations.

A large body of research shows that many of these materials, and how they are evaluated by search committees, reflect bias in favor of male candidates.

Letters of recommendation, for example, tend to have a very different character for women than for men, and their tone and word choice can affect the impression that the hiring committee forms about candidates.

For example, a 2008 study of 886 letters of recommendation for faculty positions in chemistry showed that these letters tended to include descriptors of ability for male applicants, such as “standout,” but refer to the work ethic of the women, rather than their ability, by using words such as “grindstone.”

A similar study showed that female, but not male, students applying for a research grant had letters of recommendation emphasizing the wrong skills, such as the applicants’ ability to care for an elderly parent or to balance the demands of parenting and research.

Furthermore, a 2009 analysis of 194 applicants to research faculty positions in psychology found that letters of recommendation for women used more “communal” adjectives (like helpful, kind, warm and tactful), and letters of recommendation for men used more decisive adjectives (like confident, ambitious, daring and independent), even after statistically controlling for different measures of performance.

Perhaps not surprisingly, a follow-up experiment in the same paper found that these subtle differences in the language can result in female candidates being rated as less hireable than men.

Unfortunately, even when the same language is used to describe candidates or when the key objective criteria of productivity are used, evaluators rated female candidates lower than male candidates.

Teaching evaluations, as our colleague already pointed out, are also known to be biased.

Historian Benjamin Schmidt’s recent text analysis of 14 million rankings on the website ratemyprofessors.com showed substantial differences in the words students used to describe men and women faculty in the same field: men were more likely to be described as “knowledgeable” and “brilliant,” women as “bossy” or, if they were lucky, “helpful.”

If a female candidate makes it through the “on paper” process and is invited for an interview, the bias does not end.

What makes a ‘fit’?

Once a field of candidates is narrowed down from hundreds to a handful, very little distinguishes the top candidates, male or female. Final decisions often come down to intangible qualities and “fit.”

Although “fit” can mean many things to many people, it boils down to guesses about future trajectories, judgments about which hole in a department’s research profile or curriculum is most important to fill, and assessments about whether a person is going to be a colleague who contributes to mentoring, departmental service, and congeniality.

Research in social psychology and management shows that women are seen as competent or likable, but not both. The very traits that make them competent and successful (eg, being strong leaders) violate gender stereotypes about how women are “supposed to” act. Conversely, likable women are often perceived as being less likely to succeed in stereotypically male careers.

Despite all this information, FOX News isn’t alone in its view that women candidates for academic positions are not at a disadvantage.

In fact, one of the commentators in that segment cited a study from other researchers at Cornell that concluded the employment prospects for women seeking faculty positions in STEM disciplines have never been better.

The authors of that study go so far as to blame women’s underrepresentation in the sciences on “self-handicapping and opting out” of the hiring process.

Women doing better, but not better than men

The fact is at the current rate of increase in women faculty in tenure-track positions in STEM fields, it may be 2050 before women reach parity in hiring and, worse, 2115 before women constitute 50% of STEM faculty of all ranks.

This is supported by faculty data at Cornell itself. Between 2010 and 2014, there was only a modest 3%-4% increase in women tenure-line STEM faculty.

In contrast to these data, the study cited by FOX News argued women are preferred to men for tenure-track STEM academic positions. The authors of that study used a research method called an audit study, which is common in the social sciences when true randomized experiments are impossible to carry out in real-life contexts.

In an audit study, people who make the relevant decisions, such as faculty or human resource managers, are sent information about two or more fake applicants for a position. The information is equivalent, except for a hint about the question of interest: for example, one CV may have a male name at the top, the other CV a female name.
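As a toy version of that design – a hypothetical Python simulation, where the effect size, sample size, and rating scale are all invented rather than taken from the study under discussion – an audit study boils down to randomizing a single attribute on otherwise identical materials and comparing average outcomes:

```python
import random

# Toy simulation of an audit-study design: identical CVs, differing only in
# the name at the top (signalling gender). All numbers here are invented.
random.seed(0)

def simulated_rating(cv_gender):
    """Stand-in for one evaluator's 1-10 hireability score. The 0.5-point
    penalty for a female name is an arbitrary assumption, used only to show
    how a bias would surface in the aggregate comparison."""
    score = random.gauss(7.0, 1.0)
    if cv_gender == "female":
        score -= 0.5
    return score

ratings = {"male": [], "female": []}
for _ in range(500):
    # Each simulated evaluator receives the same CV with a random name.
    gender = random.choice(["male", "female"])
    ratings[gender].append(simulated_rating(gender))

for gender, scores in ratings.items():
    print(f"{gender}: mean rating {sum(scores) / len(scores):.2f} (n={len(scores)})")
```

Because the only systematic difference between the two stacks of applications is the name, any gap in average ratings can be attributed to that attribute – which is what makes the design appealing, and also what the caveats below are about.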

Although the audit study design can be very useful, in the case of STEM faculty hiring it oversimplifies the complex hiring process, which typically involves many people, many stages and many pieces of information.

The authors sent out equivalent descriptions of “powerhouse” hypothetical male or female candidates applying for a hypothetical faculty opening to real professors. Among the respondents, more said that they would hire the woman than the man. However, the study in question “controlled for,” and thus eliminated, many of the sources of bias, including letters of recommendation and teaching evaluations that disadvantage women in the hiring process.

Furthermore, only one-third of faculty who were sent packets responded. Thus, the audit study captured only some of the voices that actually make hiring decisions. It is also hard to believe that participants didn’t guess that they were part of an audit study about hiring. Even if they didn’t know the exact research question, they may have been biased by the artificial research context.

The study by our Cornell colleagues has already generated a lot of conversation, on campus and off. The authors have entered this debate, which will undoubtedly continue. That’s how science works.

Contrary to what FOX News and some of our academic colleagues think, the battle against sexism in our fields has not been won, let alone reversed in favor of women. We must continue to educate hiring faculty, and even the society at large, about conscious and unconscious bias.


Paulette Clancy, Hadas Kress-Gazit, Cynthia Leifer, Marjolein van der Meulen, Sharon Sassler, and Kim Weeden are professors at Cornell University. Hadas Kress-Gazit, Cynthia Leifer and Kim Weeden are also Public Voices Fellows at The Op-Ed Project.


Cynthia Leifer is Associate Professor of Immunology, College of Veterinary Medicine at Cornell University.
Hadas Kress-Gazit is Associate Professor of Mechanical and Aerospace Engineering at Cornell University.
Kim Weeden is Professor of Sociology at Cornell University.
Marjolein C H van der Meulen is Professor of Biomedical Engineering at Cornell University.
Paulette Clancy is Professor of Chemical Engineering at Cornell University.
Sharon Sassler is Professor of Policy Analysis and Management at Cornell University.

This article was originally published on The Conversation. Read the original article.

#Science Quotable: Marcia McNutt – Training science communicators starts with a poster! #SciComm

Scientists frequently lament the scarcity of effective scientific communicators—those who can explain complex concepts to the public, present scientifically sound alternatives to policy-makers, and make cogent arguments for the value of science to society. A few stellar programs are designed to select and train elite articulators, but some simple steps can improve the communication skills of all scientists. Most researchers learn how to talk about science at meetings. If scientists cannot explain their work clearly and succinctly to their peers, it is highly unlikely that they can explain it effectively to nonspecialists.

Training the next generation of scientists to communicate well should be a priority.

– Marcia McNutt, Editor-in-Chief, Science journals – Quoted from: “It Starts With A Poster”

How and why do we need to judge research? Derek Smith explains @ConversationUK

Explainer: how and why is research assessed?

By Derek R. Smith, University of Newcastle

Governments and taxpayers deserve to know that their money is being spent on something worthwhile to society. Individuals and groups who are making the greatest contribution to science and to the community deserve to be recognised. For these reasons, all research has to be assessed.

Judging the importance of research is often done by looking at the number of citations a piece of research receives after it has been published.

Let’s say Researcher A figures out something important (such as how to cure a disease). He or she then publishes this information in a scientific journal, which Researcher B reads. Researcher B then does their own experiments and writes up the results in a scientific journal, which refers to the original work of Researcher A. Researcher B has now cited Researcher A.

Thousands of experiments are conducted around the world each year, but not all of the results are useful. In fact, a lot of scientific research that governments pay for is often ignored after it’s published. For example, of the 38 million scientific articles published between 1900 and 2005, half were not cited at all.

To ensure the research they are paying for is of use, governments need a way to decide which researchers and topics they should continue to support. Any system should be fair and, ideally, all researchers should be scored using the same measure.

This is why the field of bibliometrics has become so important in recent years. Bibliometric analysis helps governments to number and rank researchers, making them easier to compare.

Let’s say the disease that Researcher A studies is pretty common, such as cancer, which means that many people are looking at ways to cure it. In the mix now there would be Researchers C, D and E, all publishing their own work on cancer. Governments take notice if, for example, ten people cite the work of Researcher A and only two cite the work of Researcher C.

If everyone in the world who works in the same field as Researcher A gets their research cited on average (say) twice each time they publish, then the international citation benchmark for that topic (in bibliometrics) would be two. The work of Researcher A, with his or her citation rate of ten (five times higher than the world average), is now going to get noticed.
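The arithmetic behind such a benchmark is simple enough to write out; here is a minimal sketch in Python, with invented citation counts chosen to mirror the running example:

```python
# Minimal sketch of a field-normalized citation rate, mirroring the example
# above. All citation counts are invented; this is not real bibliometric data.

def citations_per_paper(citation_counts):
    """Average number of citations per paper."""
    return sum(citation_counts) / len(citation_counts)

field_counts = [2, 1, 3, 2, 2, 2, 1, 3, 2, 2]    # papers across the whole field
researcher_a_counts = [10, 12, 8]                # Researcher A's papers

field_benchmark = citations_per_paper(field_counts)           # 2.0
researcher_a_rate = citations_per_paper(researcher_a_counts)  # 10.0

relative_rate = researcher_a_rate / field_benchmark
print(f"field benchmark: {field_benchmark:.1f} citations per paper")
print(f"Researcher A:    {researcher_a_rate:.1f} citations per paper "
      f"({relative_rate:.0f}x the world average)")
```

Real bibliometric systems normalize further – by publication year and document type, for example – but the core comparison is exactly this ratio of a researcher’s citation rate to the field’s average.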

Excellence in Research for Australia

Bibliometric analysis and citation benchmarks form a key part of how research is assessed in Australia. The Excellence in Research for Australia (ERA) process evaluates the quality of research being undertaken at Australian universities against national and international benchmarks. It is administered by the Australian Research Council (ARC) and helps the government decide what research is important and what should continue to receive support.

Although these are not the only components assessed in the ERA process, bibliometric data and citation analysis are still a big part of the performance scores that universities and institutions receive.

Many other countries apply formal research assessment systems to universities and have done so for many years. The United Kingdom, for example, operated a process known as the Research Assessment Exercise between 1986 and 2008. This was superseded by the Research Excellence Framework in 2014.

A bibliometrics-based performance model has also been employed in Norway since 2002. This model was first used to influence budget allocations in 2006, based on scientific publications from the previous year.

Although many articles don’t end up getting cited, this doesn’t always mean the research itself didn’t matter. Take, for example, the polio vaccine developed by Albert Sabin last century, which saves over 300,000 lives around the world each year.

Sabin and others published the main findings in 1960 in what has now become one of the most important scientific articles of all time. By the late 1980s, however, Sabin’s article had not even been cited 100 times.

On the other hand, we have Oliver Lowry, who in 1951 published an article describing a new method for measuring the amount of protein in solutions. This has become the most highly cited article of all time (over 300,000 citations and counting). Even Lowry was surprised by its “success”, pointing out that he wasn’t really a genius and that this study was by no means his best work.

The history of research assessment

While some may regard the assessment of research as a modern phenomenon inspired by a new generation of faceless bean-counters, the concept has been around for centuries.

Sir Francis Galton, a celebrated geneticist and statistician, was probably the first well-known person to examine the performance of individual scientists, publishing a landmark book, English Men of Science, in the 1870s.

Galton’s work evidently inspired others, with an American book, American Men of Science, appearing in the early 1900s.

Productivity rates for scientists and academics (precursors to today’s performance benchmarks and KPIs) have also existed in one form or another for many years. One of the first performance “benchmarks” appeared in a 1940s book, The Academic Man, which described the output of American academics.

This book is probably most famous for coining the phrase “publish or perish” – the belief that an academic’s fate is doomed if they don’t get their research published. It’s a fate that bibliometric analysis and other citation benchmarks now reinforce.


This article was originally published on The Conversation.
Read the original article.

This year’s films and Academy Awards are full of science!! Let’s not stop there!! #ScienceOscars #ElsevierOscars #IfScientistsWere

[tweet https://twitter.com/pinar_gurel/status/568126209311285248]

The Academy Awards (aka The Oscars) are this Sunday!! This year’s Oscar nominations include a plethora of science-themed films – “The Imitation Game,” “The Theory of Everything,” and “Still Alice” all showcase scientists and research!! In light of the Oscars, and to have some science fun, we are starting another science-based twitter hashtag (like the previous #IfScientistsWere)!

#ScienceOscars – making movie titles, movie taglines, and other Oscar nominations science-related (our science take on the hilarious #MakeAFilmUncomfortable and #ReplaceAMovieTitleWithGoat)! Feel free to tweet using the hashtag #ScienceOscars, write your idea in the comments below, or write it on our Facebook wall!! For ideas, a good starting place is this list of all nominees for Best Picture!

[tweet https://twitter.com/CauseScience1/status/568446352537063424] [tweet https://twitter.com/CauseScience1/status/568433938567446528] [tweet https://twitter.com/CauseScience1/status/568135916646227968]

For those people who might want to have fun and also promote open and equal access to science, feel free to use #ElsevierOscars (an Oscars-themed take on #ElsevierValentines).

[tweet https://twitter.com/CauseScience1/status/568479973373313026] [tweet https://twitter.com/CauseScience1/status/568479760789213184]

Working on a PhD? Need a good laugh? Check out #FailAPhDInThreeWords

If you haven’t already, I recommend checking out #FailAPhDInThreeWords! Not only are many of the tweets hilarious, they also provide an interesting commentary on PhDs and science. Some of the clear trends that emerge for failing a PhD include referencing God/religion, using inappropriate citations (Wikipedia, Twitter, etc.), formatting issues (margins, fonts, typos), plagiarizing, photoshopping, or going against basic tenets of science (n of 1, correlation vs. causation). Below are some of my favorites!

[tweet https://twitter.com/RDscience/status/566270916032999424] [tweet https://twitter.com/guyren58/status/568083948074618880] [tweet https://twitter.com/AboudDandachi/status/568088995739111424] [tweet https://twitter.com/liberapertus/status/566277423504048129] [tweet https://twitter.com/superhelical/status/567762876539351040] [tweet https://twitter.com/PhuzzieSlippers/status/567778280565379073] [tweet https://twitter.com/jl_crim/status/567804362790289409] [tweet https://twitter.com/JohnNCoupland/status/567828895963095041] [tweet https://twitter.com/CauseScience1/status/568429584745701376] [tweet https://twitter.com/Kevlar007/status/568407347930001408]