Stories of adverse events from vaccines matter – explained @ConversationUS

Stories of vaccine-related harms are influential, even when people don’t believe them

Laura Scherer, University of Missouri-Columbia; Brian Zikmund-Fisher, University of Michigan; Niraj Patel, University of Missouri-Columbia, and Victoria Shaffer, University of Missouri-Columbia

In 2013 a boy who was given the HPV vaccine died almost two months later.

Two quick questions: First, does this worry you? And second, do you believe that the vaccine caused the boy’s death?

This is a real case reported in the Vaccine Adverse Event Reporting System (VAERS). VAERS is monitored by health experts at the Centers for Disease Control and Prevention and Food and Drug Administration to detect very rare or emergent harms that may be caused by vaccines. The vast majority of adverse events reported in VAERS are mild (such as fever), but a few are serious, like death and permanent disabilities. Staff follow up on certain cases to better understand what happened.

A growing number of parents are refusing to vaccinate their children, and one reason they often state is that they do not trust that doctors and government agencies sufficiently research the potential harms of vaccines. Given that, we wanted to find out whether telling people about VAERS and the information it gathers could influence their beliefs about vaccine safety.

Vaccine refusal and the importance of trust

It’s important to stress that just because a case like the one mentioned above is reported to VAERS doesn’t mean that the vaccine caused the problem. That’s because VAERS is an open-access reporting system.

Health care providers are required to report certain adverse events, but they are not the only ones who can contribute to the database. Anyone can make a report in VAERS for any reason. Similarly, anyone can access VAERS reports and data. In fact, advocates both for and against vaccines refer to VAERS data as evidence of either the existence of harms or the rarity of harms.

This open-access feature makes VAERS a potentially rich source of information about possible vaccine-related harms. It also means, however, that the events reported in VAERS often turn out to have nothing to do with a vaccine.

Take, for example, the boy who died less than two months after receiving the HPV vaccine. Here’s what the full VAERS report says: “Sudden death. He was perfectly healthy. The vaccination is the only thing I can think of that would have caused this. Everything else in his life was normal, the same.”

The fact that there were no reported problems for almost two months between the vaccine and the child’s death might make you, like us, skeptical that the vaccine was the cause. Yet, it is important that the death was reported so that it can be followed up.

Being transparent about risks is critical to building trust. In fact, that’s part of the reason that VAERS data is available to everyone.

Does VAERS make people trust vaccine safety?

It seems plausible that describing VAERS in depth could build trust. Doing so would demonstrate that every effort is being made to collect information about potential vaccine harms, and that even with such a comprehensive effort very few serious events are reported. Further, transparency would also show that these few serious events are not necessarily caused by the vaccine, and this information is available for anyone to view and evaluate.

We decided to test this idea in a recent internet survey. We surveyed over 1,200 people, who were divided into three groups.

One group received the standard CDC Vaccine Information Statement for the HPV vaccine. We chose the HPV vaccine because this vaccine is particularly underutilized. The second group was given detailed information about VAERS – what it is, what it is for and what it contains – as well as the number of serious adverse event reports received about HPV. To be specific, this group was told that there were seven deaths and 24 permanent disabilities reported for the HPV vaccine in 2013 out of a total of approximately 10 million vaccine doses given that year. A third group received all of that information and then also read the actual adverse event reports in detail. We hoped that reading these reports would show this group that not all of these deaths and permanent disabilities were caused by the vaccine.
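To put those numbers in context, here is a quick back-of-the-envelope calculation (ours, for illustration; it was not part of the survey materials) expressing the reported events as rates per million doses:

    # Illustrative arithmetic only: these are reports to VAERS, not confirmed
    # vaccine-caused harms.
    doses_2013 = 10_000_000          # approx. HPV vaccine doses given in 2013
    reported = {"deaths": 7, "permanent disabilities": 24}

    for label, count in reported.items():
        per_million = count / doses_2013 * 1_000_000
        print(f"{label}: {count} reports, or {per_million:.1f} per million doses")

Framed this way, the serious reports amount to a few events per million doses given.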

We found that telling participants about VAERS, without having them read the actual reports, improved vaccine acceptance only very slightly. Even worse, when participants read the detailed reports, both vaccine acceptance and trust in the CDC’s conclusion that vaccines are safe declined significantly.

What we found next surprised us: The vast majority of our survey respondents, the same ones who were less accepting of vaccines and less trusting of the CDC, said that they believed the vaccine caused few or none of the reported deaths and disabilities. This means that the individual stories of perceived vaccine harms were highly influential, even when people didn’t believe they were true.

We are influenced by information even when we don’t believe it

Think back to your reaction to reading about the tragic death we described earlier. Our data suggest that just learning about this death may have caused you to feel more negatively toward the HPV vaccine, even if you believed that the vaccine did not cause the death.

While we can’t say that everyone reacted to the stories the same way or to the same degree, it seems clear that at least some people didn’t believe that the vaccine caused the reported harms, but they were nonetheless negatively influenced by those reports.

Systems like VAERS are essential for public health, providing an opportunity to learn about and investigate every possible case of potential harm caused by vaccines. But the power and emotion evoked by the stories of VAERS reports may influence us and undermine trust in vaccines, no matter what our rational mind might think.

The Conversation

Laura Scherer, Assistant Professor, Psychology, University of Missouri-Columbia; Brian Zikmund-Fisher, Associate Professor of Health Behavior and Health Education, University of Michigan; Niraj Patel, Graduate Student, University of Missouri-Columbia, and Victoria Shaffer, Associate Professor of Psychology, University of Missouri-Columbia

This article was originally published on The Conversation. Read the original article.

Science communication training gets strategic @ConversationUS #scicomm

Science communication training should be about more than just how to transmit knowledge

John Besley, Michigan State University and Anthony Dudo, University of Texas at Austin

For some scientists, communicating effectively with the public seems to come naturally. Astrophysicist Neil deGrasse Tyson currently has more than five million Twitter followers. Astronomer Carl Sagan enraptured audiences for decades as a ubiquitous cosmic sage on American televisions. And Stephen Jay Gould’s public visibility was such that he voiced an animated version of himself on “The Simpsons.” But, for most scientists, outward-facing communication is not something they’ve typically thought about much… let alone sought to cultivate.

But times change. Leaders in the scientific community are increasingly calling on their scientist colleagues to meaningfully engage with their fellow citizens. The hope is that such interactions can improve the science-society relationship at a time when we are confronting a growing list of high-stakes, high-controversy issues including climate change, synthetic biology and epigenetics.

The gauntlet has been thrown down, but can scientists pick it up?

The answer to that question largely depends on one key group: professional science communication trainers who offer formalized guidance designed to improve scientists’ public communication efforts. There’s a wellspring of science communication programs, among them the Alan Alda Center for Communicating Science, the Center for Public Engagement with Science & Technology at the American Association for the Advancement of Science and the Communication Partnership for Science and the Sea. Programs like these typically provide communication courses of a half-day up to a week or more. Some organizations also employ in-house personnel to train their scientists to communicate.

Given the important role these training programs now play in the public communication of science, we sought to examine their work. Broadly, we were looking for commonalities in their efforts and experiences, and we wanted to spot possible opportunities for their growth. We were especially interested in something we view as being critical to effective public engagement: helping scientists identify and try to achieve specific communication goals.

What trainers focus on

In late 2014, we conducted a set of 24 interviews with science communication trainers from across the United States. Ours is the first published study examining this important community. We found that much of the training they provided focused on helping scientists share their research in clear ways that would increase knowledge.

This is consistent with what scientists have told us in surveys: their main objective in communicating their work is to inform the public about science and correct misinformation.

Sharing knowledge will always be a central component of science communication – knowledge generation is, after all, the main enterprise of science. And relaying knowledge makes up the bulk of the science journalism the public encounters through the media – stories about new discoveries and the latest research.

But there are other reasons scientists might want to communicate with the general public. We call these “nonknowledge objectives” – things like fostering excitement about science, building trust in the scientific community, or reframing how people think about certain issues. These objectives are different from a biologist wanting to share with a listener the details on her research on bird migration, for instance. They’re more about people, and forging relationships.

We’ve found that these sorts of nonknowledge goals have a relatively lower priority for scientists compared to the desire to get information across about their direct scientific work. Not surprisingly, only a few of the trainers we interviewed indicated that, at that time, they were explicitly trying to help scientists achieve these other kinds of nonknowledge objectives.

Nevertheless, the trainers told us they believed many of the scientists they train want to communicate to help raise public support for science in general and because they think their research will help people see the value in specific policy options.

Our work suggests that scientists and the trainers they work with often focus primarily on the successful transmission of science information, assuming those other objectives will fall into place on their own. But there’s a problem with that logic. Decades of science communication research – a research area now commonly referred to as the science of science communication – show that fostering positive views about science requires more than just trying to correct deficits in public knowledge.

Matching the training to the ultimate goal

It may be useful to consider alternatives (or additions) to the character of the current training landscape. The emphasis now is on teaching scientists key journalism skills to help them share information more effectively – by, for instance, distilling jargon-free messages. Training typically places limited emphasis on whether sharing that information will have the desired effect.

Instead, given scientists’ goals, training could help scientists avoid doing things that have little potential for impact or, worse, actually diminish people’s views of science.

Extensive research shows that we tend to trust people we judge to be warm and caring because they seem less likely to want to do us harm. With that in mind, more training could explicitly help scientists avoid doing the types of things that might convey a cold demeanor. For example, no matter how accurate a scientist’s argument may be, if communicated rudely it will likely miss its mark. Worse still, it may generate negative feelings that a recipient could then generalize more broadly to the scientific community.

Related research on what people perceive to be fair or not when it comes to making important decisions could also inform communication training. Studies emphasize the potential strategic value of making sure people feel like they’re being listened to and treated with respect. Imagine, for example, how you’d feel if a doctor didn’t give you a genuine chance to share your personal experiences with an ailment.

Similarly, given what we know about the value of framing, perhaps more training should help scientists find ways to talk about issues that are consistent with the scientists’ work but that are also consistent with the priorities or worldviews of the people with whom they are speaking. For example, given the value that people put on their families’ health, it may make sense to frame climate change in terms of health issues.

Challenges to getting more strategic

There are at least two challenges associated with suggesting a more strategic approach to science communication.

First, it is easier to communicate in ways that come naturally and simply hope for the best.

Second, there is a danger that some people will misconstrue being strategic as being dishonest. On the contrary, effective strategic communication rests on authenticity, just like science. Science communicators should never do things like pretend to be warm, fake listening or frame things in ways they don’t think are appropriate.

The point is that by thinking strategically, we can begin to recognize that our communication choices – whether it’s leaving time after a talk for real discussion, calling those with whom we disagree ugly names or framing every disagreement as a war – have consequences.

It also seems clear that science communicators and communication trainers – who, in our experience, provide outstanding training in key skills – are already focusing on certain tactics that affect things like trust without making the explicit connection. For example, just using accessible language and speaking without jargon might communicate that scientists care enough about those with whom they are speaking to accommodate them. Telling stories, similarly, isn’t just a better way to convey information; it’s a social act with social consequences.

Effective public engagement involves high-quality interactions between people. This means that many of the actual effects are likely to be due to the quality of the relationships between participants, including scientists and nonscientists. Content matters, of course, but not unless a healthy dynamic for information exchange is established.

The science communication training community is already doing great work. Ultimately, as trainers and scientists get more strategic in their science communication, they will be better able to justify the time and resources that effective communication takes, and to forgo activities that seem unlikely to have an impact.

The Conversation

John Besley, Associate Professor of Advertising and Public Relations, Michigan State University and Anthony Dudo, Assistant Professor of Advertising and Public Relations, University of Texas at Austin

This article was originally published on The Conversation. Read the original article.

Why the Paris climate deal signing ceremony matters! @FR_Conversation

Paris climate deal signing ceremony: what it means and why it matters

Damon Jones, University of Cologne and Bill Hare, Potsdam Institute for Climate Impact Research

The world breathed a collective sigh of relief in the last days of 2015, when countries came together to adopt the historic Paris agreement on climate change.

The international treaty was a much-needed victory for multilateralism, and surprised many with its more-ambitious-than-expected agreement to pursue efforts to limit global warming to 1.5°C.

The next step in bringing the agreement into effect happens in New York on Friday 22 April, with leaders and dignitaries from more than 150 countries attending a high-level ceremony at the United Nations to officially sign it.

The New York event will be an important barometer of political momentum leading into the implementation phase – one that requires domestic climate policies to be drawn up, as well as further international negotiations.

It comes a week after scientists took a significant step to assist with the process. On April 13 in Nairobi, the Intergovernmental Panel on Climate Change agreed to prepare a special report on the impacts of global warming of 1.5°C above pre-industrial levels. This will provide scientific guidance on the level of ambition and action needed to implement the Paris agreement.

Why the ceremony?

The signing ceremony in New York sets in motion the formal, legal processes required for the Paris Agreement to “enter into force”, so that it can become legally binding under international law.

Although the agreement was adopted on December 12 2015 in Paris, it has not yet entered into force. This will happen automatically 30 days after the agreement has been ratified both by at least 55 countries and by countries representing at least 55% of global greenhouse gas emissions. Both conditions of this double threshold have to be met before the agreement is legally binding.
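The rule amounts to a simple double check. As a rough sketch (ours, not an official UNFCCC tool, and using made-up inputs), it can be written as:

    from datetime import date, timedelta

    def entry_into_force(ratifications, second_condition_met_on):
        """Sketch of the Paris Agreement's double threshold.

        ratifications: dict of country -> share of global emissions (%)
        second_condition_met_on: date the later of the two conditions was satisfied
        Returns the entry-into-force date, or None if either condition is unmet.
        """
        enough_countries = len(ratifications) >= 55
        enough_emissions = sum(ratifications.values()) >= 55.0
        if enough_countries and enough_emissions:
            return second_condition_met_on + timedelta(days=30)
        return None

    # Example with made-up numbers: 60 countries holding 58% of emissions in total.
    sample = {f"country_{i}": 58 / 60 for i in range(60)}
    print(entry_into_force(sample, date(2016, 10, 5)))   # -> 2016-11-04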

So, contrary to some concerns after Paris, the world does not have to wait until 2020 for the agreement to enter into force. It could happen as early as this year.

Signing vs ratification

When a country signs the agreement, it is obliged to refrain from acts that would defeat its object and purpose. The next step, ratification, signifies intent to be legally bound by the terms of the treaty.

The decision on timing for ratification by each country will largely be determined by domestic political circumstances and legislative requirements for international agreements.

Those countries that have already completed their domestic processes for international agreements can choose to sign and ratify on the same day in New York.

Who is going to sign and ratify in New York?

It is perhaps no surprise that the countries that are particularly vulnerable to the impacts of climate change, and that championed the need for high ambition in Paris, will be first out of the gate to ratify in New York.

Thirteen Small Island Developing States (SIDS) from the Caribbean, Indian Ocean and Pacific have signalled their intent to sign and ratify in New York: Barbados, Belize, Fiji, Grenada, Maldives, Marshall Islands, Nauru, Palau, Samoa, Saint Lucia, St Vincent and the Grenadines, the Seychelles and Tuvalu.

While these countries make up about a quarter of the 55 countries needed, they only account for 0.02% of the emissions that count towards the required 55% global emissions total.

Bringing the big emitters on board

China and the United States have recently jointly announced their intentions to sign in New York and to take the necessary domestic steps to formally join the agreement by ratifying it later this year. Given that they make up nearly 40% of the agreed set of global emissions for entry into force, that will go a significant way to meeting the 55% threshold.

We can expect more announcements of intended ratification schedules on 22 April. Canada (1.95%) has signalled its intent to ratify this year and there are early signs for many others. Unfortunately the European Union, long a leader on climate change, seems unlikely to be amongst the first movers due to internal political difficulties, including the intransigence of the Polish government.

The double threshold means that even if all of the SIDS and Least Developed Countries (LDCs) ratified, accounting for more than 75 countries but only around 4% of global emissions, the agreement would not enter into force until countries with a further 51% of global emissions also ratified.

Consequently, many more of the large emitters will need to ratify to ensure that the Paris agreement enters into force. This was a key design feature – it means a small number of major emitters cannot force a binding agreement on the rest of the world, and a large number of smaller countries cannot force a binding agreement on the major emitters.

The 55% threshold was set in order to ensure that it would be hard for a blocking coalition to form – a group of countries whose failure to ratify could ensure that an emissions threshold could not be met in practice. A number much above 60% of global emissions could indeed have led to such a situation.

The countries that appear likely to ratify this year, including China, the USA, Canada, many SIDS and LDCs, members of the Climate Vulnerable Forum along with several Latin American and African countries – around 90 in all – still fall about 5-6% short of the 55% emissions threshold.

It will take one more large emitter, such as the Russian Federation (7.53%), or two, such as India (4.10%) and Japan (3.79%), to get the agreement over the line. The intentions of these countries are not yet known.
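Plugging the approximate emission shares cited in this article into that check shows why one or two more large emitters would tip the balance. The 49.5% starting point below is illustrative, chosen to match the “5-6% short” estimate; with roughly 90 ratifiers, the 55-country condition is already comfortably met:

    # Illustrative arithmetic: ~90 likely ratifiers assumed to hold ~49.5% of emissions.
    likely_share = 49.5
    for extra, share in [("Russian Federation", 7.53),
                         ("India + Japan", 4.10 + 3.79)]:
        total = likely_share + share
        status = "over" if total >= 55 else "still under"
        print(f"{extra}: {total:.2f}% -> {status} the 55% emissions threshold")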

Why is early action important?

The Paris agreement may be ambitious, but it will only be as good as its implementation. That will depend on the political momentum gained in Paris being maintained. Early entry into force for the treaty would be a powerful signal in this direction.

We know from the Climate Action Tracker analyses that the present commitments are far from adequate. If all countries fully implement the national emission reduction targets brought to the climate negotiations last year, we are still on track for temperature increases of around 2.7°C. Worse, we also know that current policies adopted by countries are insufficient to meet these targets and are heading to around 3.6°C of global warming.

With the average global temperature rising more than 1°C above pre-industrial levels for the first time last year, it is clear that action to reduce emissions has never been more urgent.

We are already seeing more evidence this year: monthly global average temperatures for February and March 2016 that far exceeded 1°C above pre-industrial levels, record coral reef bleaching, heatwaves, and unprecedented early melting of the Greenland ice sheet this northern spring.

Early entry into force will unlock the legally binding rights and obligations for parties to the agreement. These go beyond just obligations aimed at delivering emissions reductions through countries’ Nationally Determined Contributions to the critical issues of, for example, adaptation, climate finance, loss and damage, and transparency in reporting on and reviewing action and support.

The events in New York this week symbolise the collective realisation that rapid, transformative action is required to decarbonise the global economy by 2050.

Climate science tells us that action must increase significantly within the next decade if we are to rein in the devastating impacts of climate change, which the most vulnerable countries are already acutely experiencing.

For an up-to-date picture of which countries have ratified the Paris Agreement, see our Ratification Tracker.

The Conversation

Damon Jones, Lecturer, University of Cologne and Bill Hare, Visiting scientist, Potsdam Institute for Climate Impact Research

This article was originally published on The Conversation. Read the original article.

#science attraction Body Worlds reconsidered by Samuel Redman – @ConversationUS

Reconsidering Body Worlds: why do we still flock to exhibits of dead human beings?

Samuel Redman, University of Massachusetts Amherst

When Dr. Gunther von Hagens started using “plastination” in the 1970s to preserve human bodies, he likely did not anticipate the wild success of the Body Worlds exhibitions that stem from his creation. The process replaces natural bodily fluids with polymers that harden to create odorless, dry “specimens.” Body Worlds has since drawn millions of visitors and spawned six spin-off exhibitions, including one devoted to vital organs and another featuring plastinated animal remains.

Frozen in place, plastinated remains in the exhibits are rigidly posed – both for dramatic effect and to illustrate specific bodily features. Over 40 million museum visitors have encountered these exhibitions in more than 100 different locations worldwide. Even copycat exhibits have taken off, eschewing accredited museums in favor of places like the Luxor Hotel and Casino in Las Vegas.

But Body Worlds – though seemingly an entirely modern phenomenon only made possible with futuristic plastic technology – emerges from a long tradition of popular exhibits featuring actual and simulated human remains. What continues to draw so many people to human body exhibitions – even today?

Early exhibits of human bodies

For nearly as long as physicians and anatomists have attempted to understand the body, they have attempted to preserve, illustrate and present it. Cabinets of curiosities displayed in the homes of European nobility in the 16th century frequently included human skulls. As civic museums emerged in cities throughout Europe and the United States, some began to formally organize collections around anatomical questions.

The Hyrtl Skull Collection at the Mütter Museum continues to be displayed together. Recently, the museum organized a ‘Save Our Skulls’ fundraising campaign in order to better conserve the collection.
George Widman, 2009, for the Mütter Museum of The College of Physicians of Philadelphia

Medical museums were often more interested in pathologies – abnormal medical conditions or disease. They also collected thousands of skulls and bones, attempting to address basic questions about race. Early on, medical museums were generally closed to the public, instead focusing on training medical students through hands-on experience with specimens. Almost reluctantly, they began opening their doors to the public. Once they did, they were surprised by the relatively large number of visitors curiously entering their galleries.

Medical museums were not the sole institutions housing and displaying remains, however. Collections aimed more squarely at the general public often included such items as well. The Army Medical Museum, for instance, located along the National Mall, exhibited human remains between 1887 and the 1960s (living on as the National Museum of Health and Medicine). The Smithsonian’s National Museum of Natural History built its own large body collections, especially during the early 20th century. The American Museum of Natural History, just steps from Central Park in New York City, also displayed human remains in popular exhibits.

Notable exhibits featuring human remains or innovative reproductions were also wildly popular at World’s Fairs, including Chicago (1893), St. Louis (1904) and San Diego (1915), among many others. People crowded galleries even as these exhibits proved vexing to critics.

Troubling transition from person to specimen

In the quest to rapidly build collections, remains were sometimes collected under highly questionable ethical circumstances. Bodies were removed from graves and sold, gathered from hospitals near exhibitions reminiscent of human zoos, and rounded up haphazardly from battlefields.

In the United States, the human body in the late 19th and early 20th century was racialized in almost every respect imaginable. Many people became obsessed with the supposed differentiations between Native Americans, African Americans and European Americans – occasionally stretching claims into rigid hierarchies of humankind. The exhibitions dehumanized bodies by casting them as observable data points rather than actual human beings.

Some exhibits blended medical science and racial science in a bizarrely inaccurate manner. Medical doctors supported eugenics groups organizing temporary exhibits comparing hair and skulls from different apes and nonwhite humans, underscoring popular notions about the supposedly primitive nature of those outside of Western civilization. To our modern eyes, these attempts are obviously stained by scientific racism.

Eventually, the racialized science that had led to collecting thousands of skulls and other bones from people around the world came under increased scrutiny. The comparative study of race – dominating many early displays of human remains – was largely discredited.

Indigenous activists, tired of seeing their ancestors viewed as “specimens,” also began pushing back against their display. Some exhibit planners began seeking other methods – including more sophisticated models – and exhibiting actual human remains became less prominent.

By midcentury it was less common to display actual human remains in museum exhibits. The occasional Egyptian mummy notwithstanding, museum remains were largely relegated behind the scenes to bone rooms.

Specimen exhibits fade, temporarily

With largely unfounded concern, museum administrators, curators and other critics worried audiences would be disgusted when shown vivid details about human anatomy. Gradually, as medical illustrations became better and easier to reproduce in textbooks, the need for demonstrations with real “specimens” seemed to dissipate.

First displayed at a World’s Fair in Chicago in 1933, see-through models of the human body became a favorite attraction at medical exhibits in years to come. Models replicated actual human body parts rather than displaying them in preserved form. Exhibits were sometimes animated with light shows and synchronized lectures.

Later, in the 1960s, new transparent models were created for popular education. Eventually, some of the many transparent medical models wound up in science museums. Although popular, it remains unclear how effective the models were in either teaching visitors or inspiring them to learn more about the human body.

Over the years, methods for teaching anatomy shifted. Many medical museums even closed permanently. Those that did not destroy their collections donated or sold them. Human body exhibits generally faded from public consciousness.

But after decades of declining visitor numbers, something surprising started happening at one of the nation’s most important medical museums. The Mütter Museum’s displays continued to draw heavily from its human remains collections even as similar institutions moved away from such exhibits. From the mid-1980s to 2007, the number of visitors entering the Mütter’s galleries grew from roughly 5,000 visitors per year to more than 60,000. Today, the museum is the most visited small museum in Philadelphia, hosting over 130,000 visitors annually.

When Body Worlds began touring museums in the mid-1990s, it tapped into a curiosity in the U.S. that has probably always existed – a fascination with death and the human body.

Adding a gloss of scientization to the dead

People are very often unsettled by seeing what were once living, breathing, human beings – people with emotions and families – turned into scientific specimens intended for public consumption. Despite whatever discomfort emerges, however, the curious appeal of medicalized body displays at public museums lingers, enough so to make them consistently appealing as fodder for popular exhibitions.

Body Worlds states “health education” is its “primary goal,” elaborating that the bodies in exhibits are posed to suggest that we as humans are “naturally fragile in a mechanized world.”

The exhibits are partially successful in achieving that mission. In tension with the message about human fragility, though, is the desire to preserve the bodies by using technology to prevent their natural decay.

With public schools cutting health programs in classrooms around the United States, it stands to reason people might seek this kind of body knowledge elsewhere. And models are never quite as compelling as actual flesh and bone.

But while charged emotional responses have the potential to heighten curiosity, they can also inhibit learning. Although museum administrators voiced concern that visitors would be horrified by actual human bodies on exhibit, the public has instead proven to have an almost insatiable thirst for seeing the scientized dead.

In the face of this popularity, museums must fully consider the special implications and problems with these exhibitions when choosing to display human bodies.

One basic concern relates to the exact origins of these bodies; criticisms on this point have elicited an official response from von Hagens. There is a major ethical difference between exhibitions of human remains for which permission was granted in advance by the deceased or their descendants, and displays of the bodies of individuals who were offered no choice in the matter.

Spiritually sacred objects and the remains of past people present unique issues that must be dealt with sensitively and on an individual basis. Cultural and historical context is important. Consulting with living descendants is critical.

Exhibitors also need to do more to put these displays into historical context for visitors. Without that context, visitors might mistake artfully posed cadavers for art pieces, which they most assuredly are not.

These are all issues we will likely be grappling with for years to come. If history is any guide, visitors will continue to be drawn to these exhibits as long as the human body remains mysterious and alluring.

The Conversation

Samuel Redman, Assistant Professor of History, University of Massachusetts Amherst

This article was originally published on The Conversation. Read the original article.

How bad is double-dipping?? Paul Dawson explains @US_Conversation

Is double-dipping a food safety problem or just a nasty habit?

Paul Dawson, Clemson University

What do you do when you are left with half a chip in your hand after dipping? Admit it, you’ve wondered whether it’s OK to double dip the chip.

Maybe you’re the sort who dips their chip only once. Maybe you look around the room before loading your half-eaten chip with a bit more dip, hoping that no one will notice.

If you’ve seen that classic episode of Seinfeld, “The Implant,” where George Costanza double-dips a chip at a wake, maybe you’ve wondered if double-dipping is really like “putting your whole mouth right in the dip!”

‘You double-dipped the chip.’

But is it, really? Can the bacteria in your mouth make it onto the chip then into the dip? Is this habit simply bad manners, or are you actively contaminating communal snacks with your particular germs?

This question intrigued our undergraduate research team at Clemson University, so we designed a series of experiments to find out just what happens when you double-dip. Testing to see if there is bacterial transfer seems straightforward, but there are more subtle questions to be answered. How does the acidity of the dip affect bacteria, and do different dips affect the outcome? Members of the no-double-dipping enforcement squad, prepare to have your worst, most repulsive suspicions confirmed.

Start with a cracker

Presumably some of your mouth’s bacteria transfer to a food when you take a bite. But the question of the day is whether that happens, and if so, how much bacteria makes it from mouth to dip. Students started by comparing bitten versus unbitten crackers, measuring how much bacteria could transfer from the cracker to a cup of water.

We found about 1,000 more bacteria per milliliter of water when crackers were bitten before dipping than when unbitten crackers were dipped.

In a second experiment, students tested bitten and unbitten crackers in water solutions with pH levels typical of food dips (pH levels of 4, 5 and 6, which are all toward the more acidic end of the pH scale). They tested for bacteria right after the bitten and unbitten crackers were dipped, then measured the solutions again two hours later. More acidic solutions tended to lower the bacterial numbers over time.

The time had come to turn our attention to real food.

But what about the dip?

We compared three kinds of dip: salsa, chocolate and cheese dips, which happen to differ in pH and thickness (viscosity). Again, we tested bacterial populations in the dips after already-bitten crackers were dipped, and after dipping with unbitten crackers. We also tested the dips two hours after dipping to see how bacterial populations were growing.

We tested All Natural Tostitos Chunky Hot Salsa (pH 4), Genuine Chocolate Flavor Hershey’s Syrup (pH 5.3) and Fritos Mild Cheddar Flavor Cheese Dip (pH 6.0).

So, how dirty is your dip? We found that in the absence of double-dipping, our foods had no detectable bacteria present. Once subjected to double-dipping, the salsa took on about five times more bacteria (1,000 bacteria/ml of dip) from the bitten chip when compared to chocolate and cheese dips (150-200 bacteria/ml of dip). But two hours after double-dipping, the salsa bacterial numbers dropped to about the same levels as the chocolate and cheese.

After two hours, levels of bacteria in the salsa were similar to levels in the cheese and chocolate dips.
Paul Dawson, Author provided

We can explain these phenomena using some basic food science. Chocolate and cheese dips are both pretty thick. Salsa isn’t as thick. The lower viscosity means that more of the dip touching the bitten cracker falls back into the dipping bowl rather than sticking to the cracker. And as it drops back into the communal container, it brings with it bacteria from the mouth of the double-dipper.

Salsa is also more acidic. After two hours, the acidity of the salsa had killed some of the bacteria (most bacteria don’t like acid). So it’s a combination of viscosity and acidity that will determine how much bacteria gets into the dip from double-dipping. As a side note about party hosting: cheese dip will run out faster than salsa since more of the cheese sticks to the cracker or chip on each dip. That could reduce the chances of people double-dipping. And yes, this is something we discovered during the experiment.

Should I freak out about double-dipping?

Double-dipping can transfer bacteria from mouth to dip, but is this something you need to worry about?

Anywhere from hundreds to thousands of different bacterial types and viruses live in the human oral cavity, most of which are harmless. But some aren’t so good. Pneumonic plague, tuberculosis, influenza virus, Legionnaires’ disease and severe acute respiratory syndrome (SARS) are known to spread through saliva, with coughing and sneezing aerosolizing up to 1,000 and 3,600 bacterial cells per minute, respectively. These tiny germ-containing droplets from a cough or a sneeze can settle on surfaces such as desks and doorknobs. Germs can be spread when a person touches a contaminated surface and then touches their eyes, nose or mouth.

That’s why the Centers for Disease Control and Prevention strongly recommends covering the mouth and nose when coughing and sneezing to prevent spreading “serious respiratory illnesses like influenza, respiratory syncytial virus (RSV), whooping cough, and severe acute respiratory syndrome (SARS).” With that in mind, there may be a concern over the spread of oral bacteria from person to person thanks to double-dipping. And a person doesn’t have to be sick to pass on germs.

One of the most infamous examples of spreading disease while being asymptomatic is household cook Mary Mallon (Typhoid Mary), who spread typhoid to numerous families in the New York area in the early 20th century through the food she prepared. It remains unknown whether she was tasting the food as she cooked and, in effect, double-dipping. Typhoid Mary is obviously an extreme example, but your fellow dippers might very well be carrying cold or flu germs and passing them right into the bowl you’re about to dig into.

If you detect double-dippers in the midst of a festive gathering, you might want to steer clear of their favored snack. And if you yourself are sick, do the rest of us a favor and don’t double-dip.

The Conversation

Paul Dawson, Professor of Food Science, Clemson University

This article was originally published on The Conversation. Read the original article.

The moral case on climate change! explained by Lawrence Torcello @US_Conversation #COP21

Making the moral case on climate change ahead of Paris summit

Lawrence Torcello, Rochester Institute of Technology

Much of the general public is well aware of scientists’ recommendations on climate change. In particular, climate scientists and other academics say society needs to keep global warming to no more than two degrees Celsius above preindustrial levels to avoid the most dangerous effects of climate change.

But now more academics are weighing in on climate change: philosophers, ethicists, and social scientists among others.

More than 2,100 academics, and counting, from over 80 nations and a diversity of disciplines have endorsed a moral and political statement addressed to global leaders ahead of December’s UN climate conference in Paris.

A few of the more widely recognizable signatories include philosopher and linguist Noam Chomsky (MIT); cognitive scientist Stephan Lewandowsky (University of Bristol); climate scientist Michael E Mann (PSU); writer and environmentalist Bill McKibben (Middlebury College); historian of science Naomi Oreskes (Harvard); and moral philosopher Peter Singer (Princeton).

As one of the philosophers responsible for this open letter, along with my colleague Keith Horton (University of Wollongong, Australia), I wish to explain why we felt compelled to organize it and why the endorsement of many influential philosophers is important.

In addition to Chomsky and Singer, the list of prominent philosophers who have converged from various philosophical backgrounds and points of disagreement to endorse this letter include many of the most influential figures in contemporary moral and political philosophy.

Thinking about the real world

While it may be popular among certain politicians to malign academics as removed from the “real world,” the fact remains that academics by virtue of training and professional necessity are driven to distinguish valid argument and sound evidence from fallacy.

We are bound to reference current research, and to examine our data before making claims if we hope to be taken seriously by our peers. We have a pedagogical obligation to instill these same practices in our students. We also have a moral obligation to prepare them for responsible citizenship and careers.

Global warming is the most important moral issue of our time, and arguably the greatest existential threat that human beings, as a whole, have faced. So the response to climate change from philosophers should be no surprise.

Those most responsible for climate change are relatively few compared to the vast numbers of people who will be harmfully affected. Indeed, climate change will, in one way or another, impact all life on Earth.

If we fail to decisively address the problem now, warming may escalate in a relatively short time beyond the point at which human beings can reasonably be expected to cope, given the nature of reinforcing feedback effects.

The moral implications are enormous, and this letter represents the closest we have to a consensus statement from the world’s preeminent professional ethicists on some of the moral obligations industrial nations, and their leaders, have to global communities, future generations, and fellow species. The letter begins:

Some issues are of such ethical magnitude that being on the correct side of history becomes a signifier of moral character for generations to come. Global warming is such an issue. Indigenous peoples and the developing world are least responsible for climate change, least able to adapt to it, and most vulnerable to its impacts. As the United Nations Climate Conference in Paris approaches, the leaders of the industrialized world shoulder a grave responsibility for the consequences of our current and past carbon emissions.

Importantly, the letter points out that even if current nonbinding pledges being offered by world leaders ahead of the conference are achieved, we remain on course to reach potentially catastrophic levels of warming by the end of this century. The letter continues:

This is profoundly shocking, given that any sacrifice involved in making those reductions is far overshadowed by the catastrophes we are likely to face if we do not: more extinctions of species and loss of ecosystems; increasing vulnerability to storm surges; more heatwaves; more intense precipitation; more climate related deaths and disease; more climate refugees; slower poverty reduction; less food security; and more conflicts worsened by these factors. Given such high stakes, our leaders ought to be mustering planet-wide mobilization, at all societal levels, to limit global warming to no more than 1.5 degree Celsius.

It is increasingly obvious as we head to Paris that both industrialized and developing nations must make serious efforts to limit their greenhouse gas emissions beyond their current pledges. This is a requirement of physics.

It is unrealistic to expect most developing nations to meaningfully limit greenhouse gas emissions without binding pledges from industrialized nations to do so, as well as significant commitments to provide financial and technological assistance to poorer nations facing developmental challenges. This is a practical necessity and a requirement of ethics.

Ethical thinking

At its most fundamental level, thinking ethically means taking the interests of others seriously enough to recognize when our actions and omissions must be justified to them.

As individuals, our instincts too often drive us toward self-interest. Consequently, acting ethically beyond the circle of our immediate relations – that is, those we perceive most capable of reciprocating both harms and benefits – is difficult.

Still, the history of our species teaches that humanity as a whole benefits most when we are able to put narrow self-interest aside, and make an ethical turn in our thinking and behavior.

Now, as we face climate change, the next great ethical turn in our thinking and behavior can’t come soon enough. We will make progress in addressing climate change when, and if, we begin taking the lessons of morality seriously.

The Conversation

Lawrence Torcello, Assistant Professor of Philosophy, Rochester Institute of Technology

This article was originally published on The Conversation. Read the original article.

Did computers break #science? Ben Marwick thinks so and knows how to fix it! – @US_Conversation

How computers broke science – and what we can do to fix it

Ben Marwick, University of Washington

Reproducibility is one of the cornerstones of science. Made popular by British scientist Robert Boyle in the 1660s, the idea is that a discovery should be reproducible before being accepted as scientific knowledge.

In essence, you should be able to produce the same results I did if you follow the method I describe when announcing my discovery in a scholarly publication. For example, if researchers can reproduce the effectiveness of a new drug at treating a disease, that’s a good sign it could work for all sufferers of the disease. If not, we’re left wondering what accident or mistake produced the original favorable result, and would doubt the drug’s usefulness.

For most of the history of science, researchers have reported their methods in a way that enabled independent reproduction of their results. But, since the introduction of the personal computer – and the point-and-click software programs that have evolved to make it more user-friendly – reproducibility of much research has become questionable, if not impossible. Too much of the research process is now shrouded by the opaque use of computers that many researchers have come to depend on. This makes it almost impossible for an outsider to recreate their results.

Recently, several groups have proposed similar solutions to this problem. Together they would break scientific data out of the black box of unrecorded computer manipulations so independent readers can again critically assess and reproduce results. Researchers, the public, and science itself would benefit.

Computers wrangle the data, but also obscure it

Statistician Victoria Stodden has described the unique place personal computers hold in the history of science. They’re not just an instrument – like a telescope or microscope – that enables new research. The computer is revolutionary in a different way; it’s a tiny factory for producing all kinds of new “scopes” to see new patterns in scientific data.

It’s hard to find a modern researcher who works without a computer, even in fields that aren’t intensely quantitative. Ecologists use computers to simulate the effect of disasters on animal populations. Biologists use computers to search massive amounts of DNA data. Astronomers use computers to control vast arrays of telescopes, and then process the collected data. Oceanographers use computers to combine data from satellites, ships and buoys to predict global climates. Social scientists use computers to discover and predict the effects of policy or to analyze interview transcripts. Computers help researchers in almost every discipline identify what’s interesting within their data.

Computers also tend to be personal instruments. We typically have exclusive use of our own, and the files and folders it contains are generally considered a private space, hidden from public view. Preparing data, analyzing it, visualizing the results – these are tasks done on the computer, in private. Only at the very end of the pipeline comes a publicly visible journal article summarizing all the private tasks.

The problem is that most modern science is so complicated, and most journal articles so brief, it’s impossible for the article to include details of many important methods and decisions made by the researcher as he analyzed his data on his computer. How, then, can another researcher judge the reliability of the results, or reproduce the analysis?

How much transparency do scientists owe?

Stanford statisticians Jonathan Buckheit and David Donoho described this issue as early as 1995, when the personal computer was still a fairly new idea.

An article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures.

They make a radical claim: all those private files on our personal computers, and the private analysis tasks we do as we prepare a publication, should be made public along with the journal article.

This would be a huge change in the way scientists work. We’d need to prepare from the start for everything we do on the computer to eventually be made available for others to see. For many researchers, that’s an overwhelming thought. Victoria Stodden has found the biggest objection to sharing files is the time it takes to prepare them by writing documentation and cleaning them up. The second biggest concern is the risk of not receiving credit for the files if someone else uses them.

A new toolbox to enhance reproducibility

Recently, several different groups of scientists have converged on recommendations for tools and methods to make it easier to keep track of files and analyses done on computers. These groups include biologists, ecologists, nuclear engineers, neuroscientists, economists and political scientists. Manifesto-like papers lay out their recommendations. When researchers from such different fields converge on a common course of action, it’s a sign a major watershed in doing science might be under way.

One major recommendation: minimize and, wherever possible, replace point-and-click procedures during data analysis with scripts that contain instructions for the computer to carry out. This solves the problem posed by ephemeral mouse movements, which leave few traces, are difficult to communicate to other people and are hard to automate. Such point-and-click work is common during data cleaning and organizing tasks using a spreadsheet program like Microsoft Excel. A script, on the other hand, contains unambiguous instructions that can be read by its author far into the future (when the specific details have been forgotten) and by other researchers. Scripts are small files, so they can be included within a journal article. And they can easily be adapted to automate research tasks, saving time and reducing the potential for human error.

We can see examples of this in microbiology, ecology, political science and archaeology. Instead of mousing around menus and buttons, manually editing cells in a spreadsheet and dragging files between several different software programs to obtain results, these researchers wrote scripts. Their scripts automate the movement of files, the cleaning of the data, the statistical analysis, and the creation of graphs, figures and tables. This saves a lot of time when checking the analysis and redoing it to explore different options. And by looking at the code in the script file, which becomes part of the publication, anyone can see the exact steps that produced the published results.
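To give a flavor of what such a script looks like, here is a minimal sketch in Python; the file name, column names and analysis steps are hypothetical rather than drawn from any of the studies just mentioned:

    # analysis.py - a hypothetical, minimal example of a reproducible analysis script.
    # Running `python analysis.py` regenerates the summary table and figure from raw data.
    from pathlib import Path
    import pandas as pd
    import matplotlib.pyplot as plt

    Path("output").mkdir(exist_ok=True)

    # 1. Read the raw data from an open, nonproprietary format (CSV).
    data = pd.read_csv("data/measurements.csv")          # hypothetical input file

    # 2. Clean the data: drop incomplete rows and impossible readings.
    data = data.dropna(subset=["site", "value"])
    data = data[data["value"] >= 0]

    # 3. Statistical summary, saved as a table other researchers can inspect.
    summary = data.groupby("site")["value"].agg(["mean", "std", "count"])
    summary.to_csv("output/summary_by_site.csv")

    # 4. A figure, re-created identically every time the script is run.
    summary["mean"].plot(kind="bar", yerr=summary["std"], title="Mean value by site")
    plt.tight_layout()
    plt.savefig("output/mean_by_site.png")

Anyone with the raw CSV file can rerun the entire pipeline with a single command and see exactly how each table and figure was produced.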

Other recommendations include the use of common, nonproprietary file formats for storing files (such as CSV, or comma-separated values, for tables of data) and simple rubrics for systematically organizing files into folders to make it easy for others to understand how the information is structured. They recommend free software that is available for all computer systems (e.g. Windows, Mac and Linux) for analyzing and visualizing data (such as R and Python). For collaboration, they recommend a free program called Git, which helps track changes when many people are editing the same document.

Currently, these are the tools and methods of the avant-garde, and many midcareer and senior researchers have only a vague awareness of them. But many undergraduates are learning them now. Many graduate students, seeing personal advantages to getting organized, using open formats, free software and streamlined collaboration, are seeking out training and tools from volunteer organizations such as Software Carpentry, Data Carpentry and rOpenSci to fill the gaps in their formal training. My university recently created an eScience Institute, where we help researchers adopt these recommendations. Our institute is part of a bigger movement that includes similar institutes at Berkeley and New York University.

As students learning these skills graduate and progress into positions of influence, we’ll see these standards become the new normal in science. Scholarly journals will require code and data files to accompany publications. Funding agencies will require they be placed in publicly accessible online repositories.

Example of a script used to analyze data.
Author provided

Open formats and free software are a win/win

This change in the way researchers use computers will be beneficial for public engagement with science. As researchers become more comfortable sharing more of their files and methods, members of the public will have much better access to scientific research. For example, a high school teacher will be able to show students raw data from a recently published discovery and walk the students through the main parts of the analysis, because all of these files will be available with the journal article.

Similarly, as researchers increasingly use free software, members of the public will be able to use the same software to remix and extend results published in journal articles. Currently many researchers use expensive commercial software programs, the cost of which makes them inaccessible to people outside of universities or large corporations.

Of course, the personal computer is not the sole cause of problems with reproducibility in science. Poor experimental design, inappropriate statistical methods, a highly competitive research environment and the high value placed on novelty and publication in high-profile journals are all to blame.

What’s unique about the role of the computer is that we have a solution to the problem. We have clear recommendations for mature tools and well-tested methods borrowed from computer science research to improve the reproducibility of research done by any kind of scientist on a computer. With a small investment of time to learn these tools, we can help restore this cornerstone of science.

The Conversation

Ben Marwick, Associate Professor of Archaeology, University of Washington

This article was originally published on The Conversation. Read the original article.