Today’s Astronomy Picture of the Day is more than a picture… it’s actually a video!!! “How different does the universe look on small, medium, and large scales?”
Powers of Ten takes us on an adventure in magnitudes. Starting at a picnic by the lakeside in Chicago, this famous film transports us to the outer edges of the universe. Every ten seconds we view the starting point from ten times farther out until our own galaxy is visible only as a speck of light among many others. Returning to Earth with breathtaking speed, we move inward, into the hand of the sleeping picnicker, with ten times more magnification every ten seconds. Our journey ends inside a proton of a carbon atom within a DNA molecule in a white blood cell. POWERS OF TEN © 1977 EAMES OFFICE LLC (Available at http://www.eamesoffice.com)
Today I posted that the European Space Agency’s landing of Philae on Comet 67P made science history. But I was wrong. The Philae Lander and Rosetta Spacecraft Mission has made history for HUMANKIND!!! The Philae lander is a huge step forward for space technology and science! It is also just plain exciting!
ESA (@esa) November 12, 2014
One of the coolest parts about the ESA Rosetta Mission is that the team of scientists and engineers in charge of the Comet Landing included WOMEN! Compare this to the team of NASA scientists and engineers that sent astronauts to the moon (JoAnn Hardin Morgan was the single woman engineer at NASA during Apollo 11). However, the Rosetta Mission is not the first time women have contributed to amazing things in space. Check out Beverly Wettenstein’s long list of incredible contributions women have made in space!
The ESA Rosetta Mission included at least four women who are listed as team members, but I would guess there are many more who contributed but are not listed!
It takes hundreds of people — machinists, engineers, scientists, and many others — to get a spacecraft from the planning stages to its destination in outer space. The people in this gallery represent just a few of the folks who make space exploration ideas a reality.
Let’s celebrate Claudia Alexander (U.S. Rosetta Project Scientist), Margaret Frerking (Co-I with MIRO instrument), Lori Feaga (ALICE Co-I with University of Maryland), Marilia Samara (SwRI, IES instrument), and the many other women who contributed to the Rosetta Mission. CauseScience applauds all of these women for their amazing success today, and over the last decade of the mission. These women are the best at what they do, and break down barriers for girls and women in Science, Technology, Engineering, and Math!! CONGRATS!!
Max Mutchler (@maxmutchler) November 12, 2014
ConneXions (@ConneXionsGCI) November 12, 2014
Please inform CauseScience if you know other women that were part of ESA’s Rosetta Mission so we can add their names!
Add Professor Monica Grady to the list of Rosetta women!!
You may have read about shirtgate, and how Rosetta Project Scientist Matt Taylor has been ridiculed on Twitter for his sexist and embarrassing choice of clothing. While it is certainly important to draw attention to his harmful behavior, celebrating the amazing women that contributed to the history of HUMANKIND is much more important!!
Andy Borowitz once again has CauseScience laughing hysterically with a new Borowitz Report about a ‘study’ showing that fear-bola is most common among people who didn’t pay attention in science and math class. The post is a terrific commentary on the ignorance of many Americans about the actual risk of being infected with ebola, which is extremely low.
According to the study, those whose minds were elsewhere while being taught certain concepts, like what a virus is and numbers, are at a significantly greater risk of being afraid of catching Ebola than people who were paying even scant attention.
For example, when a participant of the study was told that he had a one-in-thirteen-million chance of contracting the virus, his response was, “Whoa. Thirteen million is a really big number. That is totally scary.”
Clearing up confusion between correlation and causation
UNDERSTANDING RESEARCH: What do we actually mean by research and how does it help inform our understanding of things? Today we look at the dangers of making a link between unrelated results.
Here’s an historical tidbit you may not be aware of. Between the years 1860 and 1940, as the number of Methodist ministers living in New England increased, so too did the amount of Cuban rum imported into Boston – and they both increased in an extremely similar way. Thus, Methodist ministers must have bought up lots of rum in that time period!
Actually no, that’s a silly conclusion to draw. What’s really going on is that both quantities – Methodist ministers and Cuban rum – were driven upwards by other factors, such as population growth.
In reaching that incorrect conclusion, we’ve made the far-too-common mistake of confusing correlation with causation.
What’s the difference?
Two quantities are said to be correlated if both increase and decrease together (“positively correlated”), or if one increases when the other decreases and vice-versa (“negatively correlated”).
Correlation is readily detected through statistical measures such as Pearson’s correlation coefficient, which indicates how tightly locked together the two quantities are, ranging from -1 (perfectly negatively correlated) through 0 (not correlated at all) up to 1 (perfectly positively correlated).
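As a concrete sketch, Pearson’s coefficient can be computed in a few lines; the helper name `pearson` is ours for illustration (libraries such as NumPy and SciPy provide this built in):

```python
import statistics

def pearson(xs, ys):
    """Pearson's correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(pearson([1, 2, 3, 4], [10, 20, 30, 40]))  # close to 1.0 (perfectly positive)
print(pearson([1, 2, 3, 4], [40, 30, 20, 10]))  # close to -1.0 (perfectly negative)
```

A coefficient near either extreme only tells us the two series move in lockstep; it says nothing about why.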
But just because two quantities are correlated does not necessarily mean that one is directly causing the other to change. Correlation does not imply causation, just like cloudy weather does not imply rainfall, even though the reverse is true.
If two quantities are correlated then there might well be a genuine cause-and-effect relationship (such as rainfall levels and umbrella sales), but maybe other variables are driving both (such as pirate numbers and global warming), or perhaps it’s just coincidence (such as US cheese consumption and strangulations-by-bedsheet).
Even where causation is present, we must be careful not to mix up the cause with the effect, or else we might conclude, for example, that an increased use of heaters causes colder weather.
In order to establish cause-and-effect, we need to go beyond the statistics and look for separate evidence (of a scientific or historical nature) and logical reasoning. Correlation may prompt us to go looking for such evidence in the first place, but it is by no means a proof in its own right.
Although the above examples were obviously silly, correlation is very often mistaken for causation in ways that are not immediately obvious in the real world. When reading and interpreting statistics, one must take great care to understand exactly what the data and its statistics are implying – and more importantly, what they are not implying.
One recent example of the need for caution in interpreting data is the excitement earlier this year surrounding the apparent groundbreaking detection of gravitational waves – an announcement that appears to have been made prematurely, before all the variables that were affecting the data were accounted for.
Unfortunately, analysing statistics, probabilities and risks is not a skill set wired into our human intuition, and so it is all too easy to be led astray. Entire books have been written on the subtle ways in which statistics can be misinterpreted (or used to mislead). To help keep your guard up, here are some common slippery statistical problems that you should be aware of:
1) The Healthy Worker Effect, where sometimes two groups cannot be directly compared on a level playing field.
Consider a hypothetical study comparing the health of a group of office-workers with the health of a group of astronauts. If the study shows no significant difference between the two – no correlation between healthiness and working environment – are we to conclude that living and working in space carries no long-term health risks for astronauts?
No! The groups are not on the same footing: the astronaut corps screens applicants to find healthy candidates, who then maintain a comprehensive fitness regime in order to proactively combat the effects of living in “microgravity”.
We would therefore expect them to be significantly healthier than office workers, on average, and should rightly be concerned if they were not.
2) Categorisation and the Stage Migration Effect – shuffling people between groups can have dramatic effects on statistical outcomes.
This is also known as the Will Rogers effect, after the US comedian who reportedly quipped:
When the Okies left Oklahoma and moved to California, they raised the average intelligence level in both states.
To illustrate, imagine dividing a large group of friends into a “short” group and a “tall” group (perhaps in order to arrange them for a photo). Having done so, it’s surprisingly easy to raise the average height of both groups at once.
Simply ask the shortest person in the “tall” group to switch over to the “short” group. The “tall” group loses its shortest member, thus bumping up its average height – but the “short” group gains its tallest member yet, and thus also gains in average height.
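The height example can be checked with a few lines of arithmetic (heights invented for illustration):

```python
def mean(xs):
    return sum(xs) / len(xs)

# Invented heights (cm) for the two photo groups.
short = [150, 155, 160]
tall = [165, 180, 185, 190]

print(mean(short), mean(tall))  # 155.0 180.0

# Move the shortest "tall" member across to the "short" group.
moved = min(tall)
tall.remove(moved)
short.append(moved)

print(mean(short), mean(tall))  # 157.5 185.0 -- both averages went up
```

No one grew an inch, yet both group averages rose: a reminder that a change in group statistics need not reflect any change in the individuals.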
This has major implications in medical studies, where patients are often sorted into “healthy” or “unhealthy” groups in the course of testing a new treatment. If diagnostic methods improve, some very-slightly-unhealthy patients may be recategorised – leading to the health outcomes of both groups improving, regardless of how effective (or not) the treatment is.
3) Data mining – when an abundance of data is present, bits and pieces can be cherry-picked to support any desired conclusion.
This is bad statistical practice, but if done deliberately can be hard to spot without knowledge of the original, complete data set.
Consider the above graph showing two interpretations of global warming data, for instance. Or fluoride – in small amounts it is one of the most effective preventative medicines in history, but the positive effect disappears entirely if one only ever considers toxic quantities of fluoride.
For similar reasons, it is important that the procedures for a given statistical experiment are fixed in place before the experiment begins and then remain unchanged until the experiment ends.
4) Clustering – which is to be expected even in completely random data.
Consider a medical study examining how a particular disease, such as cancer or multiple sclerosis, is geographically distributed. If the disease strikes at random (and the environment has no effect) we would expect to see numerous clusters of patients as a matter of course. If patients were spread out perfectly evenly, the distribution would be most un-random indeed!
So the presence of a single cluster, or a number of small clusters of cases, is entirely normal. Sophisticated statistical methods are needed to determine just how much clustering is required to deduce that something in that area might be causing the illness.
Unfortunately, any cluster at all – even a non-significant one – makes for an easy (and at first glance, compelling) news headline.
Statistical analysis, like any other powerful tool, must be used very carefully – and in particular, one must always be careful when drawing conclusions based on the fact that two quantities are correlated.
Instead, we must always insist on separate evidence to argue for cause-and-effect – and that evidence will not come in the form of a single statistical number.
The bad news is that our evolution equipped us to live in small, stable, hunter-gatherer societies. We are Pleistocene people, but our languaged brains have created massive, multicultural, technologically sophisticated and rapidly changing societies for us to live in.
In consequence, we must constantly resist the temptation to see meaning in chance and to confuse correlation and causation.
This article is part of a series on Understanding Research.
Why research beats anecdote in our search for knowledge
The authors do not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article. They also have no relevant affiliations.
Vox.com reports that a full moon won’t fall on a Friday the 13th again until AUGUST 13, 2049.
Specifically, the full moon occurs at precisely 12:11 am EDT on Thursday night — i.e., early Friday morning. That means the 13th will coincide with the full moon for residents of the Eastern time zone (as well as South America, Europe, Africa, and Asia), but not the Central, Mountain, or Pacific time zones.