Unusually loose skin helps hagfish survive shark attacks

NEW ORLEANS, La. – Skin that mostly hangs loose around hagfishes proves handy for living through a shark attack or wriggling through a crevice.

The skin on hagfishes’ long, sausage-style bodies is attached in a line down the center of their backs and in flexible connections where glands release slime, explained Douglas Fudge of Chapman University in Orange, Calif. This floating skin easily slip-slides in various directions. A shark tooth can puncture the skin but not stab into the muscle below. And a shark attack is just one of the crises in which loose skin can help, Fudge reported January 5 at the annual meeting of the Society for Integrative and Comparative Biology.
Hagfishes can fend off an attacking shark by quick-releasing a cloud of slime. Yet video of such events shows that a shark can land a bite before getting slimed. To figure out how hagfishes might survive such wounds, Fudge and colleagues used an indoor guillotine to drop a large mako shark tooth into hagfish carcasses. With the skin in its naturally loose state, the tooth readily punched through skin but slipped away from stabbing into the body of either the Atlantic (Myxine glutinosa) or Pacific (Eptatretus stoutii) hagfish species.
But when the researchers glued the skin firmly to the hagfish muscle so the skin couldn’t slip, the tooth typically plunged into inner tissue. For comparison, the researchers tested lampreys, which are similarly tube-shaped but with skin well-fastened to their innards. When the guillotine dropped on them, the tooth often stabbed directly into flesh.
The finding makes sense to Theodore Uyeno of Valdosta State University in Georgia, whose laboratory experiments suggest how loose skin might minimize damage from shark bites. He and colleagues have tested how hard it is to puncture swatches of skin from both the Atlantic and Pacific species. As is true for many other materials, punching through a swatch of hagfish skin held taut didn’t take as long as punching through skin patches allowed to go slack, he said in a January 5 presentation at the meeting. Even a slight delay when a sharp point bears down on baggy skin might allow the hagfish to start dodging and sliming.

But Michelle Graham, who studies locomotion in flying snakes at Virginia Tech, wondered if puncture wounds would be a drawback to such a defense. A hagfish that avoids a deep stab could still lose blood from the skin puncture. That’s true, said Fudge, but the loss doesn’t seem to be great. Hagfish have unusually low blood pressure, and video of real attacks doesn’t show great gushes.

Hagfish blood also plays a part in another benefit of loose skin — an unusual ability to wiggle through cracks, Fudge reported in a second talk at the meeting. One of his students built an adjustable crevice and found that both Atlantic and Pacific hagfishes can contort themselves through slits only half as wide as their original body diameter. Videos show skin bulging out to the rear as the strong pinch of the opening forces blood backward.

The cavity just under a hagfish’s skin can hold roughly a third of its blood. Forcing that reservoir backward can help shrink the body diameter. Fortunately the inner body tapers at the end, Fudge said. So as blood builds up, “they don’t explode.”

Monsoon deluges turned ancient Sahara green

Thousands of years ago, it didn’t just rain on the Sahara Desert. It poured.

Grasslands, trees, lakes and rivers once covered North Africa’s now arid, unforgiving landscape. From about 11,000 to 5,000 years ago, much higher rainfall rates than previously estimated created that “Green Sahara,” say geologist Jessica Tierney of the University of Arizona in Tucson and her colleagues. Extensive ground cover, combined with reductions of airborne dust, intensified water evaporation into the atmosphere, leading to monsoonlike conditions, the scientists report January 18 in Science Advances.
Tierney’s team reconstructed western Saharan rainfall patterns over the last 25,000 years. Estimates relied on measurements of isotopic forms of carbon and hydrogen in leaf wax recovered from ocean sediment cores collected off the Sahara’s west coast. Concentrations of these substances reflected ancient rainfall rates.

Rainfall ranged from 250 to 1,670 millimeters annually during Green Sahara times, the researchers say. Previous estimates — based on studies of ancient pollen that did not account for dust declines — reached no higher than about 900 millimeters. Saharan rainfall rates currently range from 35 to 100 millimeters annually.
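Those figures put the change in scale: even the driest Green Sahara estimate, 250 millimeters, is more than double the wettest parts of today’s Sahara, and the high end, 1,670 millimeters, is nearly 17 times the modern maximum of 100 millimeters.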

Leaf-wax evidence indicates that the Green Sahara dried out from about 8,000 until at least 7,000 years ago before rebounding. That’s consistent with other ancient climate simulations and with excavations suggesting that humans temporarily left the area around 8,000 years ago. Hunter-gatherers departed for friendlier locales, leaving cattle herders to spread across North Africa once the Green Sahara returned (SN Online: 6/20/12), the investigators propose.

Snooze patterns vary across cultures, opening eyes to evolution of sleep

Hunter-gatherers and farming villagers who live in worlds without lightbulbs or thermostats sleep slightly less at night than smartphone-toting city slickers, researchers say.

“Contrary to conventional wisdom, people in societies without electricity do not sleep more than those in industrial societies like ours,” says UCLA psychiatrist and sleep researcher Jerome Siegel, who was not involved in the new research.

Different patterns of slumber and wakefulness in each of these groups highlight the flexibility of human sleep — and also point to potential health dangers in how members of Western societies sleep, conclude evolutionary biologist David Samson of Duke University and colleagues. Compared with other primates, humans evolved a tendency to sleep more deeply over shorter time periods, providing more time for learning new skills and knowledge as cultures expanded, the researchers propose. Humans also evolved an ability to revise sleep schedules based on daily work schedules and environmental factors such as temperature.
Samson’s team describes sleep patterns in 33 East African Hadza hunter-gatherers over a total of 393 days in a paper published online January 7 in the American Journal of Physical Anthropology. The team’s separate report on slumber among 21 rural farmers in Madagascar over 292 days will appear later this year in the American Journal of Human Biology.

Sleep patterns in these groups were tracked with wrist devices that measure a person’s activity levels. Both Hadza and Malagasy volunteers slept an average of about 6.5 hours nightly, less than the roughly seven-hour average for most U.S. adults. Foragers and villagers, who slept in areas with various family and group members, awoke more frequently during the night than has been reported among Westerners. Scalp electrodes worn at night by nine villagers during nine nights revealed biological signs of relatively light sleep compared with Westerners, including shorter periods of slow-wave and rapid eye movement sleep.
But Hadza and Malagasy individuals often supplemented nighttime sleep with one or two daytime naps. Shut-eye breaks averaged 47.5 minutes for the Hadza and about 55 minutes for villagers. Critically, Samson says, foragers and villagers displayed more consistent daily cycles of sleep and wakefulness than are characteristic of Westerners. Hadza adults tended to hit the sack — or, more commonly, the mat — shortly after midnight and nap in the early afternoon. Malagasy villagers napped once or twice during the day’s hottest hours, usually starting around noon, and retired in the early evening. At night, they slept in two phases, awakening for around an hour shortly after midnight. Historical accounts describe a similar sleep pattern among Western Europeans between 500 and 200 years ago — two sleep segments, divided by a period of activity or reflection (SN: 9/25/99, p. 205).
Nighttime sleep in both populations became deeper and less fragmented as tropical humidity dipped.

Researchers also noted that hunter-gatherers and villagers got plenty of direct sunlight, unlike many Westerners. Several studies have linked inconsistent sleep-wake cycles and lack of sun exposure to health problems, including inflammation and heart problems, Samson says. “People in modern societies can take lessons from this research by attempting to get lots of light exposure during the day while reducing blue-wave light exposure after dark and dropping inside temperatures by a few degrees at night.” Smartphones and other digital devices emit blue-wave light, which can suppress melatonin production and delay sleep.

Effects of wayward sleep patterns or too little sleep on health vary across cultures and regions, says biomedical anthropologist Kristen Knutson of Northwestern University Feinberg School of Medicine in Chicago. For instance, sleeping less than six hours per night may increase appetite, as some studies suggest, but a sleep-deprived office worker surrounded by fast-food joints is more likely to become obese than a physically active hunter-gatherer faced with a limited food supply.

Samson’s research aligns with previous work by Knutson finding that rural Haitians living without electricity sleep an average of about seven hours nightly. In addition, Siegel’s team recently reported that nightly sleep averages 5.7 to 7.1 hours in three hunter-gatherer societies, including the Hadza (SN: 11/14/15, p. 10).

‘Cannibalism’ chronicles grisly science of eating your own

Until recently, researchers thought cannibalism took place only among a few species in the animal kingdom and only under extraordinary circumstances. But as zoologist Bill Schutt chronicles in Cannibalism, plenty of creatures inhabit their own version of a dog-eat-dog world.

Over the last few decades, scientists have observed cannibalism — defined by Schutt as eating all or part of another individual of the same species — among all major groups of vertebrates. The practice seems to be even more prevalent, and less discriminating, among invertebrates such as mollusks, insects and spiders, whose eggs, larvae and young are often produced in profusion and are therefore readily available, not to mention nutritious.
Cannibalism, Schutt contends, makes perfect evolutionary sense, and not merely as a feeding strategy. When food supplies are low or living conditions are crowded, some mammals and birds may eat some or all of their offspring to terminate an expenditure of effort with poor chances of paying off. For birds, eating a dead or dying hatchling also is a way to get rid of a carcass that could spread infection or whose scent could attract flies or predators to the nest.

Switching to a historical and cultural perspective, Schutt tackles the various forms of human cannibalism, where, he admits, “the ick factor is high.” That includes medicinal cannibalism, from 17th and 18th century Europeans’ consumption of powdered mummies to modern moms’ ingestion of their newborns’ placentas to purportedly restore nutrients lost during childbirth. The author also explores survival cannibalism (think famine victims, people under siege, plane-crash survivors and the ill-fated Donner Party) and briefly addresses our natural shock and seemingly unnatural fascination with criminal cannibalism (à la Jeffrey Dahmer).

As Schutt explains, ritual cannibalism — the consumption of a foe or loved one to acquire the decedent’s strength, courage or wisdom — is a practice that has apparently taken place in different cultures throughout history. In an interesting aside, Schutt ponders whether people who consume wafers and wine during Communion, especially those who firmly believe these items are literally converted into the body and blood of Christ, are engaging in a form of ritual cannibalism.

Cannibalism is a wide-ranging, engaging and thoroughly fun read. The author’s numerous field trips and lab visits with scientists who study the phenomenon heartily enrich this captivating book.

Human gene editing therapies are OK in certain cases, panel advises

Human gene editing to prevent genetic diseases from being passed to future generations may be permissible under certain conditions, a panel of experts says.

Altering DNA in germline cells — embryos, eggs, and sperm, or cells that give rise to them — may be used to cure genetic diseases for future generations, provided it is done only to correct disease or disability, not to enhance people’s health or abilities, a report issued February 14 by the National Academies of Sciences and Medicine recommends. The decision contradicts earlier recommendations by organizers of a global summit on human gene editing, who concluded that gene editing with molecular scissors such as CRISPR/Cas9 should not be used to produce babies (SN: 12/26/15, p. 12).
Heritable gene editing is not yet ready to be done in people, says Alta Charo, a bioethicist at the University of Wisconsin‒Madison Law School who cochaired the panel. “We are not trying to greenlight heritable germline editing. We’re trying to find that limited set of circumstances where its use is justified by a compelling need and its application is limited to that compelling need,” says Charo. “We’re giving it a yellow light.”

National Academies reports carry no legislative weight, but do often influence policy decisions in the United States and abroad. It will be up to Congress, regulatory agencies such as the U.S. Food and Drug Administration, and state and local governments to implement the recommendations.

Supporters of new genetic engineering technologies hailed the decision.

“It looks like the possibility of eliminating some genetic diseases is now more than a theoretical option,” says Sean Tipton, a spokesman for the American Society for Reproductive Medicine in Washington, D.C. “That’s what this sets up.” Diseases such as cystic fibrosis and Huntington’s, which are caused by mutations in single genes, could someday be corrected by gene editing. More complex diseases or disorders caused by changes in multiple genes, such as autism or schizophrenia, probably would not be the focus of genome editing.

Others worry that allowing any tinkering with the germline will inevitably lead to “designer babies” and other social ills. It raises fears of stigmatization of people with disabilities, exacerbation of inequalities between people who can afford such therapies and those who can’t, and even a new kind of eugenics, critics say.
“Once you approve any form of human germline modification you really open the door to all forms,” says Marcy Darnovsky, executive director of the Center for Genetics and Society in Berkeley, Calif.

Panelist Jeffrey Kahn, a bioethicist at Johns Hopkins University, says the door to heritable gene therapy remains closed until stringent requirements can be met. “It’s frankly more of a knock on the door,” he said at the public presentation of the report.

The report also changes the debate from whether to allow germline editing to instead focus on the line between therapy and enhancement, Darnovsky says. “I’m feeling very unsettled and disappointed by what they are recommending.”

Several clinical trials in the United States, China and other countries are already under way to do gene editing in people who have cancer or other diseases. But those therapies do not involve altering germline cells; instead they fix defects or make alterations to DNA in other body, or “somatic,” cells. The panel recommended that such somatic cell therapies should also be restricted to treating diseases, not allowing enhancements.

Researchers in the United Kingdom, Sweden and China have already done gene editing on early human embryos in the lab. Recent clinical trials in Mexico and Ukraine to produce “three-parent babies” are also seen as altering the germline because such children carry a small amount of DNA from an egg donor (SN Online: 10/18/16). But those children don’t have modifications of their nuclear DNA, where the genetic instructions that determine traits are stored.

Currently, researchers in the United States are effectively banned from conducting clinical trials that would produce heritable changes in the human genome, either by gene editing or making three-parent babies. The new recommendations could pave the way to allow such experiments.

But the panel lays out a number of hurdles that must be cleared before germline editing could move forward, ones that may be impossible to overcome, says Nita Farahany, a bioethicist at Duke Law School in Durham, N.C. “Some people could read into the stringency of the requirements to think that the benefits could never outweigh the risks,” she says.

One hurdle is a requirement to follow multiple generations of children who have gotten gene editing to determine whether the therapy has consequences for future generations. Researchers would never be able to guarantee that they could conduct such long-term studies, Farahany says. “You can’t bind your children and grandchildren to agree to be tracked by such studies.”

Distinctions between therapies and enhancements are also vague. Researchers may not be able to convincingly draw lines between them, says George Church, a Harvard University geneticist who has developed CRISPR/Cas9 for a variety of purposes. Virtually everything medicine has accomplished could be considered an enhancement of human life, he says. “Vaccines are advancements over our ancestors. If you could tell our ancestors they could walk into a smallpox ward and not even worry about it, that would be a superpower.”

But the new technology may be a harder route to human enhancement than drugs, says Charo. Gene-editing technologies are so precise and specific that someone who does not carry a disease-causing mutation would probably not benefit from the technology, she says.

Anesthesia for youngsters is a tricky calculation

If your young child is facing ear tubes, an MRI or even extensive dental work, you’ve probably got a lot of concerns. One of them may be about whether the drugs used to render your child briefly unconscious can permanently harm their brain. Here’s the frustrating answer: No one knows.

“It’s a tough conundrum for parents of kids who need procedures,” says Mary Ellen McCann, a pediatric anesthesiologist at Boston Children’s Hospital. “Everything has risks and benefits,” but in this case, the decision to go ahead with surgery is made more difficult by an incomplete understanding of anesthesia’s risks for babies and young children. Some studies suggest that single, short exposures to anesthesia aren’t dangerous. Still, scientists and doctors say that we desperately need more data before we really understand what anesthesia does to developing brains.

It helps to know this nonanswer comes with a lot of baggage, a sign that a lot of very smart and committed people are trying to answer the question. In December, the FDA issued a drug safety communication about anesthetics that sounded alarming, beginning with a warning that “repeated or lengthy use of general anesthetic and sedation drugs during surgeries or procedures in children younger than 3 years or in pregnant women during their third trimester may affect the development of children’s brains.” The FDA recommended more conversations between parents and doctors, in the hopes of delaying surgeries that can safely wait and reducing the amount of anesthesia exposure in this potentially vulnerable population.

The trouble with that statement, though, is that it raises concerns without answering them, says pediatric anesthesiologist Dean Andropoulos of Texas Children’s Hospital in Houston. And that concern might lead to worse outcomes for the youngest patients. “Until reassuring new information from well-designed clinical trials is available, we are concerned that the FDA warning will cause delays for necessary surgical and diagnostic procedures that require anesthesia, resulting in adverse outcomes for patients,” Andropoulos and a colleague wrote February 8 in a New England Journal of Medicine perspective article.

By and large, the surgeries done in young children are done for good reasons. Surgery for serious heart disease and other life-threatening conditions can’t wait. Ear tubes need to be put in so that a child can hear and get the auditory input that’s required early in life for normal language skills. Likewise, certain kinds of eye surgery and cleft palate repairs all lead to better developmental outcomes if done early.

That doesn’t leave many surgeries that can be put off. “The things that can be delayed are few and far between,” Andropoulos says. That’s why the FDA’s recent drug safety communication might cause extra parental worry about surgeries that ought to be done.

Scientists have lots of data showing that anesthetic drugs can cause long-lasting damage in a variety of species, from roundworms to rats to nonhuman primates. Anesthetics are “like any toxin,” says Andrew Davidson, an anesthesiologist at the Murdoch Childrens Research Institute in Melbourne, Australia. “The more you have, the worse it is.”
Yet Davidson and others have uncovered some reassuring news for parents. Quick, single exposures to anesthesia, about an hour or less, don’t seem dangerous.

Davidson, McCann and colleagues recently compared children who, as babies, had undergone hernia repair surgery. Of these babies, 359 had brief general anesthesia and 363 instead received local anesthesia. At age 2, the children showed no differences in mental abilities, the researchers reported last year in The Lancet. That trial, called the GAS study, was particularly well-done because unlike many other studies of this question, babies were randomly assigned to receive either general or local anesthesia. And the experiment isn’t over yet. Scientists will test the children again at age 5, when it will be easier to test more complex forms of thinking.

More encouraging news came from the PANDA study, which tracked over 100 children who had received a short dose of anesthesia (the median was 80 minutes) when they were younger than 3. When those same kids were 8 to 15 years old, their IQs and most other thinking skills were similar to those of their healthy siblings, who had not received anesthesia when they were young.

Along with the GAS results, the PANDA study, published June 7 in the Journal of the American Medical Association, offers some reassurance to parents whose child might need surgery. “If it’s a short procedure, you don’t have to worry about it,” Davidson says.

For now, doctors are making good efforts to talk through these complex questions with parents as they make medical decisions. “We face this issue essentially every day,” Andropoulos says, and at his institute, the FDA guidelines prompted even more conversations. Parents are largely appreciative of having these talks, he says. And hopefully scientists will soon have something more to tell parents about what Andropoulos calls “the most important problem we face in pediatric anesthesia.”

See how bacterial blood infections in young kids plummeted after vaccines

To celebrate birthdays, my 2- and 4-year-old party animals got vaccinated. Measles, mumps, rubella, chicken pox, diphtheria, tetanus and whooping cough for the older one (thankfully combined into just two shots), and hepatitis A for the younger.

Funnily enough, there were no tears. Just before the shots, we were talking about the tiny bits of harmless germs that would now be inside their bodies, teaching their immune systems how to fight off the harmful germs and keep their bodies healthy. I suspect my girls got caught up in the excitement and forgot to be scared.

As I watched the vaccine needles go in, I was grateful for these medical marvels that clearly save lives. Yet the topic has become fraught for worried parents who want to keep their kids healthy. Celebrities, politicians and even some pediatricians argue that children today get too many vaccines too quickly, with potentially dangerous additives. Those fears have led to reductions in the number of kids who are vaccinated, and along with it, a resurgence of measles and other diseases that were previously kept in check.

Doctors and scientists try to reduce those fears with good, hard data that show vaccines are absolutely some of the safest and most important tools we have to keep children healthy. (Here’s a handy list of papers if you’d like to dig deeper.) A study published online March 10 in Pediatrics offers a particularly compelling piece of data on the impact of vaccines.

In 2000, doctors began using a vaccine called PCV7, which protected children against seven kinds of Streptococcus pneumoniae bacteria. PCV13 came along in 2010, adding six more types of bacteria to the protective roster. These bacteria can cause many different illnesses such as ear infections, meningitis and blood infections called bacteremia. In young children, these infections can sometimes be quite dangerous (and hard to diagnose).
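(The numbers in the vaccines’ names count the bacterial types covered: the 7 in PCV7 plus 6 more gives the 13 in PCV13.)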
Medical records that span these pre- and post-vaccine time periods, kept by Kaiser Permanente Northern California, offered a chance to see these pneumococcal vaccinations in action. Before the vaccine existed, 74.5 per 100,000 kids ages 3 months to 36 months got pneumococcal bacteremia. After PCV13, that number had plummeted to 3.5 per 100,000. That’s a 95.3 percent reduction.
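The arithmetic is easy to check: (74.5 − 3.5) / 74.5 ≈ 0.953, or a drop of just over 95 percent.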

This plunge is striking, says study coauthor Tara Greenhow, a pediatric infectious disease specialist at Kaiser Permanente Northern California in San Francisco. Along with earlier results, the new study shows that pneumococcal vaccines are highly effective, she says.

As you check out the graph, pay attention to the data points you don’t see. Those are the babies and toddlers who didn’t end up sick, thanks to a vaccine.

How Pluto’s haze could explain its red spots

Pluto may get its smattering of red spots from the fallout of its hazy blue skies, researchers say.

Haze particles from the dwarf planet’s atmosphere settle onto every part of Pluto’s surface. But some regions may become redder and darker than others because parts of the atmosphere collapse, exposing those spots to more surface-darkening radiation from space, researchers report March 22 at the Lunar and Planetary Science Conference in The Woodlands, Texas.

“The atmospheric haze on Pluto was a spectacular surprise,” says NASA New Horizons mission scientist Andrew Cheng, a physicist at Johns Hopkins University. When the New Horizons spacecraft flew past Pluto in 2015, scientists weren’t expecting to see haze reaching at least 200 kilometers above the dwarf planet’s surface; nor were they expecting to see the haze divided into about 20 delicate and distinct layers (SN Online: 10/15/15).
These discoveries led researchers to suspect that the layers formed as a result of weak winds blowing across Pluto’s surface and over its mountains. Cheng and colleagues describe how the winds would shape the haze layers in a paper accepted in Icarus and posted online February 24 at arXiv.org. The team also explains how the atmosphere may affect the color of the dwarf planet’s surface features.
“Haze particles continually fall out onto the surface and rapidly build up,” Cheng says. This process should effectively “paint” the entire surface a uniform color — but Pluto isn’t a single color. It has strikingly bright and dark terrains, with some of the highest contrast found in the solar system. These dark and light regions form because portions of Pluto’s atmosphere periodically collapse, with air freezing and falling onto the dwarf planet’s surface, he and colleagues suggest.
When a section of the atmosphere collapses, parts of the surface are exposed directly to radiation from space, which would darken the surface particles there, Cheng explains. The richness of the reds, the team says, cannot be explained without some kind of collapse of the atmosphere, which does eventually redevelop.

Observations from NASA’s Kepler spacecraft also support the idea that Pluto’s atmosphere collapses. In fact, as Pluto moves away from the sun, most, if not all, of its atmosphere may collapse onto the dwarf planet’s surface, reported Carey Lisse, also of Johns Hopkins University, at the conference.
Exactly how much of Pluto’s atmosphere freezes out during its year, which lasts for 248 Earth years, isn’t clear. But that is currently being monitored, says Timothy Dowling, an atmospheric scientist at the University of Louisville in Kentucky, who was not involved in the new work. Pluto, he notes, won’t complete the first lap that humans have watched it make around the sun until 2178.
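The 2178 date is straightforward arithmetic: Pluto was discovered in 1930, and 1930 + 248 = 2178.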

Spray-on mosquito repellents are more effective than bracelets, candles and other devices

Mosquitoes are more than an itchy nuisance. They can carry serious diseases, including Zika, West Nile, yellow fever and chikungunya. Now after testing 11 types of mosquito repellents, researchers say they’ve identified the products most effective at warding off the bloodsuckers.

Spray-on repellents with DEET or a refined tree extract called oil of lemon eucalyptus are most likely to keep you bite-free, the scientists report online February 16 in the Journal of Insect Science. The OFF! Clip-On repellent, which puffs out a vapor of the chemical metofluthrin, killed every mosquito in the cage. But study coauthor Immo Hansen, an insect physiologist at New Mexico State University in Las Cruces, says the mosquitoes couldn’t escape, so they probably got a higher dose than they would in a natural setting.
Other tested repellents, such as a citronella candle, simply don’t work, Hansen says.

“There are a whole lot of different products out on the market that are sold as mosquito repellents, and most of them haven’t ever been tested in a scientific setting,” Hansen says.

To evaluate the repellents, the researchers used a person, safely protected from bites, as “bait.” The volunteer sat in a wind tunnel as her alluring scent — and repelling chemicals — were pulled toward a cage of Aedes aegypti mosquitoes.
The three-compartment cage allowed the mosquitoes to move toward or away from the volunteer. After 15 minutes, the researchers determined the portion of mosquitoes that had moved into the compartment closest to the volunteer.
Three deterrents did little to dissuade the insects: bracelets with geraniol oil, a sound machine that buzzes like a dragonfly and a citronella candle (which appeared to slightly attract the mosquitoes). Burning a candle releases carbon dioxide, which might have drawn the insects; mosquitoes home in on a human meal by sensing exhaled CO2 (SN: 3/18/17, p. 10).

Repellents face-off
Researchers measured attraction rates of A. aegypti mosquitoes to a person one meter or three meters away who was wearing or seated next to the repellent. Attraction rates are the percentage of total mosquitoes, averaged over four tests, that flew toward the person.
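As a hypothetical example of how the metric works: if 10 of 40 caged mosquitoes flew toward the person in each of four tests, the attraction rate would be 25 percent.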
These repellents were not significantly different from the no-repellent control: bracelets (Mosquito-NO!, Invisaband, Mosquitavert), Cutter Citro Guard candle and Personal Sonic Mosquito Repeller.

Competing ideas abound for how Earth got its moon

The moon’s origin story does not add up. Most scientists think that the moon formed in the earliest days of the solar system, around 4.5 billion years ago, when a Mars-sized protoplanet called Theia whacked into the young Earth. The collision sent debris from both worlds hurtling into orbit, where the rubble eventually mingled and combined to form the moon.

If that happened, scientists expect that Theia’s contribution would give the moon a different composition from Earth’s. Yet studies of lunar rocks show that Earth and its moon are compositionally identical. That fact throws a wrench into the planet-on-planet impact narrative.
Researchers have been exploring other scenarios. Maybe the Theia impact never happened (there’s no direct evidence that the budding planet ever existed). Instead of a single colossal collision, scientists have proposed that a string of impacts created miniature moons largely from terrestrial material. Those mini moons merged over time to form one big moon.

“Multiple impacts just make more sense,” says planetary scientist Raluca Rufu of the Weizmann Institute of Science in Rehovot, Israel. “You don’t need this one special impactor to form the moon.”

But Theia shouldn’t be left on the cutting room floor just yet. Earth and Theia were built largely from the same kind of material, new research suggests, and so had similar compositions. There is no sign of “other” material on the moon, this perspective holds, because nothing about Theia was different.

“I’m absolutely on the fence between these two opposing ideas,” says UCLA cosmochemist Edward Young. Determining which story is correct is going to take more research. But the answer will offer profound insights into the evolution of the early solar system, Young says.
The moon is an oddball. Most of the solar system’s moons are way out among the gas giant planets. The only other terrestrial planet with orbiting satellites is Mars. Its moons, Phobos and Deimos, are small, and the prevailing explanation says they were probably asteroids captured by the Red Planet’s gravity. Earth’s moon is too big for that scenario. If the moon had come in from elsewhere, asteroid-like, it would probably have crashed into Earth or careened off into space. An alternate explanation dating from the 1800s suggested that moon-forming material flew off of a fast-spinning young Earth like children tossed from an out-of-control merry-go-round. That idea fell out of favor, though, when scientists calculated that the spin speeds required were impossibly fast.
In the mid-1970s, planetary scientists proposed the giant-impact hypothesis and the mysterious planet-sized impactor (named Theia in 2000 for the Greek deity who was mother of the moon goddess Selene). The notion made sense given that the early solar system was like a game of cosmic billiards, with giant space rocks frequently colliding.

A 2001 study of lunar rocks collected during the Apollo missions cast doubt on the giant-impact hypothesis. The research showed that the Earth and moon had surprising similarities. To determine a rock’s origin, scientists measure the relative abundance of oxygen isotopes, which act something like fingerprints at a crime scene. Rocks from Earth and its moon, the scientists found, had seemingly identical mixes of oxygen isotopes. That didn’t make sense if much of the moon’s material came from Theia, not Earth. Using impact simulations, Rufu and colleagues recently estimated that the chance of a Theia collision yielding an Earthlike lunar composition is very slim.

Studies of other elements in Apollo rocks, such as titanium and zirconium, also suggest that the Earth and moon originated from the same material. Young and colleagues recently repeated the oxygen isotope measurements with the latest techniques, hunting for even the slightest difference between Earth and the moon. In January 2016, the team published the results in Science. “We measured the oxygen to the highest precision available,” Young says, “and, gosh, the Earth and moon still look identical.”
Some scientists have built simulations of a giant Theia impact that fashion a moon made mostly from terrestrial material. But the scenarios struggle to match the modern positions and movements of the Earth-moon system.

It’s time to think outside the giant-impact box, some scientists argue. Not one but many impacts contributed to the moon’s formation, Rufu and colleagues proposed January 9 in Nature Geoscience. The moon, they say, has an Earthlike composition because most of the material flung into orbit from these impacts came from Earth.

Mini-moon merger
The multi-impact hypothesis was first put forward in 1989, though scientists at the time didn’t have the computer power to run the simulations that could support it. Rufu and colleagues recently revisited the proposal with computer simulations of multiple impactors, each about a hundredth to a tenth of Earth’s mass, smacking into the early Earth.

Any impactors that were direct hits would have transferred lots of energy into the Earth, excavating terrestrial material into space. Debris from each impact combined over centuries to form a small moon, the simulations show. As more impacts rocked Earth over tens of millions of years, more moons formed. Gravity pulled the moons together, combining them. Over roughly 100 million years, according to this scenario, around 20 mini moons ultimately merged to form one mighty moon (SN Online: 1/9/17).
The multimoon explanation yields the right lunar mix in simulations roughly 20 percent of the time, better than the 1 to 2 percent for the giant-impact hypothesis, the researchers note. “The biggest takeaway is that you cannot explain everything with one shot,” Rufu says.
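Put another way, the multiple-impact scenario improves those odds roughly 10- to 20-fold (20 percent versus 1 to 2 percent).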

Planetary scientist Robin Canup finds the scenario convincing. “To me, this appears to be a real contender alongside the one big impactor hypothesis,” says Canup, of the Southwest Research Institute in Boulder, Colo.

Don’t discount Theia
But the Theia hypothesis has recently found fresh support. The odds of Theia resembling Earth’s composition enough to yield an Earthlike moon may be a lot higher than originally thought, new chemical analyses suggest. Most of the material that makes up Earth came from the same source as a type of meteorite called enstatite chondrites, planetary scientist Nicolas Dauphas of the University of Chicago reported January 26 in Nature.

Just as with oxygen, the isotopic mix of various other elements in Earth’s rocks serves as a fingerprint of the rocks’ origins. Some of these elements are iron-lovers, such as ruthenium, which quickly sink toward Earth’s iron-rich core (SN: 8/6/16, p. 22). Any ruthenium found close to Earth’s surface, in the mantle, probably arrived late in Earth’s development. Iron-indifferent elements like calcium and titanium don’t sink to the core; they stay in the mantle. Their isotopes record what went into Earth’s assembly over a much longer period of time. By looking at the iron-lovers and iron-indifferent elements together, Dauphas created a timeline of what types of space rocks added to Earth’s mass and when.
A mix of different rocks, including some resembling enstatite chondrite meteorites, supplied the first 60 percent of Earth’s mass, Dauphas says. The remaining 40 percent came almost exclusively from the meteorites’ precursors. In total, around three-quarters of Earth’s mass came from the same material as enstatite chondrites, Dauphas estimates. If Theia formed at around the same distance from the sun as Earth, then it primarily formed from the same material, and consequently had a similar isotopic composition. So if the moon formed largely from Theia, it makes sense that lunar rocks would have a similar composition to Earth’s, too.
“Most of the problem is solved, in my opinion, if you admit that the great impactor’s material was no different than that of the [early] Earth,” says cosmochemist Marc Javoy at the Institute of Earth Physics of Paris. “It’s the simplest hypothesis” and would mean that the material gobbled up by budding planets in the inner solar system was fairly uniform in composition, offering insight into the arrangement of material that built the solar system.

The notion that Earth is made from the same material as enstatite chondrites “doesn’t make many people happy,” says geochemist Richard Carlson of the Carnegie Institution for Science in Washington, D.C. The isotopes in Earth’s mantle and the meteorites may match, but the relative abundances of the elements themselves do not, Carlson wrote in a commentary in the Jan. 26 Nature. An additional step in the process is needed to explain this compositional mismatch, he says, such as some of the element silicon getting stashed away in Earth’s core.

“What we have now are a lot of new ideas, and now we need to test them,” says Sarah Stewart, a planetary scientist at the University of California, Davis.

One recently proposed test for the moon’s formation is based on temperature, though it seems to be consistent with both origin stories. A new study comparing the moon’s chemistry with glass forged by a nuclear blast suggests that temperatures during or just after the moon’s inception reached a sizzling 1400° Celsius. That means any plausible moon-forming scenario must involve such high temperatures, researchers reported February 8 in Science Advances.
High heat causes rocks to leach light isotopes of zinc. The green-tinged glass forged in the heat of the 1945 Trinity nuclear test in New Mexico lacks light isotopes of zinc, says study coauthor and geologist James Day of the Scripps Institution of Oceanography in La Jolla, Calif. The same goes for lunar rocks. Such high temperatures during or just after the moon’s formation fit with the giant-impact hypothesis, he says. But Rufu calculates that her multi-impact hypothesis also yields high enough temperatures.
So maybe temperature can’t resolve the debate, but probing the composition of Earth and the moon’s deep interiors could prove the mini-moon explanation right, says Rufu. Without a single giant collision, the interiors of the two worlds may not have been well mixed, she predicts. Dauphas says that measuring the compositions of other planets could lend credence to his Earthlike Theia proposal. Mercury and Venus would also have formed largely from the same kind of material as Earth and therefore also have Earthlike compositions, he says. Future studies of the solar system’s inhabitants could confirm or rule out these predictions, but that will require a new chapter of exploration.