Painkillers in the opium family may actually make pain last longer. Morphine treatment after a nerve injury doubled the duration of pain in rats, scientists report the week of May 30 in the Proceedings of the National Academy of Sciences.
The results raise the troubling prospect that in addition to having unpleasant side effects and addictive potential, opioids such as OxyContin and Vicodin could actually extend some types of pain. If a similar effect is found in people, “it suggests that the treatment is actually contributing to the problem,” says study coauthor Peter Grace, a neuroscientist at the University of Colorado Boulder.

Scientists have known that opioid-based drugs can cause heightened sensitivity to pain in some people, a condition called opioid-induced hyperalgesia. The new study shows that the effects can linger for weeks after use of the drugs is stopped.

Male rats underwent surgery in which their sciatic nerves, which run down the hind legs, were squeezed with a stitch — a constriction that causes pain afterward. Ten days after surgery, rats received a five-day course of either morphine or saline.
Rats that didn’t receive morphine took about four weeks to start recovering, showing less sensitivity to a poke. Rats that got morphine took about eight weeks to show improvements — double the time. “That’s far bigger than we had anticipated,” Grace says. “We were definitely surprised by that.”
These experiments were done with male rats, but unpublished data indicate that morphine extends pain even longer in female rats, Grace says, results that fit with what’s known about differences in how males and females experience pain.
Longer-lasting pain in the rats came courtesy of an inflammatory response in the spinal cord. The immune system sees morphine as a threat, the researchers suspect, and responds by revving up inflammation through specialized cells called microglia. Experiments that shut down this process in microglia shortened the duration of the pain.
Many questions remain. Scientists don’t yet know if a similar immune reaction happens in people. Nor is it known whether all opioid-based painkillers would behave like morphine. Understanding the details of how the process works has important implications for doctors, many of whom may be unaware of opioids’ complex relationship with pain, says internal medicine physician Jonathan Chen of Stanford University School of Medicine. Clarity on how opioids influence pain could change doctors’ prescribing habits and encourage the search for better pain treatments, he says.
Grace points out that the experiments were done in genetically similar rats, and that people may have more varied responses to opioids. That variability might mean that not everyone would be at risk for such long-lasting pain, he says. “But clearly these data suggest that there may be a subset of people who might be in trouble.”
SAN DIEGO — The long-standing mystery of the Milky Way’s missing satellite galaxies has a credible culprit, new research suggests. Supernovas, the vigorous explosions of massive stars, might have shoved much of the matter surrounding our galaxy deep into space, preventing a horde of tiny companion galaxies from forming in the first place.
Millions of teeny galaxies should be buzzing around the Milky Way, according to theories about how galaxies evolve, but observations have turned up only a few dozen (SN: 9/19/15, p. 6). And the brightest of those that have been found are lightweights compared with what theorists expect to find. But new computer simulations designed to track the growth of galaxies down to the level of individual stars reveal the critical role that supernovas might play in resolving these conundrums. Philip Hopkins, an astrophysicist at Caltech, presented the results June 13 during a news briefing at a meeting of the American Astronomical Society.
“Galaxies don’t just form stars and sit there,” Hopkins said. “If you [add] up all the energy that supernovae emitted during a galaxy’s lifetime, it’s greater than the gravitational energy holding the galaxy together. You cannot ignore it.”
Simulations are typically limited by computing power, so efforts to simulate galaxy evolution have to brush over some details. For instance, rather than capturing everything that’s going on in a galaxy, simulations often slap on the cumulative effects of supernovas in an ad hoc fashion. As a result, such simulations don’t fully capture all the physics of stellar winds and supernova shocks that ripple through a galaxy.
Hopkins’ simulations grow a galaxy organically within a computer, tracing the evolution of a system such as the Milky Way over 13 billion years. Within a massive virtual blob of dark matter — the elusive substance thought to bind galaxies together — gas collects and fragments into stellar nurseries. Stars are born and die in this digital universe. A volley of life-ending explosions from the most massive of these stars leads to a turbulent galactic history, Hopkins finds.
“As these stars form rapidly in the early universe, they also live briefly and explode and die violently, ejecting material far from the galaxy,” he said. “They’re not just getting rid of gas.” They’re stirring up the dark matter as well, preventing a multitude of satellite galaxies from forming, and whittling away at those few that survive. “It’s not until quite late times … that [the galaxy] settles down and forms what we would call a recognizable galaxy today,” Hopkins said.

The idea that stellar tantrums could chip away at the gas and dark matter around a galaxy is not new, says Janice Lee, an astronomer at the Space Telescope Science Institute in Baltimore. But Hopkins’ simulations bring a lot more detail to that story and show that it’s a plausible reason for our galaxy’s satellite shortfall.
Before declaring that the mystery of the missing satellite galaxies is solved, however, astronomers need to run a few more checks against reality, says Lee. There are still assumptions in the calculations about how energy from dying stars interacts with interstellar gas, for example. The precise details of that interaction can affect how many stellar runts versus behemoths form in star clusters.
NASA’s James Webb Space Telescope, scheduled to launch in 2018, could probe star clusters in several relatively nearby galaxies, she says. Those observations could be compared with the virtual clusters that appear in the simulations to see how closely they match the real universe.
Even Amelia Earhart couldn’t compete with the great frigate bird. She flew nonstop across the United States for 19 hours in 1932; the frigate bird can stay aloft up to two months without landing, a new study finds. The seabird saves energy on transoceanic treks by capitalizing on the large-scale movement patterns of the atmosphere, researchers report in the July 1 Science. By hitching a ride on favorable winds, the bird can spend more time soaring and less time flapping its wings.
“Frigate birds are really an anomaly,” says Scott Shaffer, an ecologist at San Jose State University in California who wasn’t involved in the study. The large seabird spends much of its life over the open ocean. Both juvenile and adult birds undertake nonstop flights lasting weeks or months, the scientists found. Frigate birds can’t land in the water to catch a meal or take a break because their feathers aren’t waterproof, so scientists weren’t sure how the birds made such extreme journeys.
Researchers attached tiny accelerometers, GPS trackers and heart rate monitors to great frigate birds flying from a tiny island near Madagascar. By pooling data collected over several years, the team re-created what the birds were doing minute by minute over long flights — everything from how often the birds flapped their wings to when they dived for food. The birds fly more than 400 kilometers, roughly the distance from Boston to Philadelphia, every day. They don’t even stop to refuel, instead scooping up fish while still in flight.
And when frigate birds do take a break, it’s a quick stopover.
“When they land on a small island, you’d expect they’d stay there for several days. But in fact, they just stay there for a couple hours,” says Henri Weimerskirch, a biologist at the French National Center for Scientific Research in Villiers-en-Bois who led the study. “Even the young birds stay in flight almost continually for more than a year.”
Frigate birds need to be energy Scrooges to fly that far. To minimize wing-flapping time, they seek out routes with upward-moving air currents that help them glide and soar over the water. For instance, the birds skirt the edge of the doldrums, a windless region near the equator. On either side of the region, consistent winds make for favorable flying conditions. Frigate birds ride a thermal roller coaster underneath the bank of fluffy cumulus clouds frequently found there, soaring up to altitudes of 600 meters.
Airplanes tend to avoid flying through cumulus clouds because they cause turbulence. So the researchers were surprised to find that frigate birds sometimes use the rising air inside the clouds to get an extra elevation boost — up to nearly 4,000 meters. The extra height means the birds have more time to gradually glide downward before finding a new updraft. That’s an advantage if the clouds (and the helpful air movement patterns they create) are scarce.
It’s not yet clear how frigate birds manage to sleep while on the wing. Weimerskirch suggests they might nap in several-minute bursts while ascending on thermals.
“To me, the most fascinating thing was how incredibly far these frigate birds go in a single flight, and how closely tied those flight patterns are to the long-term average atmospheric condition,” says Curtis Deutsch, an oceanographer at the University of Washington in Seattle. As these atmospheric patterns shift with climate change, frigate birds might change their path, too.
Aging happens to each of us, everywhere, all the time. It is so ever-present and slow that we tend to take little notice of it. Until we do. Those small losses in function and health eventually accumulate into life-changers.
Despite its constancy in our lives, aging remains mysterious on a fundamental level. Scientists still struggle to fully explain its root causes and its myriad effects. Even as discoveries pile up (SN: 12/26/15, p. 20), a clear picture has yet to emerge. Some researchers argue that individual life spans and the problems associated with aging are programmed into our bodies, like ticking time bombs we carry from birth. Others see the process as a buildup of tiny failures, a chaotic and runaway deterioration that steals vim and vigor, if not health and life itself. There is no unified theory of aging, which means there is no one way to stop it. As longtime aging researcher Caleb Finch put it in an interview with Science News: Aging is still a black box.

The issue is an urgent one. The globe’s population has never been older. According to the U.S. Census Bureau’s 2015 An Aging World report, by 2020 the number of people 65 and older worldwide will outnumber children 5 and under for the first time in history. Seniors will make up 22.1 percent of the U.S. population in 2050, and nearly 17 percent globally (a whopping 1.6 billion people), the demographers predict. Worldwide, the 80-and-above crowd will grow from 126 million to 447 million. It’s a population sea change that will have ripple effects on culture, economics, medicine and society.
Scientists working at the frontiers of the field do agree that there are probably many ways to slow aging, Tina Hesman Saey reports in this special issue. Saey sums up current thinking on the actors of aging, as well as a number of intriguing approaches that might well tame aging’s effects. The goal, most agree, is not to find a fountain of youth but the keys to prolonging health.
It turns out that healthy aging in people does occur naturally. It is, however, in the words of Ali Torkamani, “an extremely rare phenotype.” Torkamani leads a genetic study of people 80 and older who are living free of chronic disease, described by Saey in her story. He and his team failed to find a single set of genes that protect these “wellderly.” Instead, the people studied carry a plethora of different genetic variants. They do share a lower risk of heart disease and Alzheimer’s. And, he says, the data hint that gene variants linked to key cognitive areas may be at play, leading him to ask: “Is cognitive health just one of the components of healthy aging? Or is there something about having a healthy brain that protects against other signs of aging?”
Exactly what happens in the brain as we age is a question Laura Sanders takes up in “The mature mind.” An intriguing idea is that the brain begins to lose the specialization that makes it so efficient in its prime, she reports. Further afield, Susan Milius considers a hydra and a weed, examining what these outliers of aging can tell us about how aging evolved and how flexible it truly is. Her answer: Very. The sheer diversity in life cycles and declines gives credence to arguments that while death may come for all of us, a robust old age could well be in the cards for more of us.
Those little piles of dirt that ant colonies leave on the ground are an indication that ants are busy underground. And they’re moving more soil and sediment than you might think. A new study finds that, over a hectare, colonies of Trachymyrmex septentrionalis fungus-gardening ants in Florida can move some 800 kilograms aboveground and another 200 kilograms below in a year.
The question of how much soil and sand ants can move originated not with entomologists but with geologists and archaeologists. These scientists use a technique called optically stimulated luminescence, or OSL, to date layers of sediment. While buried, minerals such as quartz soak up and store energy from natural background radiation; exposure to sunlight releases that stored energy, resetting the clock. Scientists can use the amount of energy trapped in buried minerals to determine when the grains last sat on the surface, taking in the sun.
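In outline, the arithmetic behind an OSL date is a single division: the radiation dose a grain has absorbed since burial, inferred from its luminescence, divided by the rate at which the surrounding sediment delivers that dose. A minimal sketch in Python, using invented numbers rather than any real measurement:

```python
# OSL dating in outline. Both values below are illustrative; in practice
# the equivalent dose comes from lab luminescence measurements and the
# dose rate from radioactivity measured at the burial site.

equivalent_dose_gy = 2.4     # grays of radiation absorbed since last sunlight
dose_rate_gy_per_kyr = 1.2   # grays per thousand years at this depth

burial_age_kyr = equivalent_dose_gy / dose_rate_gy_per_kyr
print(f"Grains last saw sunlight ~{burial_age_kyr:.1f} thousand years ago")
# → Grains last saw sunlight ~2.0 thousand years ago
```

The division also shows why ants matter: if burrowing mixes grains between layers, a sample’s dose becomes a blend of different burial histories, skewing the computed age.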
But ants might muck this up. To find out, a group of geologists and archaeologists reached out to Walter Tschinkel, an entomologist at Florida State University. Figuring out how much sand and soil ants dig up and deposit on the surface — called biomantling — is relatively easy, especially if the color of the soil they’re digging up is different from that found on the ground. But tracking movement underground, or bioturbation, is a bit more complicated.

Tschinkel and his former student Jon Seal, now an ecologist at the University of Texas at Tyler, turned to an area of the Apalachicola National Forest in Florida dubbed “Ant Heaven” for its abundant and diverse collection of ants. Tschinkel has worked there since the 1970s, and for the last six years, he has been monitoring some 450 colonies of harvester ants, which bring up plenty of sandy soil from underground. But he was also curious about the fungus-gardening ants.
Tschinkel and Seal had already shown that the fungus-gardening ant “is extremely abundant, that it moves a very large amount of soil, and that as the summer warms up, it digs a deeper chamber and deposits that soil in higher chambers without exposing it to light,” Tschinkel says. “In other words, it appeared to do a very large amount of soil mixing of the type [that had been] described in harvester ants.”
No one had ever quantified an ant colony’s subterranean digging before. Tschinkel and Seal started by digging 10 holes, each a meter deep, and filling them with layers of native sand mixed with various colors of art sand — pink, blue, purple or yellow, green and orange — with plain forest sand at the top. Each hole was then topped with a cage, and an ant colony was moved in, along with the fungus the ants cultivate like a crop. Throughout the experiment, the researchers collected sand that the ants deposited on the surface and provided the colonies with food for their fungus, including leaves, small flowers and oatmeal. Seven months later, Tschinkel and Seal carefully excavated the nine surviving ant colonies and quantified the grains of sand moved from one sand layer to another. The team reports its findings July 8 in PLOS ONE.
By the end of the study, each ant colony had deposited an average of 758 grams of sand on the surface and moved another 153 grams between one colored layer and another underground, mostly upward. The ants dug chambers to farm their fungus, and they sometimes filled them up with sand from deeper layers as they dug new chambers in areas with temperature and humidity best suited for cultivation. With more than a thousand nests per hectare, the ants may be moving about a metric ton of sand each year, covering the surface with 6 centimeters of soil over the course of a millennium, the researchers calculated.
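Those per-hectare totals follow from simple scaling of the per-colony measurements. A quick back-of-envelope check in Python; the sand density used for the depth estimate is an assumption (loose dry sand is very roughly 1,300 kilograms per cubic meter), not a figure from the study:

```python
# Scale one colony's seven-month haul up to a hectare's worth of nests.
surface_g = 758           # grams deposited on the surface per colony
subsurface_g = 153        # grams moved between underground layers per colony
nests_per_hectare = 1000  # "more than a thousand nests per hectare"

total_kg = (surface_g + subsurface_g) * nests_per_hectare / 1000
print(f"~{total_kg:.0f} kg moved per hectare")  # close to the reported metric ton

# Depth of soil the surface deposits alone would build up in 1,000 years,
# assuming a loose-sand density of about 1,300 kg per cubic meter (assumed).
density_kg_m3 = 1300
hectare_m2 = 10_000
surface_kg = surface_g * nests_per_hectare / 1000
depth_cm_per_millennium = surface_kg / density_kg_m3 / hectare_m2 * 100 * 1000
print(f"~{depth_cm_per_millennium:.0f} cm of soil per millennium")
```

Both results land near the paper’s reported figures of about a metric ton per hectare per year and 6 centimeters of soil per millennium, so the numbers are consistent with straight multiplication.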
All of this mixing and moving could prove a challenge for geologists and archaeologists relying on OSL. “When ants deposit sand from deeper levels at higher levels (or the reverse), they are mixing sand with different light-emitting capacity, and therefore with different measured ages,” Tschinkel notes. “People who use OSL need to know how much such mixing occurs, and then devise ways of dealing with it.” Now that scientists know that ants could be a problem, they should be able to develop ways to work around the little insects.
Snail control beats drugs when it comes to curbing snail fever, researchers report July 21 in PLOS Neglected Tropical Diseases. Snail fever, or schistosomiasis, is a tropical disease that affects more than 250 million people worldwide. It’s caused by a waterborne parasite that reproduces inside some snails. Parasite larvae burrow through people’s skin and can cause infertility, cognitive problems and even cancer. Today, most countries manage the disease with a drug that kills the parasite in human hosts. Some nations also control snail populations to hamstring the parasite’s life cycle, but that’s a less popular approach.

The scientists compared a range of disease management strategies used in 83 countries over the last century, including killing snails, administering drugs and changing infrastructure (such as sanitation services). Projects that used snail control cut disease by over 90 percent; those without it, by less than 40 percent.
The researchers suggest a blend of drug therapy and snail management to eradicate disease in the future.
Pulling consecutive all-nighters makes some brain areas groggier than others. Regions involved with problem solving and concentration become especially sluggish when sleep-deprived, a new study using brain scans reveals. Other areas keep ticking along, appearing to be less affected by a mounting sleep debt.
The results might lead to a better understanding of the rhythmic nature of symptoms in certain psychiatric or neurodegenerative disorders, says study coauthor Derk-Jan Dijk. People with dementia, for instance, can be afflicted with “sundowning,” a worsening of symptoms at the end of the day. More broadly, the findings, published August 12 in Science, document the brain’s response to too little shut-eye.

“We’ve shown what shift workers already know,” says Dijk, of the University of Surrey in England. “Being awake at 6 a.m. after a night of no sleep, it isn’t easy. But what wasn’t known was the remarkably different response of these brain areas.”
The research reveals the differing effects of the two major factors that influence when you conk out: the body’s roughly 24-hour circadian clock, which helps keep you awake in the daytime and put you to sleep when it’s dark, and the body’s drive to sleep, which steadily increases the longer you’re awake.
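These two factors are the basis of the classic “two-process” picture of sleep regulation. A toy sketch of how they might combine — every parameter here is made up for illustration, not taken from the study:

```python
import math

def circadian_alertness(hours_awake, wake_time=7.0, peak_hour=15.0):
    """Sinusoidal stand-in for the circadian clock's push toward wakefulness,
    assumed here to peak mid-afternoon and bottom out before dawn."""
    clock_time = (wake_time + hours_awake) % 24
    return math.cos(2 * math.pi * (clock_time - peak_hour) / 24)

def sleep_pressure(hours_awake, tau=18.0):
    """Homeostatic sleep drive: grows the longer you stay awake, saturating."""
    return 1 - math.exp(-hours_awake / tau)

def net_alertness(hours_awake):
    return circadian_alertness(hours_awake) - sleep_pressure(hours_awake)

# A long vigil starting at 7 a.m.: alertness craters around 6 a.m. the next
# morning (23 hours awake), when high sleep pressure coincides with the
# circadian low, then partially rebounds the next day even with no sleep.
for h in (0, 12, 23, 35):
    print(f"{h:2d} h awake: net alertness {net_alertness(h):+.2f}")
```

The qualitative shape matches the study’s behavioral finding: performance suffered overnight but improved somewhat during the second sleepless day, as the circadian signal swung back toward wakefulness.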
Dijk and collaborators at the University of Liege in Belgium assessed the cognitive function of 33 young adults who went without sleep for 42 hours. Over the course of this sleepless period, the participants performed some simple tasks testing reaction time and memory. The sleepy subjects also underwent 12 brain scans during their ordeal and another scan after 12 hours of recovery sleep. Throughout the study, the researchers also measured participants’ levels of the sleep hormone melatonin, which served as a way to track the hands on their master circadian clocks.
Activity in some brain areas, such as the thalamus, a central hub that connects many other structures, waxed and waned in sync with the circadian clock. But in other areas, especially those in the brain’s outer layer, the effects of this master clock were overridden by the body’s drive to sleep. Brain activity diminished in these regions as sleep debt mounted, the scans showed.
Sleep deprivation also meddled with the participants’ performance on simple tasks, with effects influenced both by the mounting sleep debt and the cycles of the master clock. Performance suffered in the night, but improved somewhat during the second day, even after no sleep.

While the brain’s circadian clock signal is known to originate in a cluster of nerve cells known as the suprachiasmatic nucleus, it isn’t clear where the drive to sleep comes from, says Charles Czeisler, a sleep expert at Harvard Medical School. The need to sleep might grow as toxic metabolites build up after a day’s worth of brain activity, or be triggered when certain regions run out of fuel.
Sleep drive’s origin is just one of many questions raised by the research, which “opens up a new era in our understanding of sleep-wake neurobiology,” Czeisler says. The approach of tracking activity with brain scans and melatonin measurements might reveal, for example, how a lack of sleep during the teenage years influences brain development.
Such an approach also might lead to the development of a test that reflects the strength of the body’s sleep drive, Czeisler says. That measurement might help clinicians spot chronic sleep deprivation, a health threat that can masquerade as attention-deficit/hyperactivity disorder in children.
Blue whirl \ BLOO werl \ n. A swirling flame that appears in fuel floating on the surface of water and glows blue.
An unfortunate mix of electricity and bourbon has led to a new discovery. After lightning hit a Jim Beam warehouse in 2003, a nearby lake was set ablaze when the distilled spirit spilled into the water and ignited. Spiraling tornadoes of fire leapt from the surface.

In a laboratory experiment inspired by the conflagration, a team of researchers produced a new, efficiently burning fire tornado, which they named a blue whirl. To re-create the bourbon-fire conditions, the researchers, led by Elaine Oran of the University of Maryland in College Park, ignited liquid fuel floating on a bath of water. They surrounded the blaze with a cylindrical structure that funneled air into the flame, creating a vortex about 60 centimeters tall. Eventually, the chaotic fire whirl calmed into a blue, cone-shaped flame just a few centimeters tall, the scientists report online August 4 in Proceedings of the National Academy of Sciences.
“Firenadoes” are known to appear in wildfires, when swirling winds and flames combine to form a hellacious, rotating inferno. They burn more efficiently than typical fires, as the whipping winds mix in extra oxygen, which feeds the fire. But the blue whirl is even more efficient; its azure glow indicates complete combustion, which releases little soot, or uncombusted carbon, into the air.
The soot-free blue whirls could be a way of burning off oil spills on water without adding much pollution to the air, the researchers say, if they can find a way to control them in the wild.
Editor’s note: When reporting results from the functional MRI scans of dogs’ brains, left and right were accidentally reversed in all images, the researchers report in a correction posted April 7 in Science. While dogs and most humans use different hemispheres of the brain to process meaning and intonation — instead of the same hemispheres, as was suggested — lead author Attila Andics says the more important finding still stands: Dogs’ brains process different aspects of human speech in different hemispheres.

Dogs process speech much like people do, a new study finds. Meaningful words like “good boy” activate the left side of a dog’s brain regardless of tone of voice, while a region on the right side of the brain responds to intonation, scientists report in the Sept. 2 Science.
Similarly, humans process the meanings of words in the left hemisphere of the brain, and interpret intonation in the right hemisphere. That lets people sort out words that convey meaning from random sounds that don’t. But it has been unclear whether language abilities were a prerequisite for that division of brain labor, says neuroscientist Attila Andics of Eötvös Loránd University in Budapest.
Dogs make ideal test subjects for understanding speech processing because of their close connection to humans. “Humans use words towards dogs in their everyday, normal communication, and dogs pay attention to this speech in a way that cats and hamsters don’t,” says Andics. “When we want to understand how an animal processes speech, it’s important that speech be relevant.”

Andics and his colleagues trained dogs to lie still for functional MRI scans, which reveal when and where the brain is responding to certain cues. Then the scientists played the dogs recordings of a trainer saying either meaningful praise words like “good boy,” or neutral words like “however,” either in an enthusiastic tone of voice or a neutral one. The dogs showed increased activity in the left sides of their brains in response to the meaningful words, but not the neutral ones. An area on the right side of the brain reacted to the intonation of those words, separating out enthusiasm from indifference.
When the dogs heard praising words in an enthusiastic tone of voice, neural circuits associated with reward became more active. The dogs had the same neurological response to an excited “Good dog!” as they might to being petted or receiving a tasty treat. Praise words or enthusiastic intonation alone didn’t have the same effect.
Humans stand out from other animals in their ability to use language — that is, to manipulate sequences of sounds to convey different meanings. But the new findings suggest that the ability to hear these arbitrary sequences of sound and link them to meaning isn’t a uniquely human ability.
“I love these results, as they point to how well domestication has shaped dogs to use and track the very same cues that we use to make sense of what other people are saying,” says Laurie Santos, a cognitive psychologist at Yale University.
While domestication made dogs more attentive to human speech, humans have been close companions with dogs for only about 30,000 years. That’s too short a time for a trait like lateralized speech processing to evolve from scratch, Andics thinks. He suspects that some older underlying neural mechanism for processing meaningful sounds is present in other animals, too.
It’s just hard to test in other species, he says — in part because cats don’t take as kindly to being put inside MRI scanners and asked to hold still.
A beautiful but unproved theory of particle physics is withering in the harsh light of data.
For decades, many particle physicists have devoted themselves to the beloved theory, known as supersymmetry. But it’s beginning to seem that the zoo of new particles that the theory predicts — the heavier cousins of known particles — may live only in physicists’ imaginations. Or if such particles, known as superpartners, do exist, they’re not what physicists expected.
New data from the world’s most powerful particle accelerator — the Large Hadron Collider, now operating at higher energies than ever before — show no traces of superpartners. And so the theory’s most fervent supporters have begun to pay for their overconfidence — in the form of expensive bottles of brandy.

On August 22, a group of physicists who wagered that the LHC would quickly confirm the theory settled a 16-year-old bet. In a session at a physics meeting in Copenhagen, theoretical physicist Nima Arkani-Hamed ponied up, presenting a bottle of cognac to physicists who bet that the new particles would be slow to materialize, or might not exist at all.

Whether their pet theories are right or wrong, many theoretical physicists are simply excited that the new LHC data can finally anchor their ideas to reality. “Of course, in the end, nature is going to tell us what’s true,” says theoretical physicist Yonit Hochberg of Cornell University, who spoke on a panel at the meeting.
Supersymmetry is not ruled out by the new data, but if the new particles exist, they must be heavier than scientists expected. “Right now, nature is telling us that if supersymmetry is the right theory, then it doesn’t look exactly like we thought it would,” Hochberg says.

Since June 2015, the LHC, at the European particle physics lab CERN near Geneva, has been smashing protons together at higher energies than ever before: 13 trillion electron volts. Physicists had been eager to see if new particles would pop out at these energies. But the results have agreed overwhelmingly with the standard model, the established theory that describes the known particles and their interactions.
It’s a triumph for the standard model, but a letdown for physicists who hope to expose cracks in that theory. “There is a low-level panic,” says theoretical physicist Matthew Buckley of Rutgers University in Piscataway, N.J. “We had a long time without data, and during that time many theorists thought up very compelling ideas. And those ideas have turned out to be wrong.”
Physicists know that the standard model must break down somewhere. It doesn’t explain why the universe contains more matter than antimatter, and it fails to pinpoint the origins of dark matter and dark energy, which make up 95 percent of the matter and energy in the cosmos.
Even the crowning achievement of the LHC, the discovery of the Higgs boson in 2012 (SN: 7/28/2012, p. 5), hints at the sickness within the standard model. The mass of the Higgs boson, at 125 billion electron volts, is vastly smaller than theory naïvely predicts. That mass, physicists worry, is not “natural” — the factors that contribute to the Higgs mass must be finely tuned to cancel each other out and keep the mass small (SN Online: 10/22/13).
Among the many theories that attempt to fix the standard model’s woes, supersymmetry is the most celebrated. “Supersymmetry was this dominant paradigm for 30 years because it was so beautiful, and it was so perfect,” says theoretical physicist Nathaniel Craig of the University of California, Santa Barbara. But supersymmetry is becoming less appealing as the LHC collects more collisions with no signs of superpartners.
Supersymmetry solves three major problems in physics: It explains why the Higgs is so light; it provides a particle that serves as dark matter; and it implies that the three forces of the standard model (electromagnetism and the weak and strong nuclear forces) unite into one at high energies.
If a simple version of supersymmetry is correct, the LHC probably should have detected superpartners already. As the LHC rules out such particles at ever-higher masses, retaining the appealing properties of supersymmetry requires increasingly convoluted theoretical contortions, stripping the idea of some of the elegance that first persuaded scientists to embrace it. “If supersymmetry exists, it is not my parents’ supersymmetry,” says Buckley. “That kind of means it can’t be the most compelling version.”
Still, many physicists are adopting an attitude of “keep calm and carry on.” They aren’t giving up hope that evidence for the theory — or other new particle physics phenomena — will show up soon. “I am not yet particularly worried,” says theoretical physicist Carlos Wagner of the University of Chicago. “I think it’s too early. We just started this process.” The LHC has delivered only 1 percent of the data it will collect over its lifetime. Hopes of quickly finding new phenomena were too optimistic, Wagner says.

Experimental physicists, too, maintain that there is plenty of room for new discoveries. But it could take years to uncover them. “I would be very, very happy if we were able to find some new phenomena, some new state of matter, within the first two or three years” of running the LHC at its boosted energy, Tiziano Camporesi of the LHC’s CMS experiment said during a news conference at the International Conference on High Energy Physics, held in Chicago in August. “That would mean that nature has been kind to us.”
But other LHC scientists admit they had expected new discoveries by now. “The fact that we haven’t seen something, I think, is in general quite surprising to the community,” said Guy Wilkinson, spokesperson for the LHCb experiment. “This isn’t a failure — this is perhaps telling us something.”

The lack of new particles forces theoretical physicists to consider new explanations for the mass of the Higgs. To be consistent with data, those explanations can’t create new particles the LHC should already have seen.
Some physicists — particularly those of the younger generations — are ready to move on to new ideas. “I’m personally not attached to supersymmetry,” says David Kaplan of Johns Hopkins University. Kaplan and colleagues recently proposed the “relaxion” hypothesis, which allows the Higgs mass to change — or relax — as the universe evolves. Under this theory, the Higgs mass gets stuck at a small value, never reaching the high mass otherwise predicted.
Another idea, which Craig favors, is a family of theories by the name of “neutral naturalness.” Like supersymmetry, this idea proposes symmetries of nature that solve the problem of the Higgs mass, but it doesn’t predict new particles that should have been seen at the LHC. “The theories, they’re not as beautiful as just simple supersymmetry, but they’re motivated by data,” Craig says.
One particularly controversial idea is the multiverse hypothesis: there may be innumerable other universes, each with a different Higgs mass. Perhaps humans observe such a light Higgs simply because a small mass is necessary for heavy elements like carbon to be produced in stars — a universe with a small Higgs may be the only kind in which life can exist to do the observing.
It’s possible that physicists’ fears will be realized — the LHC could deliver the Higgs boson and nothing else. Such a result would leave theoretical physicists with few clues to work with. Still, says Hochberg, “if that’s the case, we’ll still be learning something very deep about nature.”