Aging happens to each of us, everywhere, all the time. It is so ever-present and slow that we tend to take little notice of it. Until we do. Those small losses in function and health eventually accumulate into life-changers.
Despite its constancy in our lives, aging remains mysterious on a fundamental level. Scientists still struggle to fully explain its root causes and its myriad effects. Even as discoveries pile up (SN: 12/26/15, p. 20), a clear picture has yet to emerge. Some researchers argue that individual life spans and the problems associated with aging are programmed into our bodies, like ticking time bombs we carry from birth. Others see the process as a buildup of tiny failures, a chaotic and runaway deterioration that steals vim and vigor, if not health and life itself. There is no unified theory of aging, and so there is no one way to stop it. As longtime aging researcher Caleb Finch put it in an interview with Science News: Aging is still a black box.

The issue is an urgent one. The globe’s population has never been older. According to the U.S. Census Bureau’s 2015 An Aging World report, by 2020 people 65 and older will, for the first time in history, outnumber children 5 and under worldwide. Seniors will make up 22.1 percent of the U.S. population in 2050, and nearly 17 percent globally (a whopping 1.6 billion people), the demographers predict. Worldwide, the 80-and-above crowd will grow from 126 million to 447 million. It’s a population sea change that will have ripple effects on culture, economics, medicine and society.
Scientists working at the frontiers of the field do agree that there are probably many ways to slow aging, Tina Hesman Saey reports in this special issue. Saey sums up current thinking on the actors of aging, as well as a number of intriguing approaches that might well tame aging’s effects. The goal, most agree, is not to find a fountain of youth but the keys to prolonging health.
It turns out that healthy aging in people does occur naturally. It is, however, in the words of Ali Torkamani, “an extremely rare phenotype.” Torkamani leads a genetic study of people 80 and older who are living free of chronic disease, described by Saey in her story. He and his team failed to find a single set of genes that protect these “wellderly.” Instead, the people studied carry a plethora of different genetic variants. They do share a lower risk of heart disease and Alzheimer’s. And, he says, the data hint that gene variants linked to key cognitive areas may be at play, leading him to ask: “Is cognitive health just one of the components of healthy aging? Or is there something about having a healthy brain that protects against other signs of aging?”
Exactly what happens in the brain as we age is a question Laura Sanders takes up in “The mature mind.” An intriguing idea is that the brain begins to lose the specialization that makes it so efficient in its prime, she reports. Further afield, Susan Milius considers a hydra and a weed, examining what these outliers of aging can tell us about how aging evolved and how flexible it truly is. Her answer: Very. The sheer diversity in life cycles and declines gives credence to arguments that while death may come for all of us, a robust old age could well be in the cards for more of us.
Those little piles of dirt that ant colonies leave on the ground are an indication that ants are busy underground. And they’re moving more soil and sediment than you might think. A new study finds that, over a hectare, colonies of Trachymyrmex septentrionalis fungus-gardening ants in Florida can move some 800 kilograms aboveground and another 200 kilograms below in a year.
The question of how much soil and sand ants can move originated not with entomologists but with geologists and archaeologists. These scientists use a technique called optically stimulated luminescence, or OSL, to date layers of sediment. When minerals such as quartz are exposed to the sun, they suck up and store energy. Scientists can use the amount of energy in buried minerals to determine when they last sat on the surface, taking in the sun.
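The arithmetic behind OSL dating is simple: divide the total radiation dose a buried grain has absorbed since it last saw sunlight (inferred from its stored luminescence) by the local dose rate. A minimal sketch in Python, with illustrative numbers of our own rather than values from any real sample:

```python
# OSL dating in one line: burial age = equivalent dose / environmental dose rate.
# Both numbers below are illustrative placeholders, not real measurements.

equivalent_dose_gy = 12.0       # Gy, dose inferred from the stored luminescence
dose_rate_gy_per_kyr = 2.0      # Gy per thousand years, from sediment chemistry

age_kyr = equivalent_dose_gy / dose_rate_gy_per_kyr
print(f"last sunlight exposure: ~{age_kyr:.0f} thousand years ago")
```

The method works only if the grain has stayed buried since that last exposure, which is exactly the assumption that burrowing ants can violate.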
But ants might muck this up. To find out, a group of geologists and archaeologists reached out to Walter Tschinkel, an entomologist at Florida State University. Figuring out how much sand and soil ants dig up and deposit on the surface — called biomantling — is relatively easy, especially if the color of the soil they’re digging up is different from that found on the ground. But tracking movement underground, or bioturbation, is a bit more complicated. Tschinkel and his former student Jon Seal, now an ecologist at the University of Texas at Tyler, turned to an area of the Apalachicola National Forest in Florida dubbed “Ant Heaven” for its abundant and diverse collection of ants. Tschinkel has worked there since the 1970s, and for the last six years, he has been monitoring some 450 colonies of harvester ants, which bring up plenty of sandy soil from underground. But he was also curious about the fungus-gardening ants.
Tschinkel and Seal had already shown that the fungus-gardening ant “is extremely abundant, that it moves a very large amount of soil, and that as the summer warms up, it digs a deeper chamber and deposits that soil in higher chambers without exposing it to light,” Tschinkel says. “In other words, it appeared to do a very large amount of soil mixing of the type [that had been] described in harvester ants.”
No one had ever quantified an ant colony’s subterranean digging before. Tschinkel and Seal started by digging 10 holes a meter deep and filling them with layers of native sand mixed with various colors of art sand — pink, blue, purple or yellow, green and orange, with plain forest sand at the top. Each hole was then topped with a cage, and an ant colony was transferred in, along with the fungus that the ants cultivate like a crop. Throughout the experiment, the researchers collected sand that the ants deposited on the surface and provided the colonies with food for their fungus, including leaves, small flowers and oatmeal. Seven months later, Tschinkel and Seal carefully excavated the nine surviving ant colonies and quantified grains of sand moved from one sand layer to another. The team reports its findings July 8 in PLOS ONE.
By the end of the study, each ant colony had deposited an average of 758 grams of sand on the surface and moved another 153 grams between one colored layer and another underground, mostly upward. The ants dug chambers to farm their fungus, and they sometimes filled them up with sand from deeper layers as they dug new chambers in areas with temperature and humidity best suited for cultivation. With more than a thousand nests per hectare, the ants may be moving about a metric ton of sand each year, covering the surface with 6 centimeters of soil over the course of a millennium, the researchers calculated.
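The scaling from colony to landscape is back-of-the-envelope arithmetic. A quick check in Python, using the per-hectare figures reported above; the soil bulk density is our own assumption, not a value from the paper:

```python
# Back-of-the-envelope check of the ants' earth-moving rates.
# Per-hectare figures (~800 kg aboveground, ~200 kg below, per year) come
# from the study; the bulk density of loose sandy soil is our assumption.

HECTARE_M2 = 10_000             # one hectare in square meters
surface_kg_per_year = 800       # sand deposited aboveground, per hectare
below_kg_per_year = 200         # sand shifted between underground layers
density_kg_per_m3 = 1_400       # assumed bulk density of sandy soil

total_tonnes = (surface_kg_per_year + below_kg_per_year) / 1000
print(f"total moved: ~{total_tonnes:.0f} metric ton per hectare per year")

# Depth of the surface layer built up over a millennium:
volume_m3 = surface_kg_per_year * 1000 / density_kg_per_m3   # 1,000 years
depth_cm = volume_m3 / HECTARE_M2 * 100
print(f"surface layer after 1,000 years: ~{depth_cm:.0f} cm")
```

Under that assumed density, the answer lands on the researchers’ figures: about a metric ton per hectare per year, and roughly 6 centimeters of soil per millennium.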
All of this mixing and moving could prove a challenge for geologists and archaeologists relying on OSL. “When ants deposit sand from deeper levels at higher levels (or the reverse), they are mixing sand with different light-emitting capacity, and therefore with different measured ages,” Tschinkel notes. “People who use OSL need to know how much such mixing occurs, and then devise ways of dealing with it.” Now that scientists know that ants could be a problem, they should be able to develop ways to work around the little insects.
Killing snails beats treating people with drugs when it comes to curbing snail fever. That’s the takeaway of a new study of the disease, also known as schistosomiasis, which affects more than 250 million people worldwide. It’s caused by a water-borne parasite that reproduces inside some snails. Parasite larvae burrow through people’s skin and can cause infertility, cognitive problems and even cancer. Today, most countries manage the disease with a drug that kills the parasite in human hosts. Some nations also control snail populations to hamstring the parasite’s life cycle, but that’s a less popular approach.
But snail control turns out to be more effective than drugs for curbing snail fever, researchers report July 21 in PLOS Neglected Tropical Diseases. The scientists compared disease management strategies used in 83 countries over the last century, including killing snails, administering drugs and changing infrastructure (such as sanitation services). Projects using snail control cut disease by over 90 percent; those without it, by less than 40 percent.
The researchers suggest a blend of drug therapy and snail management to eradicate disease in the future.
Pollen tainted with neonicotinoid pesticides could interfere with male honeybee reproduction, a new study finds.
After bee colonies fed on pollen spiked with the pesticides thiamethoxam and clothianidin, male bees, or drones, produced almost 40 percent fewer living sperm than did males from colonies fed clean pollen, researchers report July 27 in Proceedings of the Royal Society B. The concentrations of the pesticides, 4.5 parts per billion and 1.5 parts per billion, respectively, were in the range of what free-living bees encounter when foraging around crops, study coauthor Lars Straub of the University of Bern, Switzerland, says.
Pollinator conservationists have raised concerns that chronic exposure to neonicotinoids widely used on crops is inadvertently weakening honeybee colonies working the fields. The amount of sperm males produce might affect how well a colony sustains itself because young queens mate (with about 15 males on average) during one or two early frenzies and then depend on that stored sperm for the rest of their egg-laying years. The new study is the first to examine neonicotinoid effects on honeybee sperm, Straub says.
Young sunflowers grow better when they track the sun’s daily motion from east to west across the sky. An internal clock helps control the behavior, biologist Stacey Harmer and colleagues report in the Aug. 5 Science.
Depending on the time of day, certain growth genes appear to be activated to different degrees on opposing sides of young sunflowers’ stems. The east side of a stem grows faster during the day, causing the stem to gradually bend from east to west. The west side grows faster at night, reorienting the plant to prepare it for the next morning. “At dawn, they’re already facing east again,” says Harmer, of the University of California, Davis. The behavior helped sunflowers grow bigger, her team found. Young plants continued to bend from east to west each day even when their light source didn’t move. So Harmer and her colleagues concluded that the behavior is driven by an internal clock like the one that controls human sleep/wake cycles, rather than being solely a response to available light.
That’s probably advantageous, Harmer says, “because you have a system that’s set up to run even if the environment changes transiently.” A cloudy morning doesn’t stop the plants from tracking, for instance.
Contrary to popular belief, mature sunflowers don’t track the sun — they perpetually face east. That’s probably because their stems have stopped growing. But Harmer and her colleagues found an advantage for the fixed orientation, too: East-facing heads get warmer in the sun than west-facing ones and attract more pollinators.
Pulling consecutive all-nighters makes some brain areas groggier than others. Regions involved with problem solving and concentration become especially sluggish when sleep-deprived, a new study using brain scans reveals. Other areas keep ticking along, appearing to be less affected by a mounting sleep debt.
The results might lead to a better understanding of the rhythmic nature of symptoms in certain psychiatric or neurodegenerative disorders, says study coauthor Derk-Jan Dijk. People with dementia, for instance, can be afflicted with “sundowning,” which worsens their symptoms at the end of the day. More broadly, the findings, published August 12 in Science, document the brain’s response to too little shut-eye. “We’ve shown what shift workers already know,” says Dijk, of the University of Surrey in England. “Being awake at 6 a.m. after a night of no sleep, it isn’t easy. But what wasn’t known was the remarkably different response of these brain areas.”
The research reveals the differing effects of the two major factors that influence when you conk out: the body’s roughly 24-hour circadian clock, which helps keep you awake in the daytime and put you to sleep when it’s dark, and the body’s drive to sleep, which steadily increases the longer you’re awake.
Dijk and collaborators at the University of Liege in Belgium assessed the cognitive function of 33 young adults who went without sleep for 42 hours. Over the course of this sleepless period, the participants performed some simple tasks testing reaction time and memory. The sleepy subjects also underwent 12 brain scans during their ordeal and another scan after 12 hours of recovery sleep. Throughout the study, the researchers also measured participants’ levels of the sleep hormone melatonin, which served as a way to track the hands on their master circadian clocks.
Activity in some brain areas, such as the thalamus, a central hub that connects many other structures, waxed and waned in sync with the circadian clock. But in other areas, especially those in the brain’s outer layer, the effects of this master clock were overridden by the body’s drive to sleep. Brain activity diminished in these regions as sleep debt mounted, the scans showed.
Sleep deprivation also meddled with the participants’ performance on simple tasks, effects influenced both by the mounting sleep debt and the cycles of the master clock. Performance suffered in the night, but improved somewhat during the second day, even after no sleep. While the brain’s circadian clock signal is known to originate in a cluster of nerve cells known as the suprachiasmatic nucleus, it isn’t clear where the drive to sleep comes from, says Charles Czeisler, a sleep expert at Harvard Medical School. The need to sleep might grow as toxic metabolites build up after a day’s worth of brain activity, or be triggered when certain regions run out of fuel.
Sleep drive’s origin is just one of many questions raised by the research, says Czeisler, who says the study “opens up a new era in our understanding of sleep-wake neurobiology.” The approach of tracking activity with brain scans and melatonin measurements might reveal, for example, how a lack of sleep during the teenage years influences brain development.
Such an approach also might lead to the development of a test that reflects the strength of the body’s sleep drive, Czeisler says. That measurement might help clinicians spot chronic sleep deprivation, a health threat that can masquerade as attention-deficit/hyperactivity disorder in children.
blue whirl \bloo werl\ n. A swirling flame that appears in fuel floating on the surface of water and glows blue.
An unfortunate mix of electricity and bourbon has led to a new discovery. After lightning hit a Jim Beam warehouse in 2003, a nearby lake was set ablaze when the distilled spirit spilled into the water and ignited. Spiraling tornadoes of fire leapt from the surface. In a laboratory experiment inspired by the conflagration, a team of researchers produced a new, efficiently burning fire tornado, which they named a blue whirl. To re-create the bourbon-fire conditions, the researchers, led by Elaine Oran of the University of Maryland in College Park, ignited liquid fuel floating on a bath of water. They surrounded the blaze with a cylindrical structure that funneled air into the flame to create a vortex with a height of about 60 centimeters. Eventually, the chaotic fire whirl calmed into a blue, cone-shaped flame just a few centimeters tall, the scientists report online August 4 in Proceedings of the National Academy of Sciences.
“Firenadoes” are known to appear in wildfires, when swirling winds and flames combine to form a hellacious, rotating inferno. They burn more efficiently than typical fires, as the whipping winds mix in extra oxygen, which feeds the fire. But the blue whirl is even more efficient; its azure glow indicates complete combustion, which releases little soot, or uncombusted carbon, to the air.
The soot-free blue whirls could be a way of burning off oil spills on water without adding much pollution to the air, the researchers say, if they can find a way to control them in the wild.
Brain scientists Eric Jonas and Konrad Kording had grown skeptical. They weren’t convinced that the sophisticated, big data experiments of neuroscience were actually accomplishing anything. So they devised a devilish experiment.
Instead of studying the brain of a person, or a mouse, or even a lowly worm, the two used advanced neuroscience methods to scrutinize the inner workings of another information processor — a computer chip. The unorthodox experimental subject, the MOS 6502, is the same chip that dazzled early tech junkies and kids alike in the 1980s by powering Donkey Kong, Space Invaders and Pitfall, as well as the Apple I and II computers. Of course, these experiments were rigged. The scientists already knew everything about how the 6502 works.
“The beauty of the microprocessor is that unlike anything in biology, we understand it on every level,” says Jonas, of the University of California, Berkeley. Using a simulation of MOS 6502, Jonas and Kording, of Northwestern University in Chicago, studied the behavior of electricity-moving transistors, along with aspects of the chip’s connections and its output, to reveal how it handles information. Since they already knew what the outcomes should be, they were actually testing the methods.
By the end of their experiments, Jonas and Kording had discovered almost nothing.
Their results — or lack thereof — hit a nerve among neuroscientists. When Jonas presented the work last year at a Kavli Foundation workshop held at MIT, the response from the crowd was split. “A bunch of people said, ‘That’s awesome. I had that idea 10 years ago and never got around to doing it,’ ” Jonas says. “And a bunch of people were like, ‘That’s bullshit. You’re taking the analogy way too far. You’re attacking a straw man.’ ” On May 26, Jonas and Kording shared their results with a wider audience by posting a manuscript on the website bioRxiv.org. Bottom line of their report: Some of the best tools used by neuroscientists turned up plenty of data but failed to reveal anything meaningful about a relatively simple machine. The implications are profound — and discouraging. Current neuroscience methods might not be up for the job when it comes to truly understanding the brain.
The paper “does a great job of articulating something that most thoughtful people believe but haven’t said out loud,” says neuroscientist Anthony Zador of Cold Spring Harbor Laboratory in New York. “Their point is that it’s not clear that the current methods would ever allow us to understand how the brain computes in [a] fundamental way,” he says. “And I don’t necessarily disagree.”
Differences and similarities

Critics, however, contend that the analogy of the brain as a computer is flawed. Terrence Sejnowski of the Salk Institute for Biological Studies in La Jolla, Calif., for instance, calls the comparison “provocative, but misleading.” The brain and the microprocessor are distinct in a huge number of ways. The brain can behave differently in different situations, a variability that adds an element of randomness to its machinations; computers aim to serve up the same response to the same situation every time. And compared with a microprocessor, the brain has an incredible amount of redundancy, with multiple circuits able to step in and compensate when others malfunction.
In microprocessors, the software is distinct from the hardware — any number of programs can run on the same machine. “This is not the case in the brain, where the software is the hardware,” Sejnowski says. And this hardware changes from minute to minute. Unlike the microprocessor’s connections, brain circuits morph every time you learn something new. Synapses grow and connect nerve cells, storing new knowledge.
Brains and microprocessors have very different origins, Sejnowski points out. The human brain has been sculpted over millions of years of evolution to be incredibly specialized, able to spot an angry face at a glance, for instance, or remember a childhood song for years. The 6502, which debuted in 1975, was designed by a small team of humans, who engineered the chip to their exact specifications. The methods for understanding one shouldn’t be expected to work for the other, Sejnowski says.
Yet there are some undeniable similarities. Brains and microprocessors are both built from many small units: 86 billion neurons and 3,510 transistors, respectively. These units can be organized into specialized modules that allow both “organs” to flexibly move information around and hold memories. Those shared traits make the 6502 a legitimate and informative model organism, Jonas and Kording argue. In one experiment, they tested what would happen if they tried to break the 6502 bit by bit. Using a simulation to run their experiments, the researchers systematically knocked out every single transistor one at a time. They wanted to know which transistors were mission-critical to three important “behaviors”: Donkey Kong, Space Invaders and Pitfall. The effort was akin to what neuroscientists call “lesion studies,” which probe how the brain behaves when a certain area is damaged.
The experiment netted 1,565 transistors that could be eliminated without any consequences to the games. But other transistors proved essential. Losing any one of 1,560 transistors made it impossible for the microprocessor to load any of the games.
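The sweep itself is simple to sketch. Below is a minimal Python illustration of the lesion-study protocol; the three-unit circuit, its gate names and the two “behaviors” are all invented for illustration and stand in for the chip’s 3,510 transistors and its three games:

```python
# Schematic of a "lesion sweep": disable one unit at a time and record which
# behaviors break. The three-gate circuit is a toy stand-in of our own
# invention, not the actual 6502 simulation.

def run_circuit(disabled, inputs):
    """A toy circuit of three lesionable units; a disabled unit outputs False."""
    a, b, c = inputs
    t1 = (a and b) and (1 not in disabled)   # unit 1: AND gate
    t2 = (t1 or c) and (2 not in disabled)   # unit 2: OR gate
    t3 = t2 and (3 not in disabled)          # unit 3: output buffer
    return t3

# Each "behavior" pairs an input pattern with the intact circuit's output,
# analogous to checking whether a game still boots.
tests = {
    "behavior_A": ((True, True, False), run_circuit(set(), (True, True, False))),
    "behavior_B": ((False, False, True), run_circuit(set(), (False, False, True))),
}

def lesion_sweep(n_units):
    """Knock out each unit in turn; report the behaviors it is essential for."""
    report = {}
    for unit in range(1, n_units + 1):
        report[unit] = [name for name, (inp, expected) in tests.items()
                        if run_circuit({unit}, inp) != expected]
    return report

print(lesion_sweep(3))  # unit 1 matters for one behavior; units 2 and 3 for both
```

Even in this toy, the sweep only labels units as essential or not; as the researchers found with the real chip, it says nothing about what each unit actually computes.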
Big gap

Those results are hard to parse into something meaningful. This type of experiment, like those in human and animal brains, is informative in some ways. But it doesn’t constitute understanding, Jonas argues. The gulf between knowing that a particular broken transistor can stymie a game and actually understanding how that transistor helps compute is “incredibly vast,” he says.
The transistor “lesion” experiment “gets at the core problem that we are struggling with in neuroscience,” Zador says. “Although we can attribute different brain functions to different brain areas, we don’t actually understand how the brain computes.” Other experiments reported in the study turned up red herrings — results that looked similar to potentially useful brain data, but were ultimately meaningless. Jonas and Kording looked at the average activity of groups of nearby transistors to assess patterns about how the microprocessor works. Neuroscientists do something similar when they analyze electrical patterns of groups of neurons. In this task, the microprocessor delivered some good-looking data. Oscillations of activity rippled over the microprocessor in patterns that seemed similar to those of the brain. Unfortunately, those signals are irrelevant to how the computer chip actually operates.
Data from other experiments revealed a few finds, including that the microprocessor contains a clock signal and that it switches between reading and writing memory. Yet these are not key insights into how the chip actually handles information, Jonas and Kording write in their paper.
It’s not that analogous experiments on the brain are useless, Jonas says. But he hopes that these examples reveal how big of a challenge it will be to move from experimental results to a true understanding. “We really need to be honest about what we’re going to pull out here.” Jonas says the results should caution against collecting big datasets in the absence of theories that can help guide experiments and that can be verified or refuted. For the microprocessor, the researchers had a lot of data, yet still couldn’t separate the informative wheat from the distracting chaff. The results “suggest that we need to try and push a little bit more toward testable theories,” he says.
That’s not to say that big datasets are useless, he is quick to point out. Zador agrees. Some giant collections of neural information will probably turn out to be wastes of time. But “the right dataset will be useful,” he says. And the right bit of data might hold the key that propels neuroscientists forward.
Despite the pessimistic overtones in the paper, Christof Koch of the Allen Institute for Brain Science in Seattle is a fan. “You got to love it,” Koch says. At its heart, the experiment on the 6502 “sends a good message of humility,” he adds. “It will take a lot of hard work by a lot of very clever people for many years to understand the brain.” But he says that tenacity, especially in the face of such a formidable challenge, will eventually lead to clarity.
Zador recently opened a fortune cookie that read, “If the brain were so simple that we could understand it, we would be so simple that we couldn’t.” That quote, from IBM researcher Emerson Pugh, throws down the challenge, Zador says. “The alternative is that we will never understand it,” he says. “I just can’t believe that.”
Editor’s note: When reporting results from the functional MRI scans of dogs’ brains, left and right were accidentally reversed in all images, the researchers report in a correction posted April 7 in Science. While dogs and most humans use different hemispheres of the brain to process meaning and intonation — instead of the same hemispheres, as was suggested — lead author Attila Andics says the more important finding still stands: Dogs’ brains process different aspects of human speech in different hemispheres.

Dogs process speech much like people do, a new study finds. Meaningful words like “good boy” activate the left side of a dog’s brain regardless of tone of voice, while a region on the right side of the brain responds to intonation, scientists report in the Sept. 2 Science.
Similarly, humans process the meanings of words in the left hemisphere of the brain, and interpret intonation in the right hemisphere. That lets people sort out words that convey meaning from random sounds that don’t. But it has been unclear whether language abilities were a prerequisite for that division of brain labor, says neuroscientist Attila Andics of Eötvös Loránd University in Budapest.
Dogs make ideal test subjects for understanding speech processing because of their close connection to humans. “Humans use words towards dogs in their everyday, normal communication, and dogs pay attention to this speech in a way that cats and hamsters don’t,” says Andics. “When we want to understand how an animal processes speech, it’s important that speech be relevant.” Andics and his colleagues trained dogs to lie still for functional MRI scans, which reveal when and where the brain is responding to certain cues. Then the scientists played the dogs recordings of a trainer saying either meaningful praise words like “good boy,” or neutral words like “however,” either in an enthusiastic tone of voice or a neutral one. The dogs showed increased activity in the left sides of their brains in response to the meaningful words, but not the neutral ones. An area on the right side of the brain reacted to the intonation of those words, separating out enthusiasm from indifference.
When the dogs heard praising words in an enthusiastic tone of voice, neural circuits associated with reward became more active. The dogs had the same neurological response to an excited “Good dog!” as they might to being petted or receiving a tasty treat. Praise words or enthusiastic intonation alone didn’t have the same effect.
Humans stand out from other animals in their ability to use language — that is, to manipulate sequences of sounds to convey different meanings. But the new findings suggest that the ability to hear these arbitrary sequences of sound and link them to meaning isn’t a uniquely human ability.
“I love these results, as they point to how well domestication has shaped dogs to use and track the very same cues that we use to make sense of what other people are saying,” says Laurie Santos, a cognitive psychologist at Yale University.
While domestication made dogs more attentive to human speech, humans have been close companions with dogs for only about 30,000 years. That’s too little time for a trait like lateralized speech processing to evolve, Andics thinks. He suspects that some older underlying neural mechanism for processing meaningful sounds is present in other animals, too.
It’s just hard to test in other species, he says — in part because cats don’t take as kindly to being put inside MRI scanners and asked to hold still.
A beautiful but unproved theory of particle physics is withering in the harsh light of data.
For decades, many particle physicists have devoted themselves to the beloved theory, known as supersymmetry. But it’s beginning to seem that the zoo of new particles that the theory predicts — the heavier cousins of known particles — may live only in physicists’ imaginations. Or if such particles, known as superpartners, do exist, they’re not what physicists expected.
New data from the world’s most powerful particle accelerator — the Large Hadron Collider, now operating at higher energies than ever before — show no traces of superpartners. And so the theory’s most fervent supporters have begun to pay for their overconfidence — in the form of expensive bottles of brandy. On August 22, a group of physicists who wagered that the LHC would quickly confirm the theory settled a 16-year-old bet. In a session at a physics meeting in Copenhagen, theoretical physicist Nima Arkani-Hamed ponied up, presenting a bottle of cognac to physicists who bet that the new particles would be slow to materialize, or might not exist at all. Whether their pet theories are right or wrong, many theoretical physicists are simply excited that the new LHC data can finally anchor their ideas to reality. “Of course, in the end, nature is going to tell us what’s true,” says theoretical physicist Yonit Hochberg of Cornell University, who spoke on a panel at the meeting.
Supersymmetry is not ruled out by the new data, but if the new particles exist, they must be heavier than scientists expected. “Right now, nature is telling us that if supersymmetry is the right theory, then it doesn’t look exactly like we thought it would,” Hochberg says. Since June 2015, the LHC, at the European particle physics lab CERN near Geneva, has been smashing protons together at higher energies than ever before: 13 trillion electron volts. Physicists had been eager to see if new particles would pop out at these energies. But the results have agreed overwhelmingly with the standard model, the established theory that describes the known particles and their interactions.
It’s a triumph for the standard model, but a letdown for physicists who hope to expose cracks in that theory. “There is a low-level panic,” says theoretical physicist Matthew Buckley of Rutgers University in Piscataway, N.J. “We had a long time without data, and during that time many theorists thought up very compelling ideas. And those ideas have turned out to be wrong.”
Physicists know that the standard model must break down somewhere. It doesn’t explain why the universe contains more matter than antimatter, and it fails to pinpoint the origins of dark matter and dark energy, which make up 95 percent of the matter and energy in the cosmos.
Even the crowning achievement of the LHC, the discovery of the Higgs boson in 2012 (SN: 7/28/2012, p. 5), hints at the sickness within the standard model. The mass of the Higgs boson, at 125 billion electron volts, is vastly smaller than theory naïvely predicts. That mass, physicists worry, is not “natural” — the factors that contribute to the Higgs mass must be finely tuned to cancel each other out and keep the mass small (SN Online: 10/22/13).
Among the many theories that attempt to fix the standard model’s woes, supersymmetry is the most celebrated. “Supersymmetry was this dominant paradigm for 30 years because it was so beautiful, and it was so perfect,” says theoretical physicist Nathaniel Craig of the University of California, Santa Barbara. But supersymmetry is becoming less appealing as the LHC collects more collisions with no signs of superpartners.
Supersymmetry solves three major problems in physics: It explains why the Higgs is so light; it provides a particle that serves as dark matter; and it implies that the three forces of the standard model (electromagnetism and the weak and strong nuclear forces) unite into one at high energies.
If a simple version of supersymmetry is correct, the LHC probably should have detected superpartners already. As the LHC rules out such particles at ever-higher masses, retaining the appealing properties of supersymmetry requires increasingly convoluted theoretical contortions, stripping the idea of some of the elegance that first persuaded scientists to embrace it. “If supersymmetry exists, it is not my parents’ supersymmetry,” says Buckley. “That kind of means it can’t be the most compelling version.”
Still, many physicists are adopting an attitude of “keep calm and carry on.” They aren’t giving up hope that evidence for the theory — or other new particle physics phenomena — will show up soon. “I am not yet particularly worried,” says theoretical physicist Carlos Wagner of the University of Chicago. “I think it’s too early. We just started this process.” The LHC has delivered only 1 percent of the data it will collect over its lifetime. Hopes of quickly finding new phenomena were too optimistic, Wagner says. Experimental physicists, too, maintain that there is plenty of room for new discoveries. But it could take years to uncover them. “I would be very, very happy if we were able to find some new phenomena, some new state of matter, within the first two or three years” of running the LHC at its boosted energy, Tiziano Camporesi of the LHC’s CMS experiment said during a news conference at the International Conference on High Energy Physics, held in Chicago in August. “That would mean that nature has been kind to us.”
But other LHC scientists admit they had expected new discoveries by now. “The fact that we haven’t seen something, I think, is in general quite surprising to the community,” said Guy Wilkinson, spokesperson for the LHCb experiment. “This isn’t a failure — this is perhaps telling us something.” The lack of new particles forces theoretical physicists to consider new explanations for the mass of the Higgs. To be consistent with data, those explanations can’t create new particles the LHC should already have seen.
Some physicists — particularly those of the younger generations — are ready to move on to new ideas. “I’m personally not attached to supersymmetry,” says David Kaplan of Johns Hopkins University. Kaplan and colleagues recently proposed the “relaxion” hypothesis, which allows the Higgs mass to change — or relax — as the universe evolves. Under this theory, the Higgs mass gets stuck at a small value, never reaching the high mass otherwise predicted.
Another idea, which Craig favors, is a family of theories known as “neutral naturalness.” Like supersymmetry, this idea proposes symmetries of nature that solve the problem of the Higgs mass, but it doesn’t predict new particles that should have been seen at the LHC. “The theories, they’re not as beautiful as just simple supersymmetry, but they’re motivated by data,” Craig says.
One particularly controversial idea is the multiverse hypothesis. There may be innumerable other universes, each with a different Higgs mass. Perhaps humans observe such a light Higgs because a small mass is necessary for heavy elements like carbon to be produced in stars. People might live in a universe with a small Higgs because it’s the only type of universe in which life can exist.
It’s possible that physicists’ fears will be realized — the LHC could deliver the Higgs boson and nothing else. Such a result would leave theoretical physicists with few clues to work with. Still, says Hochberg, “if that’s the case, we’ll still be learning something very deep about nature.”