Even Amelia Earhart couldn’t compete with the great frigate bird. She flew nonstop across the United States for 19 hours in 1932; the frigate bird can stay aloft up to two months without landing, a new study finds. The seabird saves energy on transoceanic treks by capitalizing on the large-scale movement patterns of the atmosphere, researchers report in the July 1 Science. By hitching a ride on favorable winds, the bird can spend more time soaring and less time flapping its wings.
“Frigate birds are really an anomaly,” says Scott Shaffer, an ecologist at San Jose State University in California who wasn’t involved in the study. The large seabird spends much of its life over the open ocean. Both juvenile and adult birds undertake nonstop flights lasting weeks or months, the scientists found. Frigate birds can’t land in the water to catch a meal or take a break because their feathers aren’t waterproof, so scientists weren’t sure how the birds made such extreme journeys.
Researchers attached tiny accelerometers, GPS trackers and heart rate monitors to great frigate birds flying from a tiny island near Madagascar. By pooling data collected over several years, the team re-created what the birds were doing minute-by-minute over long flights — everything from how often the birds flapped their wings to when they dived for food. The birds fly more than 400 kilometers, about equivalent to the distance from Boston to Philadelphia, every day. They don’t even stop to refuel, instead scooping up fish while still in flight.
And when frigate birds do take a break, it’s a quick stopover.
“When they land on a small island, you’d expect they’d stay there for several days. But in fact, they just stay there for a couple hours,” says Henri Weimerskirch, a biologist at the French National Center for Scientific Research in Villiers-en-Bois who led the study. “Even the young birds stay in flight almost continually for more than a year.”
Frigate birds need to be energy Scrooges to fly that far. To minimize wing-flapping time, they seek out routes with upward-moving air currents that help them glide and soar over the water. For instance, the birds skirt the edge of the doldrums, a windless region near the equator. On either side of the region, consistent winds make for favorable flying conditions. Frigate birds ride a thermal roller coaster underneath the bank of fluffy cumulus clouds frequently found there, soaring up to altitudes of 600 meters.
Airplanes tend to avoid flying through cumulus clouds because they cause turbulence. So the researchers were surprised to find that frigate birds sometimes use the rising air inside the clouds to get an extra elevation boost — up to nearly 4,000 meters. The extra height means the birds have more time to gradually glide downward before finding a new updraft. That’s an advantage if the clouds (and the helpful air movement patterns they create) are scarce.
It’s not yet clear how frigate birds manage to sleep while on the wing. Weimerskirch suggests they might nap in several-minute bursts while ascending on thermals.
“To me, the most fascinating thing was how incredibly far these frigate birds go in a single flight, and how closely tied those flight patterns are to the long-term average atmospheric condition,” says Curtis Deutsch, an oceanographer at the University of Washington in Seattle. As these atmospheric patterns shift with climate change, frigate birds might change their path, too.
Even robots can use a heart. Or heart cells, at least.
A new stingray bot about the size of a penny relies on light-sensitive heart cells to swim. Zaps with light force the bot’s fins to flutter, letting researchers drive it through a watery obstacle course, Kit Parker of Harvard University and colleagues report in the July 8 Science.
The new work “extends the state of the art — very much so,” says bioengineer Rashid Bashir of the University of Illinois at Urbana-Champaign. “It’s the next level of sophistication for swimming devices.” For decades, the field of robotics has been dominated by bulky, rigid machines made mostly of metal or hard plastic. But in recent years, some researchers have turned toward softer, squishier materials, such as silicones and rubbery plastics (SN: 11/1/14, p.11). And a small group of scientists have taken it one step further: combining soft materials with living cells.
So far, there’s just a handful of papers on these hybrid machines, says Bashir, whose own lab recently reported the invention of tiny, muscle-wrapped bots that inch along like worms in response to light.
In 2012, Parker’s team built a robotic jellyfish out of silicone and heart muscle cells. Electrically stimulating the cells let the jellyfish push itself through water by squeezing its body into a bell shape and then relaxing.
But, Parker says, “the jellyfish just swam.” He and his colleagues couldn’t steer it around a tank. They can, however, steer the new stingray.
He explains the team’s strategy with a story about his daughter. When she was little, Parker would point his laser pointer at the sidewalk and she’d try to stomp on the dot. He could guide her down a path as she followed the light. “She got to be independent and I got to make sure she didn’t step out into traffic.” Parker guides his stingray bot in a similar way.
Layered on top of the bot’s body — a gold skeleton sandwiched between layers of silicone — lies a serpentine pattern of cells. The pattern is made up of about 200,000 of these cells, harvested from rat hearts and then genetically engineered to contract when hit with pulses of blue light. Flashing the light at the bot sets off a wave of contractions, making the fins undulate, like a flag rippling in the wind. To make the stingray turn, the team stimulates the bot’s right and left fins separately. Faster flashing on the right side makes the ray turn left and vice versa, Parker says.
By moving the lights slowly across a fluid-filled chamber, the researchers led the bot in a curving path around three obstacles.
“It’s very impressive,” says MIT computer scientist Daniela Rus. The stingray is “capable of a new type of locomotion that had not been seen before” in robots, she says.
Bashir says he can envision such devices one day used in biomedicine or even environmental cleanup: Perhaps researchers could program cells on a swimming bot to suck toxicants out of lakes or streams. But the work is still in its early days, he says.
Parker, a bioengineer interested in cardiac cell biology, has something entirely different in mind. He wants to create an artificial heart that children born with malformed hearts could use as a replacement. Like a heart, a stingray’s muscular body is a pump, he says, designed to move fluids. The robot gave Parker a chance to work on assembling a pump made with living materials.
“Some engineers build things out of aluminum. I build things out of cells — and I need to practice,” he says. “So I practice building pumps.”
There’s another upside to the robot too, he adds: “It’s cool and fun.”
Aging happens to each of us, everywhere, all the time. It is so ever-present and slow that we tend to take little notice of it. Until we do. Those small losses in function and health eventually accumulate into life-changers.
Despite its constancy in our lives, aging remains mysterious on a fundamental level. Scientists still struggle to fully explain its root causes and its myriad effects. Even as discoveries pile up (SN: 12/26/15, p. 20), a clear picture has yet to emerge. Some researchers argue that individual life spans and the problems associated with aging are programmed into our bodies, like ticking time bombs we carry from birth. Others see the process as a buildup of tiny failures, a chaotic and runaway deterioration that steals vim and vigor, if not health and life itself. There is no unified theory of aging, and that means there is no one way to stop it. As longtime aging researcher Caleb Finch put it in an interview with Science News: Aging is still a black box.

The issue is an urgent one. The globe’s population has never been older. According to the U.S. Census Bureau’s 2015 An Aging World report, by 2020 the number of people 65 and older worldwide will outnumber children 5 and under for the first time in history. Seniors will make up 22.1 percent of the U.S. population in 2050, and nearly 17 percent globally (a whopping 1.6 billion people), the demographers predict. Worldwide, the 80-and-above crowd will grow from 126 million to 447 million. It’s a population sea change that will have ripple effects on culture, economics, medicine and society.
Scientists working at the frontiers of the field do agree that there are probably many ways to slow aging, Tina Hesman Saey reports in this special issue. Saey sums up current thinking on the actors of aging, as well as a number of intriguing approaches that might well tame aging’s effects. The goal, most agree, is not to find a fountain of youth but the keys to prolonging health.
It turns out that healthy aging in people does occur naturally. It is, however, in the words of Ali Torkamani, “an extremely rare phenotype.” Torkamani leads a genetic study of people 80 and older who are living free of chronic disease, described by Saey in her story. He and his team failed to find a single set of genes that protect these “wellderly.” Instead, the people studied carry a plethora of different genetic variants. They do share a lower risk of heart disease and Alzheimer’s. And, he says, the data hint that gene variants linked to key cognitive areas may be at play, leading him to ask: “Is cognitive health just one of the components of healthy aging? Or is there something about having a healthy brain that protects against other signs of aging?”
Exactly what happens in the brain as we age is a question Laura Sanders takes up in “The mature mind.” An intriguing idea is that the brain begins to lose the specialization that makes it so efficient in its prime, she reports. Further afield, Susan Milius considers a hydra and a weed, examining what these outliers of aging can tell us about how aging evolved and how flexible it truly is. Her answer: Very. The sheer diversity in life cycles and declines gives credence to arguments that while death may come for all of us, a robust old age could well be in the cards for more of us.
Those little piles of dirt that ant colonies leave on the ground are an indication that ants are busy underground. And they’re moving more soil and sediment than you might think. A new study finds that, over a hectare, colonies of Trachymyrmex septentrionalis fungus-gardening ants in Florida can move some 800 kilograms aboveground and another 200 kilograms below in a year.
The question of how much soil and sand ants can move originated not with entomologists but with geologists and archaeologists. These scientists use a technique called optically stimulated luminescence, or OSL, to date layers of sediment. When minerals such as quartz are exposed to the sun, they suck up and store energy. Scientists can use the amount of energy in buried minerals to determine when they last sat on the surface, taking in the sun.
But ants might muck this up. To find out, a group of geologists and archaeologists reached out to Walter Tschinkel, an entomologist at Florida State University. Figuring out how much sand and soil ants dig up and deposit on the surface — called biomantling — is relatively easy, especially if the color of the soil they’re digging up is different from that found on the ground. But tracking movement underground, or bioturbation, is a bit more complicated. Tschinkel and his former student Jon Seal, now an ecologist at the University of Texas at Tyler, turned to an area of the Apalachicola National Forest in Florida dubbed “Ant Heaven” for its abundant and diverse collection of ants. Tschinkel has worked there since the 1970s, and for the last six years, he has been monitoring some 450 colonies of harvester ants, which bring up plenty of sandy soil from underground. But he was also curious about the fungus-gardening ants.
Tschinkel and Seal had already shown that the fungus-gardening ant “is extremely abundant, that it moves a very large amount of soil, and that as the summer warms up, it digs a deeper chamber and deposits that soil in higher chambers without exposing it to light,” Tschinkel says. “In other words, it appeared to do a very large amount of soil mixing of the type [that had been] described in harvester ants.”
No one had ever quantified an ant colony’s subterranean digging before. Tschinkel and Seal started by digging 10 holes a meter deep and filling them with layers of native sand mixed with various colors of art sand — pink, blue, purple or yellow, green and orange, with plain forest sand at the top. Each hole was then topped with a cage, and an ant colony was transferred in, along with the fungus that the ants cultivate like a crop. Throughout the experiment, the researchers collected sand that the ants deposited on the surface and provided the colonies with food for their fungus, including leaves, small flowers and oatmeal. Seven months later, Tschinkel and Seal carefully excavated the nine surviving ant colonies and quantified grains of sand moved from one sand layer to another. The team reports its findings July 8 in PLOS ONE.
By the end of the study, each ant colony had deposited an average of 758 grams of sand on the surface and moved another 153 grams between one colored layer and another underground, mostly upward. The ants dug chambers to farm their fungus, and they sometimes filled them up with sand from deeper layers as they dug new chambers in areas with temperature and humidity best suited for cultivation. With more than a thousand nests per hectare, the ants may be moving about a metric ton of sand each year, covering the surface with 6 centimeters of soil over the course of a millennium, the researchers calculated.
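The per-hectare figures follow from simple scaling of the per-colony measurements. A quick back-of-the-envelope check (the round nest density and the sand density used here are assumptions for illustration, not values reported in the paper):

```python
# Back-of-the-envelope check of the per-hectare estimates.
# Assumptions not taken from the paper: 1,000 nests per hectare
# (the study says "more than a thousand") and a loose-sand
# density of about 1,300 kg per cubic meter.

surface_kg = 0.758        # sand one colony deposited on the surface
mixed_kg = 0.153          # sand one colony moved between layers underground
nests_per_hectare = 1_000

total_kg = (surface_kg + mixed_kg) * nests_per_hectare
print(f"~{total_kg:.0f} kg of sand moved per hectare per year")

# Depth of surface deposition accumulated over a millennium:
sand_density = 1_300      # kg per cubic meter, assumed
hectare_m2 = 10_000
depth_cm = surface_kg * nests_per_hectare * 1_000 / sand_density / hectare_m2 * 100
print(f"~{depth_cm:.0f} cm of soil deposited in 1,000 years")
```

The totals land near a metric ton per hectare and roughly 6 centimeters per millennium; a denser packing assumption pushes the depth toward 5 centimeters, so the point is the order of magnitude, not the exact value.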
All of this mixing and moving could prove a challenge for geologists and archaeologists relying on OSL. “When ants deposit sand from deeper levels at higher levels (or the reverse), they are mixing sand with different light-emitting capacity, and therefore with different measured ages,” Tschinkel notes. “People who use OSL need to know how much such mixing occurs, and then devise ways of dealing with it.” Now that scientists know that ants could be a problem, they should be able to develop ways to work around the little insects.
To fight one tropical disease, target the snails that help spread it. That’s the takeaway of a new study of snail fever, or schistosomiasis, which affects more than 250 million people worldwide. The disease is caused by a water-borne parasite that reproduces inside some snails. Parasite larvae burrow through people’s skin and can cause infertility, cognitive problems and even cancer. Today, most countries manage the disease with a drug that kills the parasite in human hosts. Some nations also control snail populations to hamstring the parasite’s life cycle, but that’s a less popular approach.
But snail control turns out to be more effective than drugs for curbing snail fever, researchers report July 21 in PLOS Neglected Tropical Diseases. The scientists compared a range of disease management strategies in 83 countries in the last century that included killing snails, using drugs or changing infrastructure (such as sanitation services). Projects using snail control cut disease by over 90 percent; those without it, by less than 40 percent.
The researchers suggest a blend of drug therapy and snail management to eradicate disease in the future.
Pollen tainted with neonicotinoid pesticides could interfere with male honeybee reproduction, a new study finds.
After bee colonies fed on pollen spiked with the pesticides thiamethoxam and clothianidin, male bees, or drones, produced almost 40 percent fewer living sperm than did males from colonies fed clean pollen, researchers report July 27 in Proceedings of the Royal Society B. The concentrations of the pesticides, 4.5 parts per billion and 1.5 parts per billion, respectively, were in the range of what free-living bees encounter when foraging around crops, study coauthor Lars Straub of the University of Bern, Switzerland, says.
Pollinator conservationists have raised concerns that chronic exposure to neonicotinoids widely used on crops is inadvertently weakening honeybee colonies working the fields. The amount of sperm males produce might affect how well a colony sustains itself because young queens mate (with about 15 males on average) during one or two early frenzies and then depend on that stored sperm for the rest of their egg-laying years. The new study is the first to examine neonicotinoid effects on honeybee sperm, Straub says.
Young sunflowers grow better when they track the sun’s daily motion from east to west across the sky. An internal clock helps control the behavior, biologist Stacey Harmer and colleagues report in the Aug. 5 Science.
Depending on the time of day, certain growth genes appear to be activated to different degrees on opposing sides of young sunflowers’ stems. The east side of their stems grows faster during the day, causing the stems to gradually bend from east to west. The west side grows faster at night, reorienting the plants to prepare them for the next morning. “At dawn, they’re already facing east again,” says Harmer, of the University of California, Davis. The behavior helped sunflowers grow bigger, her team found. Young plants continued to bend from east to west each day even when their light source didn’t move. So Harmer and her colleagues concluded that the behavior was influenced by an internal clock like the one that controls human sleep/wake cycles, instead of being solely in response to available light.
That’s probably advantageous, Harmer says, “because you have a system that’s set up to run even if the environment changes transiently.” A cloudy morning doesn’t stop the plants from tracking, for instance.
Contrary to popular belief, mature sunflowers don’t track the sun — they perpetually face east. That’s probably because their stems have stopped growing. But Harmer and her colleagues found an advantage for the fixed orientation, too: Eastern-facing heads get warmer in the sun than westward-facing ones and attract more pollinators.
Pulling consecutive all-nighters makes some brain areas groggier than others. Regions involved with problem solving and concentration become especially sluggish when sleep-deprived, a new study using brain scans reveals. Other areas keep ticking along, appearing to be less affected by a mounting sleep debt.
The results might lead to a better understanding of the rhythmic nature of symptoms in certain psychiatric or neurodegenerative disorders, says study coauthor Derk-Jan Dijk. People with dementia, for instance, can be afflicted with “sundowning,” which worsens their symptoms at the end of the day. More broadly, the findings, published August 12 in Science, document the brain’s response to too little shut-eye. “We’ve shown what shift workers already know,” says Dijk, of the University of Surrey in England. “Being awake at 6 a.m. after a night of no sleep, it isn’t easy. But what wasn’t known was the remarkably different response of these brain areas.”
The research reveals the differing effects of the two major factors that influence when you conk out: the body’s roughly 24-hour circadian clock, which helps keep you awake in the daytime and put you to sleep when it’s dark, and the body’s drive to sleep, which steadily increases the longer you’re awake.
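These two factors correspond to the textbook “two-process” picture of sleep regulation, which is standard background rather than something introduced in this study. A minimal sketch, with illustrative (not fitted) time constants and amplitudes:

```python
import math

# Toy two-process sleep model (standard textbook form; the time
# constants, amplitudes and phases here are illustrative choices,
# not parameters from the study).

def sleep_pressure(hours_awake, tau=18.0, ceiling=1.0):
    """Homeostatic drive S: rises toward a ceiling the longer you are awake."""
    return ceiling * (1 - math.exp(-hours_awake / tau))

def circadian_drive(clock_hour, amplitude=0.3):
    """Circadian signal C: promotes wakefulness by day, wanes at night."""
    # Trough near 5 a.m., peak near 5 p.m. (illustrative phase).
    return amplitude * math.sin(2 * math.pi * (clock_hour - 11) / 24)

# Net sleepiness across a 42-hour sleepless stretch, waking at 8 a.m.:
for hours_awake in (0, 12, 24, 36, 42):
    clock = (8 + hours_awake) % 24
    sleepiness = sleep_pressure(hours_awake) - circadian_drive(clock)
    print(f"{hours_awake:2d} h awake (clock {clock:2.0f}:00): "
          f"sleepiness {sleepiness:+.2f}")
```

In this toy version, net sleepiness at 36 hours awake (the evening of day two) comes out lower than at 24 hours (early morning after a night without sleep), echoing the finding that performance rebounded somewhat during the second day even as sleep debt kept growing.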
Dijk and collaborators at the University of Liege in Belgium assessed the cognitive function of 33 young adults who went without sleep for 42 hours. Over the course of this sleepless period, the participants performed some simple tasks testing reaction time and memory. The sleepy subjects also underwent 12 brain scans during their ordeal and another scan after 12 hours of recovery sleep. Throughout the study, the researchers also measured participants’ levels of the sleep hormone melatonin, which served as a way to track the hands on their master circadian clocks.
Activity in some brain areas, such as the thalamus, a central hub that connects many other structures, waxed and waned in sync with the circadian clock. But in other areas, especially those in the brain’s outer layer, the effects of this master clock were overridden by the body’s drive to sleep. Brain activity diminished in these regions as sleep debt mounted, the scans showed.
Sleep deprivation also meddled with the participants’ performance on simple tasks, effects influenced both by the mounting sleep debt and the cycles of the master clock. Performance suffered in the night, but improved somewhat during the second day, even after no sleep. While the brain’s circadian clock signal is known to originate in a cluster of nerve cells known as the suprachiasmatic nucleus, it isn’t clear where the drive to sleep comes from, says Charles Czeisler, a sleep expert at Harvard Medical School. The need to sleep might grow as toxic metabolites build up after a day’s worth of brain activity, or be triggered when certain regions run out of fuel.
Sleep drive’s origin is just one of many questions raised by the research, says Czeisler, who says the study “opens up a new era in our understanding of sleep-wake neurobiology.” The approach of tracking activity with brain scans and melatonin measurements might reveal, for example, how a lack of sleep during the teenage years influences brain development.
Such an approach also might lead to the development of a test that reflects the strength of the body’s sleep drive, Czeisler says. That measurement might help clinicians spot chronic sleep deprivation, a health threat that can masquerade as attention-deficit/hyperactivity disorder in children.
Blue whirl \BLOO WERL\ n. A swirling flame that appears in fuel floating on the surface of water and glows blue.
An unfortunate mix of electricity and bourbon has led to a new discovery. After lightning hit a Jim Beam warehouse in 2003, a nearby lake was set ablaze when the distilled spirit spilled into the water and ignited. Spiraling tornadoes of fire leapt from the surface. In a laboratory experiment inspired by the conflagration, a team of researchers produced a new, efficiently burning fire tornado, which they named a blue whirl. To re-create the bourbon-fire conditions, the researchers, led by Elaine Oran of the University of Maryland in College Park, ignited liquid fuel floating on a bath of water. They surrounded the blaze with a cylindrical structure that funneled air into the flame to create a vortex with a height of about 60 centimeters. Eventually, the chaotic fire whirl calmed into a blue, cone-shaped flame just a few centimeters tall, the scientists report online August 4 in Proceedings of the National Academy of Sciences.
“Firenadoes” are known to appear in wildfires, when swirling winds and flames combine to form a hellacious, rotating inferno. They burn more efficiently than typical fires, as the whipping winds mix in extra oxygen, which feeds the fire. But the blue whirl is even more efficient; its azure glow indicates complete combustion, which releases little soot, or uncombusted carbon, to the air.
The soot-free blue whirls could be a way of burning off oil spills on water without adding much pollution to the air, the researchers say, if they can find a way to control them in the wild.
Brain scientists Eric Jonas and Konrad Kording had grown skeptical. They weren’t convinced that the sophisticated, big data experiments of neuroscience were actually accomplishing anything. So they devised a devilish experiment.
Instead of studying the brain of a person, or a mouse, or even a lowly worm, the two used advanced neuroscience methods to scrutinize the inner workings of another information processor — a computer chip. The unorthodox experimental subject, the MOS 6502, is the same chip that dazzled early tech junkies and kids alike in the 1980s by powering Donkey Kong, Space Invaders and Pitfall, as well as the Apple I and II computers. Of course, these experiments were rigged. The scientists already knew everything about how the 6502 works.
“The beauty of the microprocessor is that unlike anything in biology, we understand it on every level,” says Jonas, of the University of California, Berkeley. Using a simulation of MOS 6502, Jonas and Kording, of Northwestern University in Chicago, studied the behavior of electricity-moving transistors, along with aspects of the chip’s connections and its output, to reveal how it handles information. Since they already knew what the outcomes should be, they were actually testing the methods.
By the end of their experiments, Jonas and Kording had discovered almost nothing.
Their results — or lack thereof — hit a nerve among neuroscientists. When Jonas presented the work last year at a Kavli Foundation workshop held at MIT, the response from the crowd was split. “A bunch of people said, ‘That’s awesome. I had that idea 10 years ago and never got around to doing it,’ ” Jonas says. “And a bunch of people were like, ‘That’s bullshit. You’re taking the analogy way too far. You’re attacking a straw man.’ ”

On May 26, Jonas and Kording shared their results with a wider audience by posting a manuscript on the website bioRxiv.org. Bottom line of their report: Some of the best tools used by neuroscientists turned up plenty of data but failed to reveal anything meaningful about a relatively simple machine. The implications are profound — and discouraging. Current neuroscience methods might not be up for the job when it comes to truly understanding the brain.
The paper “does a great job of articulating something that most thoughtful people believe but haven’t said out loud,” says neuroscientist Anthony Zador of Cold Spring Harbor Laboratory in New York. “Their point is that it’s not clear that the current methods would ever allow us to understand how the brain computes in [a] fundamental way,” he says. “And I don’t necessarily disagree.”
Differences and similarities

Critics, however, contend that the analogy of the brain as a computer is flawed. Terrence Sejnowski of the Salk Institute for Biological Studies in La Jolla, Calif., for instance, calls the comparison “provocative, but misleading.” The brain and the microprocessor are distinct in a huge number of ways. The brain can behave differently in different situations, a variability that adds an element of randomness to its machinations; computers aim to serve up the same response to the same situation every time. And compared with a microprocessor, the brain has an incredible amount of redundancy, with multiple circuits able to step in and compensate when others malfunction.
In microprocessors, the software is distinct from the hardware — any number of programs can run on the same machine. “This is not the case in the brain, where the software is the hardware,” Sejnowski says. And this hardware changes from minute to minute. Unlike the microprocessor’s connections, brain circuits morph every time you learn something new. Synapses grow and connect nerve cells, storing new knowledge.
Brains and microprocessors have very different origins, Sejnowski points out. The human brain has been sculpted over millions of years of evolution to be incredibly specialized, able to spot an angry face at a glance, for instance, or remember a childhood song for years. The 6502, which debuted in 1975, was designed by a small team of humans, who engineered the chip to their exact specifications. The methods for understanding one shouldn’t be expected to work for the other, Sejnowski says.
Yet there are some undeniable similarities. Brains and microprocessors are both built from many small units: 86 billion neurons and 3,510 transistors, respectively. These units can be organized into specialized modules that allow both “organs” to flexibly move information around and hold memories. Those shared traits make the 6502 a legitimate and informative model organism, Jonas and Kording argue. In one experiment, they tested what would happen if they tried to break the 6502 bit by bit. Using a simulation to run their experiments, the researchers systematically knocked out every single transistor one at a time. They wanted to know which transistors were mission-critical to three important “behaviors”: Donkey Kong, Space Invaders and Pitfall. The effort was akin to what neuroscientists call “lesion studies,” which probe how the brain behaves when a certain area is damaged.
The experiment netted 1,565 transistors that could be eliminated without any consequences to the games. But other transistors proved essential. Losing any one of 1,560 transistors made it impossible for the microprocessor to load any of the games.
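The knockout procedure itself is easy to state: disable one transistor, test every behavior, restore it, repeat. A schematic sketch of that lesioning loop (the `Chip` class and its interface below are invented for illustration; the actual study ran a published transistor-level simulation of the MOS 6502):

```python
# Schematic "lesion study": knock out each transistor one at a time
# and record which behaviors break. The Chip class is a toy stand-in
# for the real transistor-level 6502 simulation used in the study.

class Chip:
    def __init__(self, n_transistors, essential):
        self.n = n_transistors
        self.essential = set(essential)   # transistors every game needs

    def runs_game(self, game, knocked_out):
        # Toy rule: a game fails if an essential transistor is lesioned.
        return knocked_out not in self.essential

def lesion_study(chip, games):
    """Knock out each transistor in turn; return per-game failure sets."""
    failures = {game: set() for game in games}
    for t in range(chip.n):
        for game in games:
            if not chip.runs_game(game, t):
                failures[game].add(t)
    return failures

chip = Chip(n_transistors=10, essential={2, 5, 7})
failures = lesion_study(chip, ["Donkey Kong", "Space Invaders", "Pitfall"])
print(failures["Donkey Kong"])   # transistors whose loss breaks the game
```

In the real experiment the same loop ran over all 3,510 transistors, and, as in the toy version, the games' failure sets overlapped heavily: losing any one of 1,560 transistors broke all three games at once.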
Big gap

Those results are hard to parse into something meaningful. This type of experiment, just like those in human and animal brains, is informative in some ways. But it doesn’t constitute understanding, Jonas argues. The gulf between knowing that a particular broken transistor can stymie a game and actually understanding how that transistor helps compute is “incredibly vast,” he says.
The transistor “lesion” experiment “gets at the core problem that we are struggling with in neuroscience,” Zador says. “Although we can attribute different brain functions to different brain areas, we don’t actually understand how the brain computes.”

Other experiments reported in the study turned up red herrings — results that looked similar to potentially useful brain data, but were ultimately meaningless. Jonas and Kording looked at the average activity of groups of nearby transistors to assess patterns about how the microprocessor works. Neuroscientists do something similar when they analyze electrical patterns of groups of neurons. In this task, the microprocessor delivered some good-looking data. Oscillations of activity rippled over the microprocessor in patterns that seemed similar to those of the brain. Unfortunately, those signals are irrelevant to how the computer chip actually operates.
Data from other experiments revealed a few finds, including that the microprocessor contains a clock signal and that it switches between reading and writing memory. Yet these are not key insights into how the chip actually handles information, Jonas and Kording write in their paper.
It’s not that analogous experiments on the brain are useless, Jonas says. But he hopes that these examples reveal how big of a challenge it will be to move from experimental results to a true understanding. “We really need to be honest about what we’re going to pull out here.” Jonas says the results should caution against collecting big datasets in the absence of theories that can help guide experiments and that can be verified or refuted. For the microprocessor, the researchers had a lot of data, yet still couldn’t separate the informative wheat from the distracting chaff. The results “suggest that we need to try and push a little bit more toward testable theories,” he says.
That’s not to say that big datasets are useless, he is quick to point out. Zador agrees. Some giant collections of neural information will probably turn out to be wastes of time. But “the right dataset will be useful,” he says. And the right bit of data might hold the key that propels neuroscientists forward.
Despite the pessimistic overtones in the paper, Christof Koch of the Allen Institute for Brain Science in Seattle is a fan. “You got to love it,” Koch says. At its heart, the experiment on the 6502 “sends a good message of humility,” he adds. “It will take a lot of hard work by a lot of very clever people for many years to understand the brain.” But he says that tenacity, especially in the face of such a formidable challenge, will eventually lead to clarity.
Zador recently opened a fortune cookie that read, “If the brain were so simple that we could understand it, we would be so simple that we couldn’t.” That quote, from IBM researcher Emerson Pugh, throws down the challenge, Zador says. “The alternative is that we will never understand it,” he says. “I just can’t believe that.”