Science relies on work of young research standouts

This issue marks the second year that Science News has reached out to science notables and asked: Which up-and-coming scientist is making a splash? Whose work impresses you? Tell us about early- to mid-career scientists who have the potential to change their fields and the direction of science more generally.

This year, we expanded the pool of people we asked. We reached out to Nobel laureates again and added recently elected members of the National Academy of Sciences. That allowed us to consider shining lights from a much broader array of fields, from oceanography and astronomy to cognitive psychology. Another difference this year: We spent time face-to-face with many of those selected, to get a better sense of them both as scientists and as people.
The result is the SN 10, a collection of stories not only about science, but also about making a life in science. They are stories of people succeeding because they have found what they love, be it working in the lab on new ways to probe molecular structures or staring up at the stars in search of glimmers of the early universe. In my interviews with chemist Phil Baran, I was struck by his drive to do things in new ways, whether devising chemical reactions or developing ideas about how to fund research. (If you can, he says, go private.) Laura Sanders, who met with neuroscientist Jeremy Freeman, was intrigued by his way of seeing a problem (siloed data that can’t be easily shared or analyzed) and figuring out solutions, even if those solutions were outside his area of expertise.

Of course, there are many ways to identify noteworthy scientists — and there’s plenty more fodder out there for future years. Our approach was to seek standouts, asking who deserved recognition for the skill of their methods, the insights of their thinking, the impacts of their research. Not all of the SN 10’s work has made headlines, but they all share something more important: They are participants in building the science of the future.

Notably, many of them do basic research. I think that’s because it’s the type of work that other scientists notice, even if it’s not always on the radar of the general public. But that’s where fundamental advances are often made, as scientists explore the unknown.

That edge of what’s known is where Science News likes to explore, too. Take the bet-ending, head-scratching results from the Large Hadron Collider, which has failed to reveal the particles that the equations of supersymmetry predict. As Emily Conover reports in “Supersymmetry’s absence at LHC puzzles physicists,” that means the theory is either more complicated than originally thought or simply not true, letting down those who looked to supersymmetry to help explain a few enduring mysteries, from the nature of dark matter to the mass of the Higgs boson.

Other mysteries may be closer to a solution, as Sanders reports in “New Alzheimer’s drug shows promise in small trial.” A new potential treatment for Alzheimer’s disease reduced amyloid-beta plaques in patients. It also showed hints of improving cognition. That’s standout news, a result built on decades of basic research by many, many bright young scientists.

Wi-Fi can help a house distinguish between household members

In smart homes of the future, computers may identify inhabitants and cater to their needs using a tool already at hand: Wi-Fi. Human bodies partially block the radio waves that carry the wireless signal between router and computer. Differences in shape, size and even gait among household members yield different patterns in the received Wi-Fi signals. A computer can analyze the signals to distinguish dad from mom, according to a report posted online August 11 at arXiv.org.

Scientists built an algorithm that was nearly 95 percent accurate when attempting to discern two adults walking between a wireless router and a computer. For six people, accuracy fell to about 89 percent. Scientists tested the setup on men and women of various sizes, but it should work with children as well, says study coauthor Bin Guo of Northwestern Polytechnical University in Xi’an, China.
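The idea of matching a received signal trace to stored per-person profiles can be sketched very roughly. Everything below is invented for illustration; the arXiv paper uses far richer channel-state features and a trained classifier, and the feature function, profile values and trace here are all made-up assumptions:

```python
# Illustrative sketch only, not the authors' pipeline.

def gait_features(signal):
    """Crude summary features from a received Wi-Fi amplitude trace."""
    mean = sum(signal) / len(signal)
    var = sum((x - mean) ** 2 for x in signal) / len(signal)
    return (mean, var)

def classify(signal, profiles):
    """Match a trace to the household member with the closest profile."""
    f = gait_features(signal)
    return min(profiles, key=lambda name:
               sum((a - b) ** 2 for a, b in zip(f, profiles[name])))

# Hypothetical profiles learned from earlier walks past the router
profiles = {"dad": (0.8, 0.30), "mom": (0.5, 0.10)}
who = classify([0.75, 0.9, 0.7, 0.85], profiles)
```

The nearest-profile step stands in for whatever learned model the real system uses; the point is only that body shape and gait leave a repeatable signature in the signal.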

In a home rigged with Wi-Fi and a receiver, the system could eventually identify family members and tailor heating and lighting to their preferences — maybe even cue up a favorite playlist.

Rarest nucleus reluctant to decay

Nature’s rarest type of atomic nucleus is not giving up its secrets easily.

Scientists looking for the decay of an unusual form of the element tantalum, known as tantalum-180m, have come up empty-handed. Tantalum-180m’s hesitance to decay indicates that it has a half-life of at least 45 million billion years, Bjoern Lehnert and colleagues report online September 13 at arXiv.org. “The half-life is longer than a million times the age of the universe,” says Lehnert, a nuclear physicist at Carleton University in Ottawa. (Scientists estimate the universe’s age at 13.8 billion years.)
Making up less than two ten-thousandths of a percent of the mass of the Earth’s crust, the metal tantalum is uncommon. And tantalum-180m is even harder to find. Only 0.01 percent of tantalum is found in this state, making it the rarest known long-lived nuclide, or variety of atom.

Tantalum-180m is a bit of an oddball. It is what’s known as an isomer — its nucleus exists in an “excited,” or high-energy, configuration. Normally, an excited nucleus would quickly drop to a lower energy state, emitting a photon — a particle of light — in the process. But tantalum-180m is “metastable” (hence the “m” in its name), meaning that it gets stuck in its high-energy state.
Tantalum-180m is thought to decay by emitting or capturing an electron, morphing into another element — either tungsten or hafnium — in the process. But this decay has never been observed. Other unusual nuclides, such as those that decay by emitting two electrons simultaneously, can have even longer half-lives than tantalum-180m. But tantalum-180m is unique — it is the longest-lived isomer found in nature.
“It’s a very interesting nucleus,” says nuclear physicist Eric Norman of the University of California, Berkeley, who was not involved with the study. Scientists don’t have a good understanding of such unusual decays, and a measurement of the half-life would help scientists pin down the details of the process and the nucleus’ structure.
Lehnert and colleagues observed a sample of tantalum with a detector designed to catch photons emitted in the decay process. After running the experiment for 176 days, and adding in data from previous incarnations of the experiment, the team saw no evidence of decay. The half-life couldn’t be shorter than 45 million billion years, the scientists determined, or they would have seen some hint of the process. “They did a state-of-the-art measurement,” says Norman. “It’s a very difficult thing to see.”

The presence of tantalum-180m in nature is itself a bit of a mystery, too. The element-forging processes that occur in stars and supernovas seem to bypass the nuclide. “People don’t really understand how it is created at all,” says Lehnert.

Tantalum-180m is interesting as a potential energy source, says Norman, although “it’s kind of a crazy idea.” If scientists could find a way to tap the energy stored in the excited nucleus by causing it to decay, it might be useful for applications like nuclear lasers, he says.

Molecules for making nanomachines snare chemistry Nobel

Motors too small to see with the eye may soon have the power to drive innovations in chemistry, biology and computing. Three creators of such nanoscopic machines were honored October 5 with the Nobel Prize in chemistry.

Sharing the prize of 8 million Swedish kronor (about $930,000) equally are Jean-Pierre Sauvage, J. Fraser Stoddart and Bernard Feringa. “If you had to choose three people at the top of the field, that’s it. These are the men,” says James Tour, a nanotechnologist at Rice University in Houston.
Recognition of the burgeoning field of molecular motors will draw more money and inspire children to become scientists, says Donna Nelson, an organic chemist at the University of Oklahoma in Norman and the president of the American Chemical Society. “It will benefit not only these three chemists, it will benefit the entire field of chemistry.”
Chemists and physicists have envisioned molecular machines since at least the 1960s, but were never able to reliably produce complex structures. Then in 1983, Sauvage, of the University of Strasbourg in France, devised a method for making interlocking molecular rings, or catenanes. Sauvage’s molecular chain set the stage for the rest of the field (SN: 9/8/90, p. 149).

Stoddart, of Northwestern University in Evanston, Ill., improved the efficiency so that he could produce large quantities of molecular machines, starting in 1991 with rings clipped around a central axle. That structure is known as a rotaxane. He and colleagues learned to control the slide of the rings along the axle, making a simple molecular switch. Such switches could be used to create molecular computers or drug delivery systems. Stoddart showed in 2000 that it was possible to make molecular “muscles” using interlocking rings and axles. Stoddart and colleagues have since devised molecular elevators and pumps based on the same molecules.
Feringa, of the University of Groningen in the Netherlands, ramped things up another notch in 1999 by building the first molecular motor. Things move so differently at the molecular scale that many researchers weren’t sure anyone could precisely control the motion of molecular motors, says R. Dean Astumian of the University of Maine in Orono. Feringa’s innovation was to devise asymmetric molecules that would spin in one direction when hit with a pulse of light.

Up to 50,000 of the motors could span the width of a human hair, says Tour. Alone, one of the spinning motors doesn’t pack much punch (SN: 2/7/04, p. 94), but harnessed together in large numbers the little motors can do big work, he says. Groups of the whirring motors powered by light can rotate a glass rod thousands of times their size and do other work on a macroscopic scale. Feringa also harnessed his motors into a four-wheel-drive “nanocar” (SN: 12/17/11, p. 8).

The process of making molecular machines has improved drastically over recent decades, thanks in large part to the work of the three newly christened laureates, says Rigoberto Advincula, a chemist at Case Western Reserve University in Cleveland. Scientists have a better understanding of how to construct molecules that more reliably bend, loop and connect to form shapes. “You don’t have tweezers to put them together,” he says. “You template the reaction so that the thread goes through the ring. That then makes it easier for the two thread ends to meet each other.” New techniques have also allowed the production of more intricate shapes. Further development will bring these processes to even bigger scales, allowing for the design of molecular machines for everything from energy harvesting to building protein complexes, Advincula says.
Such applications are still on the horizon and no one really knows what sorts of machines chemists can make from molecules yet. When people question Feringa about what his molecular motors can be used for, he “feels a bit like the Wright brothers” when people asked them after their first flight why they needed a flying machine, he said during a telephone call during the announcement of the prize. There are “endless opportunities,” including nanomachines that can seek and destroy tumor cells or deliver drugs to just the cells that need them, Feringa speculated.

Stoddart, who was born in Edinburgh and moved to the United States in 1997, applauded the Nobel committee for recognizing “a piece of chemistry that is extremely fundamental in its making and being.” Sauvage, in particular, created a new type of molecular bond in order to forge his chain, Stoddart said during a news conference. “New chemical compounds are probably several thousand a day worldwide,” he said. “New chemical reactions, well, maybe a dozen or two a month. Maybe I go over the top there. But new bonds, they are few and far between. They are really the blue moons. So I think that’s what’s being recognized, more than anything.”

Cosmic census of galaxies updated to 2 trillion

Two trillion galaxies. That’s the latest estimate for the number of galaxies that live — or have lived — in the observable universe, researchers report online October 10 at arXiv.org. This updated headcount is roughly 10 times greater than previous estimates and suggests that there are a lot more galaxies out there for future telescopes to explore.

Hordes of relatively tiny galaxies, weighing as little as 1 million suns, are responsible for most of this tweak to the cosmic census. Astronomers haven’t directly seen these galaxies yet. Christopher Conselice, an astrophysicist at the University of Nottingham in England, and colleagues combined data from many ground- and space-based telescopes to look at how the number of galaxies in a typical volume of the universe has changed over much of cosmic history. They then calculated how many galaxies have come and gone in the universe.

The galactic population has dwindled over time, as most of those 2 trillion galaxies collided and merged to build larger galaxies such as the Milky Way, the researchers suggest. That’s in line with prevailing ideas about how massive galaxies have been assembled. Seeing many of these remote runts, however, is beyond the ability of even the next generation of telescopes. “We will have to wait at least several decades before even the majority of galaxies have basic imaging,” the researchers write.

Staph infections still a concern

New hope for control of staph infections

Staphylococcal infections — especially rampant in hospitals and responsible for … some fatal disorders — may be virtually stamped out. Researchers … have extracted teichoic acid from the bacteria’s cell wall and used it to protect groups of mice from subsequent massive doses of virulent staph organisms. — Science News, October 29, 1966

UPDATE
Staphylococcus aureus has not been conquered. As antibiotic resistance grows, the pressure is on to find ways to stop the deadly microbe. A vaccine that targets S. aureus’ various routes of infection is being tested in patients having back surgery. Ideally, doctors would use the vaccine to protect hospital patients and people with weakened immune systems. The vaccine is the furthest along of several in development. Meanwhile, a natural antibiotic recently found in human noses may lead to drugs that target antibiotic-resistant staph (SN: 8/20/16, p. 7).

Virus triggers immune proteins to aid enemy

Crucial immune system proteins that make it harder for viruses to replicate might also help the attackers avoid detection, three new studies suggest. When faced with certain viruses, the proteins can set off a cascade of cell-to-cell messages that destroy antibody-producing immune cells. With those virus-fighting cells depleted, it’s easier for the invader to persist inside the host’s body.

The finding begins to explain a longstanding conundrum: how certain chronic viral infections can dodge the immune system’s antibody response, says David Brooks, an immunologist at the University of Toronto not involved in the research. The new studies, all published October 21 in Science Immunology, pin the blame on the same set of proteins: type 1 interferons.
Normally, type 1 interferons protect the body from viral siege. They snap into action when a virus infects cells, helping to activate other parts of the immune system. And they make cells less hospitable to viruses so that the foreign invaders can’t replicate as easily.

But in three separate studies, scientists tracked the immune responses of mice infected with lymphocytic choriomeningitis virus, or LCMV. In each case, type 1 interferon proteins masterminded the loss of B cells, which produce antibodies specific to the virus being fought. Normally, those antibodies latch on to the target virus, flagging it for destruction by other immune cells called T cells. With fewer B cells, the virus can evade capture for longer.

The proteins’ response “is driving the immune system to do something bad to itself,” says Dorian McGavern, an immunologist at the National Institute of Neurological Disorders and Stroke in Bethesda, Md., who led one of the studies.

The interferon proteins didn’t directly destroy the B cells; they worked through middlemen instead. These intermediaries differed depending on factors including the site of infection and how much of the virus the mice received.
T cells were one intermediary. McGavern and his colleagues filmed T cells actively destroying their B cell compatriots under the direction of the interferon proteins. When the scientists deleted those T cells, the B cells didn’t die off even though the interferons were still hanging around.
Another study found that the interferons were sending messages not just through T cells, but via a cadre of other immune cells, too. Those messages told B cells to morph into cells that rapidly produce antibodies for the virus. But those cells die off within a few days instead of mounting a longer-term defense.

That strategy could be helpful for a short-term infection, but less successful against a chronic one, says Daniel Pinschewer, a virologist at the University of Basel in Switzerland who led that study. Throwing the entire defense arsenal at the virus all at once leaves the immune system shorthanded later on.

But interferon activity could prolong even short-term viral infections, a third study showed. There, scientists injected lower doses of LCMV into mice’s footpads and used high-powered microscopes to watch the infection play out in the lymph nodes. In this case, the interferon stifled B cells by working through inflammatory monocytes, white blood cells that rush to infection sites.

“The net effect is beneficial for the virus,” says Matteo Iannacone, an immunologist at San Raffaele Scientific Institute in Milan who led the third study. Sticking around even a few days longer gives the virus more time to spread to new hosts.

Since all three studies looked at the same virus, it’s not yet clear whether the mechanism extends to other viral infections. That’s a target for future research, Iannacone says. But Brooks thinks it’s likely that other viruses that dampen antibody response (like HIV and hepatitis C) could also be exploiting type 1 interferons.

For robots, artificial intelligence gets physical

In a high-ceilinged laboratory at Children’s National Health System in Washington, D.C., a gleaming white robot stitches up pig intestines.

The thin pink tissue dangles like a deflated balloon from a sturdy plastic loop. Two bulky cameras watch from above as the bot weaves green thread in and out, slowly sewing together two sections. Like an experienced human surgeon, the robot places each suture deftly, precisely — and with intelligence.

Or something close to it.
For robots, artificial intelligence means more than just “brains.” Sure, computers can learn how to recognize faces or beat humans in strategy games. But the body matters too. In humans, eyes and ears and skin pick up cues from the environment, like the glow of a campfire or the patter of falling raindrops. People use these cues to take action: to dodge a wayward spark or huddle close under an umbrella.

Part of intelligence is “walking around and picking things up and opening doors and stuff,” says Cornell computer scientist Bart Selman. It “has to do with our perception and our physical being.” For machines to function fully on their own, without humans calling the shots, getting physical is essential. Today’s robots aren’t there yet — not even close — but amping up the senses could change that.

“If we’re going to have robots in the world, in our home, interacting with us and exploring the environment, they absolutely have to have sensing,” says Stanford roboticist Mark Cutkosky. He and a group of like-minded scientists are making sensors for robotic feet and fingers and skin — and are even helping robots learn how to use their bodies, like babies first grasping how to squeeze a parent’s finger.

The goal is to build robots that can make decisions based on what they’re sensing around them — robots that can gauge the force needed to push open a door or figure out how to step carefully on a slick sidewalk. Eventually, such robots could work like humans, perhaps even caring for the elderly.
Such machines of the future are a far cry from that shiny white surgery robot in the D.C. lab, essentially an arm atop a cart. But today’s fledgling sensing robots mark the slow awakening of machines to the world around them, and themselves.

“By adding just a little bit of awareness to the machine,” says pediatric surgeon Peter Kim of the children’s hospital, “there’s a huge amount of benefit to gain.”

Born to run
The pint-size machine running around Stanford’s campus doesn’t look especially self-aware.
It’s a rugged sort of robot, with stacked circuit boards and bundles of colorful wires loaded on its back. It scampers over grass, gravel, asphalt — any surface roboticist Alice Wu can find.

For weeks this summer, Wu took the traveling bot outside, placed it on the ground, and then, “I let her run,” she says. The bot isn’t that fast (its top speed is about half a meter per second), and it doesn’t go far, but Wu is trying to give it something special: a sense of touch. Wu calls the bot SAIL-R, for Sensorized Adaptive Intelligence Legged Robot.

Fixed to each of its six C-shaped legs are tactile sensors that can tell how hard the robot hits the ground. Most robots don’t have tactile sensing on their feet, Wu says. “When I first got into this, I thought that was crazy. So much effort is focused on hands and arms.” But feet make contact with the world too.

Feeling the ground, in fact, is crucial for walking. Most people tailor their gait to different surfaces without even thinking, feet pounding the ground on a run over grass, or slowing down on a street glazed with ice. Wu wants to make robots that, like humans, sense the surface they’re on and adjust their walk accordingly.

Walking robots have already ventured out into the world: Last year, a competition sponsored by DARPA, the Department of Defense agency that funds advanced research, showcased a lineup of semiautonomous robots that walked over rubble and even climbed stairs (SN: 12/13/14, p. 16). But they didn’t do it on their own; hidden away in control rooms, human operators pulled the strings.

One day, Wu says, machines could feel the ground and learn for themselves the most efficient way to walk. But that’s a tall order. For one, researchers can’t simply glue the delicate sensors designed for a robot’s hands onto its feet. “The feet are literally whacking the sensor against the ground very, very hard,” Wu says. “It’s unforgiving contact.”

That’s the challenge with tactile sensing in general, says Cutkosky, Wu’s adviser at Stanford. Scientists have to build sensors that are tough, that can survive impact and abrasion and bending and water. It’s one reason physical intelligence has advanced so slowly, he says.

“You can’t just feed a supercomputer thousands of training examples,” Cutkosky says, the way AlphaGo learned how to play Go (SN Online: 3/15/16). “You actually have to build things that interact with the world.”
Cutkosky would know. His lab is famous for building such machines: tiny “microTugs” that can team up, antlike, to pull a car, and a gecko-inspired “Stickybot” that climbs walls. Tactile sensing could make these and other robots smarter.

Wu and colleagues presented a new sensor at IROS 2015, a meeting on intelligent robots and systems in Hamburg, Germany. The sensor, a sandwich of rubber and circuit boards, can measure adhesion forces — what a climbing robot uses to stick to walls. Theoretically, such a device could tell a bot if its feet were slipping so it could adjust its grip to hang on. And because the postage stamp–sized sensor is tough, it might actually survive life on little robot feet.

Wu has used a similar sort of sensor on an indoor, two-legged bot, the predecessor to the six-legged SAIL-R. The indoor bot can successfully distinguish between hard, slippery, grassy and sandy surfaces more than 90 percent of the time, Wu reported in IEEE Robotics and Automation Letters in July.

That could be enough to keep a bot from falling. On a patch of ice, for example, “it would say, ‘Uh-oh, this feels kind of slippery. I need to slow down to a walk,’ ” Wu says.

Ideally, Cutkosky says, robots should be covered with tactile sensors — just like human skin. But scientists are still figuring out how a machine would deal with the resulting deluge of information.

Smart skin
Even someone sitting (nearly) motionless at a desk in a quiet, temperature-controlled office is bombarded with information from the senses.

Fluorescent lights flutter, air conditioning units hum and the tactile signals are too numerous to count. Fingertips touch computer keys, feet press the floor, forearms rest on the desk. If people couldn’t tune out some of the “noise” picked up by their skin, it would be total sensory overload.

“You have millions of tactile sensors, but you don’t sit there and say, ‘OK, what’s going on with my millions of tactile sensors,’ ” says Nikolaus Correll, a roboticist at the University of Colorado Boulder. Rather, the brain gets a filtered message, more of a big-picture view.

That simplified strategy may be a winner for robotic skin, too. Instead of sending every last bit of sensing data to a centralized robotic brain, the skin should do some of the computing itself, says Correll, who made the case for such “smart” materials in Science in 2015.

“When something interesting happens, [the skin] could report to the brain,” Correll says. Like human skin, artificial skin could take all the vibration info received from a nudge, or a tap to the shoulder, and translate it into a simpler message for the brain: “The skin could say, ‘I was tapped or rubbed or patted at this position,’ ” he says. That way, the robot’s brain doesn’t have to constantly process a flood of vibration data from the skin’s sensors.
It’s called distributed information processing. Correll and Colorado colleague Dana Hughes tested the idea with a stretchy square of rubbery skin mounted on the back of an industrial robot named Baxter. Throughout the skin, they placed 10 vibration sensors paired with 10 tiny computers. Then the team trained the computers to recognize different textures by rubbing patches of cotton, cardboard, sandpaper and other materials on the skin.

Their sensor/computer duo was able to distinguish between 15 textures about 70 percent of the time, Hughes and Correll reported in Bioinspiration & Biomimetics in 2015. And that’s with no centralized “brain” at all. That kind of touch discrimination brings the robotic skin a step closer to human skin. Making robotic parts with such sensing abilities “will make it much easier to build a dexterous, capable robot,” Correll says.
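The division of labor Correll describes, with each patch summarizing locally and reporting only interesting events, can be sketched loosely. The function, threshold and labels below are illustrative assumptions, not details from the Bioinspiration & Biomimetics paper:

```python
# Sketch of distributed information processing in artificial skin:
# each patch runs its own tiny classifier and sends the "brain" a small
# summary event instead of raw vibration data. All numbers are invented.

def patch_event(samples, threshold=0.5):
    """Runs locally on one patch's microcomputer: summarize a vibration burst."""
    energy = sum(x * x for x in samples) / len(samples)
    if energy < threshold:
        return None                       # nothing interesting: stay quiet
    label = "rub" if max(samples) - min(samples) > 1.5 else "tap"
    return {"label": label, "energy": round(energy, 2)}

# The central computer sees at most one short message per patch per burst
quiet = patch_event([0.1, -0.1, 0.05, 0.0])   # below threshold: no report
tap = patch_event([1.0, 1.1, 0.9, 1.0])       # reported as a "tap" event
```

The payoff is bandwidth: ten patches each sending an occasional labeled event is far less traffic than ten raw vibration streams, which is the argument for putting computation in the skin itself.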

And with smart skin, robots could invest more brainpower in the big stuff, what humans begin learning at birth — how to use their own bodies.

Zip it
In UCLA’s Biomechatronics Lab, a green-fingered robot just figured out how to use its body for one seemingly simple task: closing a plastic bag.

Two deformable finger pads pinch the blue seal with steady pressure (the enclosed Cheerios barely tremble) as the robot slides its hand slowly along the plastic zipper. After about two minutes, the fingers reach the end, closing the bag. It’s deceptively difficult. The bag’s shape changes as it’s manipulated — tough for robotic fingers to grasp. It’s also transparent — not easily detectable by computer vision.
You can’t just tell the robot to move its fingertips horizontally along the zipper, says Veronica Santos, a roboticist at UCLA. She and colleague Randall Hellman, a mechanical engineer, tried that. It’s too hard to predict how the bag will bend and flex. “It’s a constant moving target,” Santos says.

So the researchers let the robot learn how to close the bag itself.

First they had the bot randomly move its fingers along the zipper, while collecting data from sensors in the fingertips — how the skin deforms, what vibrations it picks up, how fluid pressure in the fingertips changes. Santos and Hellman also taught the robot where the zipper was in relation to the finger pads. The sweet spot is smack dab in the middle, Santos says.

Then the team used a type of algorithm called reinforcement learning to teach the robot how to close the bag. “This is the exciting part,” Santos says. The program gives the robot “points” for keeping the zipper in the fingers’ sweet spot while moving along the bag.

“If good stuff happens, it gets rewarded,” Santos says. When the bot holds the zipper near the center of the finger pads, she explains, “it says, ‘Hey, I get points for that, so those are good things to do.’ ”
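The points-for-the-sweet-spot idea can be written down as a toy reinforcement learning loop. This is a made-up one-dimensional illustration of reward shaping, not the UCLA system; the states, actions, rewards and learning rate are all invented:

```python
import random

# Toy Q-learning sketch: the agent earns points when the zipper sits at the
# center of the finger pads (offset 0) and loses points as it drifts away.
random.seed(0)
OFFSETS = range(-2, 3)              # zipper position relative to pad center
ACTIONS = (-1, 0, 1)                # nudge fingers down, hold, nudge up
q = {(s, a): 0.0 for s in OFFSETS for a in ACTIONS}

def reward(offset):
    return 1.0 if offset == 0 else -abs(offset)  # points for the sweet spot

for _ in range(2000):
    s = random.choice(list(OFFSETS))             # random exploration
    a = random.choice(ACTIONS)
    s2 = max(-2, min(2, s + a))                  # zipper can't leave the pad
    best_next = max(q[(s2, b)] for b in ACTIONS)
    q[(s, a)] += 0.1 * (reward(s2) + 0.9 * best_next - q[(s, a)])

# After training, the greedy action from an off-center state moves the
# zipper back toward the center of the pads.
best = max(ACTIONS, key=lambda a: q[(2, a)])
```

The real task is far harder (the state includes skin deformation, vibration and fingertip fluid pressure, and the bag itself keeps changing shape), but the reward structure is the same: good stuff earns points, and the policy follows the points.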

She and Hellman reported successful bag closing in April at the IEEE Haptics Symposium in Philadelphia. “The robot actually learned!” Santos says. And in a way that would have been hard to program.

It’s like teaching someone how to swing a tennis racket, she says. “I can tell you what you’re supposed to do, and I can tell you what it might feel like.” But to smash a ball across a net, “you’re going to have to do it and feel it yourself.”

Learning by doing may be the way to get robots to tackle all sorts of complicated tasks, or simple tasks in complicated situations. The crux is embodiment, Santos says, or the robot’s awareness that each of its actions brings an ever-shifting kaleidoscope of sensations.

Smooth operator
Awareness of the sights of surgery, and what to make of them, is instrumental for a human or machine trying to stitch up soft tissue.

Skin, muscle and organs are difficult to work with, says Kim, the surgeon at Children’s National Health System. “You’re trying to operate on shiny, glistening, blood-covered tissues,” he says. “They’re different shades of pink and they’re moving around all the time.”

Surgeons adjust their actions in response to what they see: a twisting bit of tissue, for example, or a spurt of fluid. Machines typically can’t gauge their location amid slippery organs or act fast when soft tissues tear. Robots needed an easier place to start. So, in 1992, surgery bots began working on bones: rigid material that tends to stay in one place.

In 2000, the U.S. Food and Drug Administration approved the first surgery robot for soft tissue: the da Vinci Surgical System, which looks like a prehistoric version of Kim’s surgery machine. Da Vinci is about as wide as a king-sized mattress and reaches 6 feet tall in places, with three mechanical arms tipped with disposable tools. Nearby, a bulky gray cart holds two silver hand controls for human surgeons.

In the cart’s backless seat, a surgeon would lean forward into a partially enclosed pod, hands gripping controls, feet working pipe organ–like pedals. To move da Vinci’s surgical tools, the surgeon would manipulate the controls, like those claw cranes kids use to pick up stuffed animals at arcades. “It’s what we call master/slave,” Kim says. “Essentially, the robot does exactly what the surgeon does.”

Da Vinci can manipulate tiny tools and keep incisions small, but it’s basically a power tool. “It has no awareness,” Kim says, “no intelligence.” The visual inputs of surgery are processed by human brains, not a computer.
Kim’s robot is a more enlightened beast. Named STAR, for Smart Tissue Autonomous Robot, the bot has preprogrammed surgical knowledge and hefty cameras that let it see and react to the environment. Recently, STAR stitched up soft tissue in a living animal — a first for a machine. The bot even outperformed human surgeons on some measures, Kim and colleagues reported in May in Science Translational Medicine.

Severed pig intestines sewn up in the lab by STAR tended to leak less than intestines repaired by humans using da Vinci, using laparoscopic tools or sewing by hand. When researchers held the intestines under water and inflated them with air, it took nearly double the pressure for the STAR-repaired tissue to spring a leak compared with intestines patched up by humans.

Kim credits STAR’s even stitches for the win. “It’s more consistent,” he says. “That’s the secret sauce.”

To keep track of its position on tissue, STAR uses near-infrared fluorescent imaging (like night vision goggles) to follow glowing dots marked by a person. To orient itself in space, STAR uses a 3-D camera with multiple lenses.

Then the robot taps into its surgical knowledge to figure out where to place a stitch. In the experiment reported in May, humans were still in the loop: STAR would await an OK before firing a stitch in a tricky spot, and an assistant helped keep the thread from tangling (a task commonly required in human-led surgeries too). Soon, STAR may be more self-sufficient. In late November, Kim plans to test a version of his machine with two robotic arms to replace the human assistant. He would also like to give STAR a few more superhuman senses, such as gauging blood flow and detecting subsurface structures, the way a submarine pings an underwater shipwreck.

One day, Kim says, such technology could essentially put a world-class surgeon in every hospital, “available anyplace, anytime.”

Santos sees a future, 10 to 20 years from now perhaps, where humans and robots collaborate seamlessly — more like coworkers than master and slave. Robots will need all of their senses to take part, she says. They might not be the artificially intelligent androids of the movies, like Ex Machina’s cunning humanoid Ava. But like humans, intelligent, autonomous machines will have to learn the limits and capabilities of their bodies. They’ll have to learn how to move through the world on their own.

Protein mobs kill cells that most need those proteins to survive

Joining a gang doesn’t necessarily make a protein a killer, a new study suggests. Ganging up gets dangerous only under certain circumstances.

A normally innocuous protein can be engineered to clump into fibers similar to those formed by proteins involved in Alzheimer’s, Parkinson’s and brain-wasting prion diseases such as Creutzfeldt-Jakob disease, researchers report in the Nov. 11 Science. Cells that rely on the protein’s normal function for survival die when the proteins glom together. But cells that don’t need the protein are unharmed by the gang activity, the researchers discovered. The finding may shed light on why clumping proteins that lead to degenerative brain diseases kill some cells, but leave others untouched.

Clumpy proteins known as prions or amyloids have been implicated in many nerve-cell-killing diseases (SN: 8/16/08, p. 20). Such proteins are twisted forms of normal proteins that can make other normal copies of the protein go rogue, too. The contorted proteins band together, killing brain cells and forming large clusters or plaques.

Scientists don’t fully understand why these mobs resort to violence or how they kill cells. Part of the difficulty in reconstructing the cells’ murder is that researchers aren’t sure what jobs, if any, many of the proteins normally perform (SN: 2/13/10, p. 17).

A team led by biophysicists Frederic Rousseau and Joost Schymkowitz of the Catholic University of Leuven in Belgium came up with a new way to dissect the problem. They started with a protein whose function they already knew and engineered it to clump. That protein, vascular endothelial growth factor receptor 2, or VEGFR2, is involved in blood vessel growth. Rousseau and colleagues clipped off a portion of the protein that causes it to cluster with other proteins, creating an artificial amyloid.

Masses of the protein fragment, nicknamed vascin, could aggregate with and block the normal activity of VEGFR2, the researchers found. When the researchers added vascin to human umbilical vein cells grown in a lab dish, the cells died because VEGFR2 could no longer transmit hormone signals the cells need to survive. But human embryonic kidney cells and human bone cancer cells remained healthy. Those results suggest that some forms of clumpy proteins may not be generically toxic to cells, says biophysicist Priyanka Narayan of the Whitehead Institute for Biomedical Research in Cambridge, Mass. Instead, rogue clumpy proteins may target specific proteins and kill only cells that rely on those proteins for survival.

Those findings may also indicate that prion and amyloid proteins, such as Alzheimer’s nerve-killing amyloid-beta, normally play important roles in some brain cells. Those cells would be the ones vulnerable to attack from the clumpy proteins.

The newly engineered ready-to-rumble protein may open new ways to inactivate specific proteins in order to fight cancer and other diseases, says Salvador Ventura, a biophysicist at the Autonomous University of Barcelona. For instance, synthetic amyloids of overactive cancer proteins could gang up and shut down the problem protein, killing the tumor.

Artificial amyloids might also be used to screen potential drugs for anticlumping activity that could be used to combat brain-degenerating diseases, Rousseau suggests.

Dinosaurs may have used color as camouflage

The stories of dinosaurs’ lives may be written in fossilized pigments, but scientists are still wrangling over how to read them.

In September, paleontologists deduced a dinosaur’s habitat from remnants of melanosomes, pigment structures in the skin. Psittacosaurus, a speckled dinosaur about the size of a golden retriever, had a camouflaging pattern that may have helped it hide in forests, Jakob Vinther and colleagues say.

The dinosaur “was very much on the bottom of the food chain,” says Vinther, of the University of Bristol in England. “It needed to be inconspicuous.”

Identifying ancient pigments can open up a wide new world of dinosaur biology and answer all sorts of lifestyle questions, says zoologist Hannah Rowland of the University of Cambridge. “You might be able to take a fossil … and infer a dinosaur’s life history just from its pigment patterns,” she says. “That’s the most exciting thing.”

Not so fast, says paleontologist Mary Schweitzer of North Carolina State University in Raleigh. Evidence for ancient pigments can be ambiguous. In some cases, microscopic structures that appear to be melanosomes may actually be microbes, she says. “Both hypotheses remain viable until one is shot down with data.” Until then, she says, inferring dinosaur lifestyles from alleged ancient pigments is impossible.

Vinther’s work, published in the Sept. 26 Current Biology, is the latest in a long-simmering debate in the field of paleo color, the study of fossil pigments and what they can reveal about ancient animals. Disputes over his team’s findings and what’s needed to clearly identify fossilized melanosomes point to current pitfalls of the field.

But the promise is clear: Paleo color could paint a vivid picture of a dinosaur’s life, offering clues about behavior, habitat and evolution.

“This is a crucial new piece in the puzzle of how the past looked,” Vinther says.

Color me dino
Psittacosaurus (model shown) was a parrot-beaked herbivore about the size of a large dog. Researchers found signs of pigmentation (black specks) on its tail region, back leg and elsewhere that hint at its habitat.

A field emerges
Scientists have been puzzling over animals of the past for centuries, but eight years ago, paleontology got a wake-up call. That’s when Vinther and colleagues proposed that microscopic structures in a roughly 125-million-year-old fossil feather were actually a type of melanosome (SN: 8/2/08, p. 10). These pigment pouches rest inside pigment cells and, in this particular fossil feather, might have delivered a blackish hue, like a blackbird’s.

Scientists had noticed similar structures inside fossilized skin and feathers since the early 1980s. But people assumed that these structures were remnants of bacteria — perhaps decomposers that feasted on the dead animals, says paleontologist Martin Sander of the University of Bonn in Germany.

The new, colorful interpretation sparked a flurry of research, and scientists have since spotted what appear to be melanosomes in all kinds of fossilized animals. Paleontology, in fact, is now awash in colors and patterns. Pigment pods may have painted reddish-brown speckles on the face of a Late Jurassic theropod, brushed chestnut stripes on a long-tailed dino from China and made the plumage of a four-winged dinosaur called Microraptor iridescent. That shimmery dinosaur “probably had a weak, glossy iridescence all over its body,” says evolutionary biologist Matthew Shawkey of Ghent University in Belgium. His team deduced Microraptor’s color from the shape of its melanosomes.

Modern melanosomes generally carry a mixture of two melanin pigments: dark brown-black eumelanin and red-yellow pheomelanin. Scientists have linked color in mammals and birds to melanosome shape — a meatball shape for reddish brown hues, for example, and a sausage shape for darker colors.

In iridescent feathers, melanosomes tend to be even thinner, Shawkey says. Microraptor’s melanosomes looked like skinny sausages — similar to those seen in the feathers of modern crows and ravens, says Shawkey, who reported the findings with Vinther and colleagues in Science in 2012 (SN Online: 3/9/12).

Three years later, Vinther laid out the case for inferring color — and ancient histories — from fossilized pigments in a review in Bioessays. Not only can the distinctive shapes of melanosomes offer clues, he noted, but chemical tests can help detect the presence of melanin itself. Finding this pigment in fossils, he argued, puts the old bacteria hypothesis to rest.

Schweitzer and colleagues disagreed with Vinther’s take in a review published in Bioessays later in 2015. Researchers need to be cautious when deducing the hues of extinct animals, the scientists wrote. Any melanosome look-alikes in fossilized feathers or skin could actually be microbes.

After all, microbes are everywhere. “These animals died in an environment that was not sterile and free from microbes,” Schweitzer says. “Think about it. If you take a piece of chicken and throw it out in your backyard, how long does it take for microbes to overgrow that chicken?”

The tiny organisms are hardy, too. Both microbes and the sticky biofilms they form are preserved in the fossil record. And, Schweitzer says, microbes and melanosomes overlap completely in shape and size, which makes the two tough to tell apart. What’s more, some microbes actually make melanin themselves; detecting the pigment in a fossil is not a rock-solid sign that the ancient animal was black, brown or freckled.

It’s not that Schweitzer or Bioessays coauthor Johan Lindgren, a geologist at Lund University in Sweden, doubt that melanosomes can leave traces in the fossil record. The issue, Lindgren says, is that not all round structures you find are melanosomes.

Chemical tests could help distinguish the two. Bacteria, for example, leave behind traces that can be identified with pyrolysis gas chromatography-mass spectrometry. But that requires samples to be vaporized. “It can mean destroying much of what you are trying to study,” says geochemist Roy Wogelius of the University of Manchester in England. “So it’s not always possible.”

Vinther’s new work isn’t likely to settle the debate. In fact, people were arguing both sides in October at a meeting of the Society of Vertebrate Paleontology in Salt Lake City.

Arindam Roy, a Bristol colleague of Vinther’s, reported size differences between fossilized melanosomes and bacteria growing on decaying chicken feathers in the lab. Alison Moyer, an N.C. State colleague of Schweitzer’s, said that looks weren’t enough. Finding keratin, a protein that typically surrounds melanosomes, could serve as evidence for pigments in fossils.

From color to camouflage
The fossil described in Vinther’s new paper is “spectacular,” Schweitzer says. “It’s got skin all over the place. I can’t think of too many dinosaur specimens that are preserved like this.”

The dinosaur lies on its back, flattened in a slab of volcanic rock. Skin covers a completely intact skeleton, and dozens of long bristles poke from the tail. Psittacosaurus, an herbivore that lived some 120 million years ago, walked on two legs and would have reached about half a meter in height.

“It would have been a supercute animal,” Vinther says. “It’s got this wide face and looks a little bit like E.T.”

Black material speckles the dinosaur’s body, tail and face. Vinther believes the material is the ancient remains of pigment. His team examined samples chipped from the fossil and saw what he considers the telltale orbs of melanosomes — mostly impressions in the rock but also some microbodies, the 3-D structures themselves.

Based on the dinosaur’s pigment patterns, it would have had a dark back that faded to a lighter belly. That type of coloring, called countershading, shows up in animals from penguins to fish and may act as a form of camouflage. It lightens parts of the body typically in shadow, and darkens parts typically exposed to light. “If you want to hide, it makes sense to try and obliterate those shadows,” Rowland says.

The team’s predicted shading pattern for diffuse light, the kind found under a forest canopy, matched a model painted like Psittacosaurus. “It’s like what we see in forest-living animals,” Vinther says. “This thing was camouflaged.”

Lingering doubts
Going from fossil to forest may be more of a leap than a step, other scientists suggest.

Psittacosaurus’ skin very well may contain ancient pigments, Wogelius says. “I don’t think it’s a crazy idea.” But, he adds, of Vinther’s group: “I don’t think they’ve proved what they claim.”

Vinther’s team, for example, used just four tiny fossil samples to extrapolate the coloring of the whole dinosaur. “I think it’s a bit of an overreach,” Wogelius says.

Schweitzer also notes that the specimen was varnished, presumably to protect the bones and soft tissues. It happened before Vinther and colleagues got their hands on the dinosaur and makes it impossible to perform the chemical tests that would bolster the claim for pigments. “Varnish is horribly destructive to fossils,” she says. “It totally ruins the specimen for other types of analysis.”

Vinther argues that his team has chemically analyzed other fossils and found evidence of melanin — not bacteria. The microbodies in those fossils look just like the ones in Psittacosaurus, he says.

Vinther’s team also saw evidence of just one kind of microbody, and it had a distinct round shape. If the structures were actually bacteria, he says, you’d expect to see a whole range of shapes and sizes. “Some of them would be shaped like corkscrews, some would have flagella, some would be humongous, some would be tiny.”

That’s the tricky part with bacteria, counters Lindgren. “In some cases you can have a huge consortium, but in other cases you can have one single type.”

Vinther’s interpretation has its supporters. “I was skeptical at first,” Sander says, “but now there’s been such an array of these little bodies that it’s pretty clear that at least some of them are not bacteria.” Despite some continuing controversy, Sander says many paleontologists now accept that microstructures in fossils may be melanosomes.

Additional research, though, “would help the entire community,” he says, “so that there are no longer any lingering doubts.”

Along with chemical tests, Schweitzer suggests, researchers could try transmission electron microscopy, a technique that blasts an electron beam through a thinly sliced sample. With TEM, melanosomes appear as black blobs. Bacteria tend to look different — in some cases, more like fried eggs.

Shawkey, for one, is looking to chemistry. In a paper published online November 14 in Palaeontology, his team used a technique called Raman spectroscopy to help build a case for feather color in a bird that died some 120 million years ago. In the feathers, the researchers spotted the skinny sausages of iridescent melanosomes and chemical signs of the pigment eumelanin. Shawkey thinks the chemical evidence could help “head off any criticism that we might encounter.”

Working through the field’s snags, paleontologists might come together to fill in the hues and tints, and potentially the habits and habitats, of ancient animals that until recently had been known primarily by their bones.