Male peacock spiders know how to work their angles and find their light.
The arachnids, native to Australia, raise their derriere — or, more accurately, a flap on their hind end — skyward and shake it to attract females. Hairlike scales cover their bodies and produce the vibrant colorations that make peacock spiders so striking.
Doekele Stavenga of the University of Groningen in the Netherlands and his colleagues collected Maratus splendens peacock spiders from a park outside Sydney and zoomed in on those scales. Using microscopy, spectrometry and other techniques, the team found that the spiders’ red, yellow and cream scales rely on two pigments, 3-OH-kynurenine and xanthommatin, to produce their colors. Even white scales contain low levels of pigment. Spines lining these scales scatter light randomly, giving them slightly different hues from different angles.

Blue scales are an entirely different story. They’re transparent and pigment-free. Instead, the scales’ architecture reflects iridescent blue and purple hues. Each peapodlike scale is lined with tiny ridges on the outside and a layer of threadlike fibers on the inside. Fiber spacing may determine whether scales appear more blue or more purple.
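That last observation can be made concrete with a standard relation for periodic reflectors (a generic textbook model, not one taken from the paper): a stack of fibers with spacing $d$ and effective refractive index $n$ reflects most strongly at wavelengths where reflections from successive layers interfere constructively,

$$m\,\lambda \approx 2\,n\,d\cos\theta, \qquad m = 1, 2, \ldots,$$

so slightly tighter fiber spacing shifts the reflected peak toward shorter, more violet wavelengths, and looser spacing shifts it toward blue.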
Whether peacock spiders’ eyes can actually see these posterior patterns is an open question, Stavenga and his colleagues write in the August Journal of the Royal Society Interface. Given that other jumping spiders see at least three color ranges, it seems unlikely that such vivid come-hither choreography plays out in black and white.
WASHINGTON — A quantum internet could one day allow ultrasecure communications worldwide — but first, scientists must learn to tame unruly quantum particles such as electrons and photons. Several new developments in quantum technology, discussed at a recent meeting, have brought scientists closer to such mastery. Scientists are now teleporting particles’ properties across cities, satellite experiments are gearing up for quantum communications in space, and other scientists are developing ways to hold quantum information in memory.
In one feat, scientists achieved quantum teleportation across long distances through metropolitan areas. Quantum teleportation transfers quantum properties of one particle to another instantaneously. (It doesn’t allow for faster-than-light communication, though, because additional information has to be sent through standard channels.) Using a quantum network in Calgary, scientists teleported quantum states of photons over 6.2 kilometers. “It’s one step towards … achieving a global quantum network,” says Raju Valivarthi of the University of Calgary in Canada, who presented the result at the International Conference on Quantum Cryptography, QCrypt, on September 12.
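For readers who want to see the logic of the protocol itself, here is a minimal NumPy sketch of textbook quantum teleportation, simulated on state vectors. It illustrates the scheme, not the Calgary team's photonic hardware; the amplitudes a and b are arbitrary.

```python
# Textbook quantum teleportation on 3 qubits, simulated with state vectors.
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]])                 # bit flip
Z = np.array([[1, 0], [0, -1]])                # phase flip
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

a, b = 0.6, 0.8                              # the state to teleport: a|0> + b|1>
psi = np.array([a, b])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # entangled pair shared with Bob
state = np.kron(psi, bell)                   # qubits 0,1 are Alice's; qubit 2 is Bob's

# Alice entangles her unknown qubit with her half of the pair, then measures.
state = kron(CNOT, I) @ state                # qubit 0 controls qubit 1
state = kron(H, I, I) @ state

rng = np.random.default_rng(0)
amps = state.reshape(2, 2, 2)                # indexed [qubit0, qubit1, qubit2]
probs = [np.linalg.norm(amps[m0, m1]) ** 2 for m0 in (0, 1) for m1 in (0, 1)]
m0, m1 = divmod(int(rng.choice(4, p=probs)), 2)   # Alice's two classical bits

bob = amps[m0, m1] / np.linalg.norm(amps[m0, m1])  # Bob's qubit after collapse
if m1:
    bob = X @ bob                            # corrections chosen by the two bits
if m0:
    bob = Z @ bob
print(np.allclose(bob, psi))                 # True: Bob now holds a|0> + b|1>
```

The two measured bits are what must travel over an ordinary channel before Bob can apply his corrections, which is why teleportation cannot carry a message faster than light.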
A second group of scientists recently teleported photons using a quantum network spread through the city of Hefei, China. The two teams published their results online September 19 in Nature Photonics.
The weird properties of quantum particles make quantum communication possible: They can be in two places at once, or can have their properties linked through quantum entanglement. Tweak one particle in an entangled pair, and you can immediately seem to affect the other — what Albert Einstein called “spooky action at a distance.” Using quantum entanglement, people can securely exchange quantum keys — codes which can be used to encrypt top-secret messages. (SN: 11/20/10, p. 22). Any eavesdropper spying on the quantum key exchange would be detected, and the keys could be thrown out.
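A toy simulation also makes the eavesdropper-detection claim concrete. The sketch below implements BB84, the prepare-and-measure cousin of the entanglement-based schemes described above (chosen because it is the simplest to simulate): an intercept-resend attacker cannot know which basis each photon was prepared in, so spying pushes the error rate in the shared key from roughly zero to about 25 percent, which the communicating parties can catch by comparing a sample of their bits.

```python
# BB84 with an optional intercept-resend eavesdropper, classically simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 2000

def measure(bit, prep_basis, meas_basis):
    # Matching bases reproduce the bit; mismatched bases give a coin flip.
    return bit if prep_basis == meas_basis else int(rng.integers(0, 2))

alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)   # 0 = rectilinear, 1 = diagonal
bob_bases = rng.integers(0, 2, n)
eavesdrop = True

bob_bits = np.empty(n, dtype=int)
for i in range(n):
    bit, basis = int(alice_bits[i]), int(alice_bases[i])
    if eavesdrop:                     # Eve measures in a random basis, resends
        eve_basis = int(rng.integers(0, 2))
        bit, basis = measure(bit, basis, eve_basis), eve_basis
    bob_bits[i] = measure(bit, basis, int(bob_bases[i]))

keep = alice_bases == bob_bases       # "sifting": keep rounds with matched bases
error = np.mean(alice_bits[keep] != bob_bits[keep])
print(f"error rate in sifted key: {error:.1%}")   # ~25% with Eve, ~0% without
```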
In practice, quantum particles can travel only so far. As photons are sent back and forth through optical fibers, many are lost along the way. But certain techniques can be used to expand their range. Quantum teleportation systems could be used to create quantum repeaters, which could be chained together to extend networks farther. But in order to function, quantum repeaters would also require a quantum memory to store entanglement until all the links in the chain are ready, says Ronald Hanson of Delft University of Technology in the Netherlands. Using a system based on quantum entanglement of electrons in diamond chips, Hanson’s team has developed a quantum memory by transferring the entanglement of the electrons to atomic nuclei for safekeeping, he reported at QCrypt on September 15.
Satellites could likewise allow quantum communication from afar. In August, China launched a satellite to test quantum communication from space; other groups are also studying techniques for sending delicate quantum information to space and back again (SN Online: 6/5/16), beaming up photons through free space instead of through optical fibers. “A free-space link is essential if you want to go to real long distance,” Giuseppe Vallone of the University of Padua in Italy said in a session at QCrypt on September 14. Particles can travel farther when sent via quantum satellite — due to the emptiness of space, fewer photons are absorbed or scattered away.

Quantum networks could also benefit from processes that allow the use of scaled-down “quantum fingerprints” of data, to compare files without sending excess data, Feihu Xu of MIT reported at QCrypt on September 12. To check if two files are identical — for example, in order to find illegally pirated movies — one might compare all the bits in each file. But in fact, a subset of the bits — or a fingerprint — can do the job well. By harnessing the power of quantum mechanics, Xu and colleagues were able to compare messages using less information than classical methods require.
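The classical baseline that quantum fingerprinting improves on is ordinary hashing: exchange short digests instead of whole files. The sketch below shows only that baseline (the file names are hypothetical); the quantum protocol's advantage lies in how little information must be exchanged.

```python
# Classical file fingerprints: compare 32-byte digests, not whole files.
import hashlib

def fingerprint(path: str, chunk: int = 1 << 20) -> bytes:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.digest()   # 32 bytes, no matter how large the file is

# Each party sends 32 bytes; equal digests mean matching files
# (up to the vanishingly small chance of a hash collision).
# same = fingerprint("movie_a.mp4") == fingerprint("movie_b.mp4")
```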
The quantum internet relies on the principles of quantum mechanics, which modern-day physicists generally accept — spooky action and all. In 2015, scientists finally confirmed that a key example of quantum weirdness is real, with a souped-up version of a test known as a Bell test, which closed loopholes that had weakened earlier Bell tests (SN: 9/19/15, p. 12). Loophole-free Bell tests were necessary to squelch any lingering doubts, but no one expected any surprises, says Charles Bennett of the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y. “In a certain sense it’s beating a dead horse.”
But Bell tests have applications for the quantum internet as well — they are a foundation of an even more secure type of quantum communication, called device-independent quantum key distribution. Typically, secure exchanges of quantum keys require that the devices used are trustworthy, but device-independent methods do away with this requirement. This is “the most safe way of quantum communication,” says Hanson. “It does not make any assumptions about the internal workings of the device.”
This issue marks the second year that Science News has reached out to science notables and asked: Which up-and-coming scientist is making a splash? Whose work impresses you? Tell us about early- to mid-career scientists who have the potential to change their fields and the direction of science more generally.
This year, we expanded the pool of people we asked. We reached out to Nobel laureates again and added recently elected members of the National Academy of Sciences. That allowed us to consider shining lights from a much broader array of fields, from oceanography and astronomy to cognitive psychology. Another difference this year: We spent time face-to-face with many of those selected, to get a better sense of them both as scientists and as people.

The result is the SN 10, a collection of stories not only about science, but also about making a life in science. They are stories of people succeeding because they have found what they love, be it working in the lab on new ways to probe molecular structures or staring up at the stars in search of glimmers of the early universe. In my interviews with chemist Phil Baran, I was struck by his drive to do things in new ways, whether devising chemical reactions or developing ideas about how to fund research. (If you can, he says, go private.) Laura Sanders, who met with neuroscientist Jeremy Freeman, was intrigued by his way of seeing a problem (siloed data that can’t be easily shared or analyzed) and figuring out solutions, even if those solutions were outside his area of expertise.
Of course, there are many ways to identify noteworthy scientists — and there’s plenty more fodder out there for future years. Our approach was to seek standouts, asking who deserved recognition for the skill of their methods, the insights of their thinking, the impacts of their research. Not all of the SN 10’s work has made headlines, but they all share something more important: They are participants in building the science of the future.
Notably, many of them do basic research. I think that’s because it’s the type of work that other scientists notice, even if it’s not always on the radar of the general public. But that’s where fundamental advances are often made, as scientists explore the unknown.
That edge of what’s known is where Science News likes to explore, too. Take the bet-ending, head-scratching results from the Large Hadron Collider, which have failed to reveal the particles that the equations of supersymmetry predict. As Emily Conover reports in “Supersymmetry’s absence at LHC puzzles physicists,” that means the theory is either more complicated than originally thought or simply not true, letting down those who looked to supersymmetry to help explain a few enduring mysteries, from the nature of dark matter to the mass of the Higgs boson.
Other mysteries may be closer to a solution, as Sanders reports in “New Alzheimer’s drug shows promise in small trial.” A new potential treatment for Alzheimer’s disease reduced amyloid-beta plaques in patients. It also showed hints of improving cognition. That’s standout news, a result built on decades of basic research by many, many bright young scientists.
In smart homes of the future, computers may identify inhabitants and cater to their needs using a tool already at hand: Wi-Fi. Human bodies partially block the radio waves that carry the wireless signal between router and computer. Differences in shape, size and even gait among household members yield different patterns in the received Wi-Fi signals. A computer can analyze the signals to distinguish dad from mom, according to a report posted online August 11 at arXiv.org.
Scientists built an algorithm that was nearly 95 percent accurate at telling apart two adults walking between a wireless router and a computer. For six people, accuracy fell to about 89 percent. Scientists tested the setup on men and women of various sizes, but it should work with children as well, says study coauthor Bin Guo of Northwestern Polytechnical University in Xi’an, China.
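The paper's algorithm isn't reproduced here, but the general shape of such a system can be sketched: windows of Wi-Fi channel measurements become feature vectors, and an off-the-shelf classifier learns which household member produced them. Everything below, including the synthetic "signal" data, is a stand-in for real channel traces.

```python
# Sketch: classify people from Wi-Fi-style feature vectors (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, samples_per_person, n_features = 2, 200, 64

# Stand-in for channel-state windows: each person perturbs the signal
# around a distinctive mean pattern (shape, size and gait differ).
X = np.vstack([rng.normal(loc=p, scale=1.0, size=(samples_per_person, n_features))
               for p in range(n_people)])
y = np.repeat(np.arange(n_people), samples_per_person)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.0%}")
```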
In a home rigged with Wi-Fi and a receiver, the system could eventually identify family members and tailor heating and lighting to their preferences — maybe even cue up a favorite playlist.
Nature’s rarest type of atomic nucleus is not giving up its secrets easily.
Scientists looking for the decay of an unusual form of the element tantalum, known as tantalum-180m, have come up empty-handed. Tantalum-180m’s hesitance to decay indicates that it has a half-life of at least 45 million billion years, Bjoern Lehnert and colleagues report online September 13 at arXiv.org. “The half-life is longer than a million times the age of the universe,” says Lehnert, a nuclear physicist at Carleton University in Ottawa. (Scientists estimate the universe’s age at 13.8 billion years.) Making up less than two ten-thousandths of a percent of the mass of the Earth’s crust, the metal tantalum is uncommon. And tantalum-180m is even harder to find. Only 0.01 percent of tantalum is found in this state, making it the rarest known long-lived nuclide, or variety of atom.
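The arithmetic behind Lehnert’s comparison is straightforward: 45 million billion years is $4.5 \times 10^{16}$ years, so

$$\frac{4.5 \times 10^{16}\ \text{years}}{1.38 \times 10^{10}\ \text{years}} \approx 3 \times 10^{6},$$

a few million times the age of the universe.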
Tantalum-180m is a bit of an oddball. It is what’s known as an isomer — its nucleus exists in an “excited,” or high-energy, configuration. Normally, an excited nucleus would quickly drop to a lower energy state, emitting a photon — a particle of light — in the process. But tantalum-180m is “metastable” (hence the “m” in its name), meaning that it gets stuck in its high-energy state.

Tantalum-180m is thought to decay by emitting or capturing an electron, morphing into another element — either tungsten or hafnium — in the process. But this decay has never been observed. Other unusual nuclides, such as those that decay by emitting two electrons simultaneously, can have even longer half-lives than tantalum-180m. But tantalum-180m is unique — it is the longest-lived isomer found in nature. “It’s a very interesting nucleus,” says nuclear physicist Eric Norman of the University of California, Berkeley, who was not involved with the study.

Scientists don’t have a good understanding of such unusual decays, and a measurement of the half-life would help pin down the details of the process and the nucleus’ structure. Lehnert and colleagues observed a sample of tantalum with a detector designed to catch photons emitted in the decay process. After running the experiment for 176 days, and adding in data from previous incarnations of the experiment, the team saw no evidence of decay. The half-life couldn’t be shorter than 45 million billion years, the scientists determined, or they would have seen some hint of the process. “They did a state-of-the-art measurement,” says Norman. “It’s a very difficult thing to see.”
The presence of tantalum-180m in nature is itself a bit of a mystery, too. The element-forging processes that occur in stars and supernovas seem to bypass the nuclide. “People don’t really understand how it is created at all,” says Lehnert.
Tantalum-180m is interesting as a potential energy source, says Norman, although “it’s kind of a crazy idea.” If scientists could find a way to tap the energy stored in the excited nucleus by causing it to decay, it might be useful for applications like nuclear lasers, he says.
Motors too small to see with the eye may soon have the power to drive innovations in chemistry, biology and computing. Three creators of such nanoscopic machines were honored October 5 with the Nobel Prize in chemistry.
Sharing the prize of 8 million Swedish kronor (about $930,000) equally are Jean-Pierre Sauvage, J. Fraser Stoddart and Bernard Feringa. “If you had to choose three people at the top of the field, that’s it. These are the men,” says James Tour, a nanotechnologist at Rice University in Houston.

Recognition of the burgeoning field of molecular motors will draw more money and inspire children to become scientists, says Donna Nelson, an organic chemist at the University of Oklahoma in Norman and the president of the American Chemical Society. “It will benefit not only these three chemists, it will benefit the entire field of chemistry.”

Chemists and physicists have envisioned molecular machines since at least the 1960s, but were never able to reliably produce complex structures. Then in 1983, Sauvage, of the University of Strasbourg in France, devised a method for making interlocking molecular rings, or catenanes. Sauvage’s molecular chain set the stage for the rest of the field (SN: 9/8/90, p. 149).
Stoddart, of Northwestern University in Evanston, Ill., improved the efficiency so that he could produce large quantities of molecular machines, starting in 1991 with rings clipped around a central axle. That structure is known as a rotaxane. He and colleagues learned to control the slide of the rings along the axle, making a simple molecular switch. Such switches could be used to create molecular computers or drug delivery systems. Stoddart showed in 2000 that it was possible to make molecular “muscles” using interlocking rings and axles. Stoddart and colleagues have since devised molecular elevators and pumps based on the same molecules.

Feringa, of the University of Groningen in the Netherlands, ramped things up another notch in 1999 by building the first molecular motor. Things move so differently at the molecular scale that many researchers weren’t sure anyone could precisely control the motion of molecular motors, says R. Dean Astumian of the University of Maine in Orono. Feringa’s innovation was to devise asymmetric molecules that would spin in one direction when hit with a pulse of light.
Up to 50,000 of the motors could span the width of a human hair, says Tour. Alone, one of the spinning motors doesn’t pack much punch (SN: 2/7/04, p. 94), but harnessed together in large numbers the little motors can do big work, he says. Groups of the whirring motors powered by light can rotate a glass rod thousands of times their size and do other work on a macroscopic scale. Feringa also harnessed his motors into a four-wheel-drive “nanocar” (SN: 12/17/11, p. 8).
The process of making molecular machines has improved drastically over recent decades, thanks in large part to the work of the three newly christened laureates, says Rigoberto Advincula, a chemist at Case Western Reserve University in Cleveland. Scientists have a better understanding of how to construct molecules that more reliably bend, loop and connect to form shapes. “You don’t have tweezers to put them together,” he says. “You template the reaction so that the thread goes through the ring. That then makes it easier for the two thread ends to meet each other.” New techniques have also allowed the production of more intricate shapes. Further development will bring these processes to even bigger scales, allowing for the design of molecular machines for everything from energy harvesting to building protein complexes, Advincula says.

Such applications are still on the horizon, and no one yet knows what sorts of machines chemists can make from molecules. When people question Feringa about what his molecular motors can be used for, he “feels a bit like the Wright brothers” when people asked them after their first flight why they needed a flying machine, he said by telephone during the announcement of the prize. There are “endless opportunities,” including nanomachines that can seek and destroy tumor cells or deliver drugs to just the cells that need them, Feringa speculated.
Stoddart, who was born in Edinburgh and moved to the United States in 1997, applauded the Nobel committee for recognizing “a piece of chemistry that is extremely fundamental in its making and being.” Sauvage, in particular, created a new type of molecular bond in order to forge his chain, Stoddart said during a news conference. “New chemical compounds are probably several thousand a day worldwide,” he said. “New chemical reactions, well, maybe a dozen or two a month. Maybe I go over the top there. But new bonds, they are few and far between. They are really the blue moons. So I think that’s what’s being recognized, more than anything.”
Two trillion galaxies. That’s the latest estimate for the number of galaxies that live — or have lived — in the observable universe, researchers report online October 10 at arXiv.org. This updated headcount is roughly 10 times greater than previous estimates and suggests that there are a lot more galaxies out there for future telescopes to explore.
Hordes of relatively tiny galaxies, weighing as little as 1 million suns, are responsible for most of this tweak to the cosmic census. Astronomers haven’t directly seen these galaxies yet. Christopher Conselice, an astrophysicist at the University of Nottingham in England, and colleagues combined data from many ground- and space-based telescopes to look at how the number of galaxies in a typical volume of the universe has changed over much of cosmic history. They then calculated how many galaxies have come and gone in the universe.
The galactic population has dwindled over time, as most of those 2 trillion galaxies collided and merged to build larger galaxies such as the Milky Way, the researchers suggest. That’s in line with prevailing ideas about how massive galaxies have been assembled. Seeing many of these remote runts, however, is beyond the ability of even the next generation of telescopes. “We will have to wait at least several decades before even the majority of galaxies have basic imaging,” the researchers write.
Staphylococcal infections — especially rampant in hospitals and responsible for … some fatal disorders — may be virtually stamped out. Researchers … have extracted teichoic acid from the bacteria’s cell wall and used it to protect groups of mice from subsequent massive doses of virulent staph organisms. — Science News, October 29, 1966
UPDATE Staphylococcus aureus has not been conquered. As antibiotic resistance grows, the pressure is on to find ways to stop the deadly microbe. A vaccine that targets S. aureus’ various routes of infection is being tested in patients having back surgery. Ideally, doctors would use the vaccine to protect hospital patients and people with weakened immune systems. The vaccine is the furthest along of several in development. Meanwhile, a natural antibiotic recently found in human noses may lead to drugs that target antibiotic-resistant staph (SN: 8/20/16, p. 7).
Crucial immune system proteins that make it harder for viruses to replicate might also help the attackers avoid detection, three new studies suggest. When faced with certain viruses, the proteins can set off a cascade of cell-to-cell messages that destroy antibody-producing immune cells. With those virus-fighting cells depleted, it’s easier for the invader to persist inside the host’s body.
The finding begins to explain a longstanding conundrum: how certain chronic viral infections can dodge the immune system’s antibody response, says David Brooks, an immunologist at the University of Toronto not involved in the research. The new studies, all published October 21 in Science Immunology, pin the blame on the same set of proteins: type 1 interferons. Normally, type 1 interferons protect the body from viral siege. They snap into action when a virus infects cells, helping to activate other parts of the immune system. And they make cells less hospitable to viruses so that the foreign invaders can’t replicate as easily.
But in three separate studies, scientists tracked the immune response of mice infected with lymphocytic choriomeningitis virus, or LCMV. In each case, type 1 interferon proteins masterminded the loss of B cells, which produce antibodies specific to the virus being fought. Normally, those antibodies latch on to the target virus, flagging it for destruction by other immune cells called T cells. With fewer B cells, the virus can evade capture for longer.
The proteins’ response “is driving the immune system to do something bad to itself,” says Dorian McGavern, an immunologist at the National Institute of Neurological Disorders and Stroke in Bethesda, Md., who led one of the studies.
The interferon proteins didn’t directly destroy the B cells; they worked through middlemen instead. These intermediaries differed depending on factors including the site of infection and how much of the virus the mice received. T cells were one intermediary. McGavern and his colleagues filmed T cells actively destroying their B cell compatriots under the direction of the interferon proteins. When the scientists deleted those T cells, the B cells didn’t die off even though the interferons were still hanging around. Another study found that the interferons were sending messages not just through T cells, but via a cadre of other immune cells, too. Those messages told B cells to morph into cells that rapidly produce antibodies for the virus. But those cells die off within a few days instead of mounting a longer-term defense.
That strategy could be helpful for a short-term infection, but less successful against a chronic one, says Daniel Pinschewer, a virologist at the University of Basel in Switzerland who led that study. Throwing the entire defense arsenal at the virus all at once leaves the immune system shorthanded later on.
But interferon activity could prolong even short-term viral infections, a third study showed. There, scientists injected lower doses of LCMV into mice’s footpads and used high-powered microscopes to watch the infection play out in the lymph nodes. In this case, the interferon stifled B cells by working through inflammatory monocytes, white blood cells that rush to infection sites.
“The net effect is beneficial for the virus,” says Matteo Iannacone, an immunologist at San Raffaele Scientific Institute in Milan who led the third study. Sticking around even a few days longer gives the virus more time to spread to new hosts.
Since all three studies looked at the same virus, it’s not yet clear whether the mechanism extends to other viral infections. That’s a target for future research, Iannacone says. But Brooks thinks it’s likely that other viruses that dampen antibody response (like HIV and hepatitis C) could also be exploiting type 1 interferons.
In a high-ceilinged laboratory at Children’s National Health System in Washington, D.C., a gleaming white robot stitches up pig intestines.
The thin pink tissue dangles like a deflated balloon from a sturdy plastic loop. Two bulky cameras watch from above as the bot weaves green thread in and out, slowly sewing together two sections. Like an experienced human surgeon, the robot places each suture deftly, precisely — and with intelligence.
Or something close to it. For robots, artificial intelligence means more than just “brains.” Sure, computers can learn how to recognize faces or beat humans in strategy games. But the body matters too. In humans, eyes and ears and skin pick up cues from the environment, like the glow of a campfire or the patter of falling raindrops. People use these cues to take action: to dodge a wayward spark or huddle close under an umbrella.
Part of intelligence is “walking around and picking things up and opening doors and stuff,” says Cornell computer scientist Bart Selman. It “has to do with our perception and our physical being.” For machines to function fully on their own, without humans calling the shots, getting physical is essential. Today’s robots aren’t there yet — not even close — but amping up the senses could change that.
“If we’re going to have robots in the world, in our home, interacting with us and exploring the environment, they absolutely have to have sensing,” says Stanford roboticist Mark Cutkosky. He and a group of like-minded scientists are making sensors for robotic feet and fingers and skin — and are even helping robots learn how to use their bodies, like babies first grasping how to squeeze a parent’s finger.
The goal is to build robots that can make decisions based on what they’re sensing around them — robots that can gauge the force needed to push open a door or figure out how to step carefully on a slick sidewalk. Eventually, such robots could work like humans, perhaps even caring for the elderly. Such machines of the future are a far cry from that shiny white surgery robot in the D.C. lab, essentially an arm atop a cart. But today’s fledgling sensing robots mark the slow awakening of machines to the world around them, and themselves.
“By adding just a little bit of awareness to the machine,” says pediatric surgeon Peter Kim of the children’s hospital, “there’s a huge amount of benefit to gain.”
Born to run

The pint-size machine running around Stanford’s campus doesn’t look especially self-aware. It’s a rugged sort of robot, with stacked circuit boards and bundles of colorful wires loaded on its back. It scampers over grass, gravel, asphalt — any surface roboticist Alice Wu can find.
For weeks this summer, Wu took the traveling bot outside, placed it on the ground, and then, “I let her run,” she says. The bot isn’t that fast (its top speed is about half a meter per second), and it doesn’t go far, but Wu is trying to give it something special: a sense of touch. Wu calls the bot SAIL-R, for Sensorized Adaptive Intelligence Legged Robot.
Fixed to each of its six C-shaped legs are tactile sensors that can tell how hard the robot hits the ground. Most robots don’t have tactile sensing on their feet, Wu says. “When I first got into this, I thought that was crazy. So much effort is focused on hands and arms.” But feet make contact with the world too.
Feeling the ground, in fact, is crucial for walking. Most people tailor their gait to different surfaces without even thinking, feet pounding the ground on a run over grass, or slowing down on a street glazed with ice. Wu wants to make robots that, like humans, sense the surface they’re on and adjust their walk accordingly.
Walking robots have already ventured out into the world: Last year, a competition sponsored by DARPA, the Department of Defense agency that funds advanced research, showcased a lineup of semiautonomous robots that walked over rubble and even climbed stairs (SN: 12/13/14, p. 16). But they didn’t do it on their own; hidden away in control rooms, human operators pulled the strings.
One day, Wu says, machines could feel the ground and learn for themselves the most efficient way to walk. But that’s a tall order. For one, researchers can’t simply glue the delicate sensors designed for a robot’s hands onto its feet. “The feet are literally whacking the sensor against the ground very, very hard,” Wu says. “It’s unforgiving contact.”
That’s the challenge with tactile sensing in general, says Cutkosky, Wu’s adviser at Stanford. Scientists have to build sensors that are tough, that can survive impact and abrasion and bending and water. It’s one reason physical intelligence has advanced so slowly, he says.
“You can’t just feed a supercomputer thousands of training examples,” Cutkosky says, the way AlphaGo learned how to play Go (SN Online: 3/15/16). “You actually have to build things that interact with the world.” Cutkosky would know. His lab is famous for building such machines: tiny “microTugs” that can team up, antlike, to pull a car, and a gecko-inspired “Stickybot” that climbs walls. Tactile sensing could make these and other robots smarter.
Wu and colleagues presented a new sensor at IROS 2015, a meeting on intelligent robots and systems in Hamburg, Germany. The sensor, a sandwich of rubber and circuit boards, can measure adhesion forces — what a climbing robot uses to stick to walls. Theoretically, such a device could tell a bot if its feet were slipping so it could adjust its grip to hang on. And because the postage stamp–sized sensor is tough, it might actually survive life on little robot feet.
Wu has used a similar sort of sensor on an indoor, two-legged bot, the predecessor to the six-legged SAIL-R. The indoor bot can successfully distinguish between hard, slippery, grassy and sandy surfaces more than 90 percent of the time, Wu reported in IEEE Robotics and Automation Letters in July.
That could be enough to keep a bot from falling. On a patch of ice, for example, “it would say, ‘Uh-oh, this feels kind of slippery. I need to slow down to a walk,’ ” Wu says.
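Wu's actual features and model aren't described here, so the sketch below only illustrates the general recipe: reduce each foot strike to a couple of touch features (impact peak and dominant vibration frequency) and hand them to a nearest-neighbor classifier. The contact signals are synthetic stand-ins.

```python
# Sketch: classify surfaces from synthetic foot-contact signals.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)

def fake_step(surface, n=256):
    # Toy contact-force trace: harder surfaces ring at higher
    # frequencies and hit with sharper peaks than soft ones.
    hardness = {"hard": 1.0, "slippery": 0.8, "grassy": 0.4, "sandy": 0.2}[surface]
    t = np.linspace(0, 1, n)
    sig = hardness * np.exp(-5 * t) * np.sin(2 * np.pi * (20 + 60 * hardness) * t)
    return sig + rng.normal(0, 0.05, n)

def features(sig):
    spectrum = np.abs(np.fft.rfft(sig))
    return [sig.max(), int(spectrum.argmax())]   # impact peak, dominant frequency

surfaces = ["hard", "slippery", "grassy", "sandy"]
X = [features(fake_step(s)) for s in surfaces for _ in range(50)]
y = [s for s in surfaces for _ in range(50)]
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(clf.predict([features(fake_step("grassy"))]))   # -> ['grassy']
```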
Ideally, Cutkosky says, robots should be covered with tactile sensors — just like human skin. But scientists are still figuring out how a machine would deal with the resulting deluge of information.
Smart skin

Even someone sitting (nearly) motionless at a desk in a quiet, temperature-controlled office is bombarded with information from the senses.
Fluorescent lights flicker, air conditioning units hum and the tactile signals are too numerous to count. Fingertips touch computer keys, feet press the floor, forearms rest on the desk. If people couldn’t tune out some of the “noise” picked up by their skin, it would be total sensory overload.
“You have millions of tactile sensors, but you don’t sit there and say, ‘OK, what’s going on with my millions of tactile sensors,’ ” says Nikolaus Correll, a roboticist at the University of Colorado Boulder. Rather, the brain gets a filtered message, more of a big-picture view.
That simplified strategy may be a winner for robotic skin, too. Instead of sending every last bit of sensing data to a centralized robotic brain, the skin should do some of the computing itself, says Correll, who made the case for such “smart” materials in Science in 2015.
“When something interesting happens, [the skin] could report to the brain,” Correll says. Like human skin, artificial skin could take all the vibration info received from a nudge, or a tap to the shoulder, and translate it into a simpler message for the brain: “The skin could say, ‘I was tapped or rubbed or patted at this position,’ ” he says. That way, the robot’s brain doesn’t have to constantly process a flood of vibration data from the skin’s sensors. It’s called distributed information processing.

Correll and Colorado colleague Dana Hughes tested the idea with a stretchy square of rubbery skin mounted on the back of an industrial robot named Baxter. Throughout the skin, they placed 10 vibration sensors paired with 10 tiny computers. Then the team trained the computers to recognize different textures by rubbing patches of cotton, cardboard, sandpaper and other materials on the skin.
Their sensor/computer duo was able to distinguish between 15 textures about 70 percent of the time, Hughes and Correll reported in Bioinspiration & Biomimetics in 2015. And that’s with no centralized “brain” at all. That kind of touch discrimination brings the robotic skin a step closer to human skin. Making robotic parts with such sensing abilities “will make it much easier to build a dexterous, capable robot,” Correll says.
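In code, the distributed idea looks something like the toy below: each skin patch owns its own sensor stream, does its computing locally, and reports upward only when something crosses a threshold. The event format and thresholds are invented for illustration.

```python
# Toy distributed skin: patches summarize locally, report only events.
import numpy as np

class SkinPatch:
    def __init__(self, position, threshold=0.03):
        self.position = position
        self.threshold = threshold

    def process(self, vibration):
        # Runs on the patch's own tiny computer, not the central brain.
        energy = float(np.mean(vibration ** 2))
        if energy < self.threshold:
            return None   # nothing interesting: stay quiet
        kind = "tap" if vibration.max() > 3 * vibration.std() else "rub"
        return {"where": self.position, "what": kind}

rng = np.random.default_rng(3)
patches = [SkinPatch(pos) for pos in range(10)]
streams = rng.normal(0, 0.1, (10, 500))   # background noise everywhere
streams[4, 250] += 5.0                    # a sharp tap on patch 4

events = [p.process(s) for p, s in zip(patches, streams)]
print([e for e in events if e])           # the brain sees only: a tap at patch 4
```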
And with smart skin, robots could invest more brainpower in the big stuff, what humans begin learning at birth — how to use their own bodies.
Zip it

In UCLA’s Biomechatronics Lab, a green-fingered robot just figured out how to use its body for one seemingly simple task: closing a plastic bag.
Two deformable finger pads pinch the blue seal with steady pressure (the enclosed Cheerios barely tremble) as the robot slides its hand slowly along the plastic zipper. After about two minutes, the fingers reach the end, closing the bag.

It’s deceptively difficult. The bag’s shape changes as it’s manipulated — tough for robotic fingers to grasp. It’s also transparent — not easily detectable by computer vision. You can’t just tell the robot to move its fingertips horizontally along the zipper, says Veronica Santos, a roboticist at UCLA. She and colleague Randall Hellman, a mechanical engineer, tried that. It’s too hard to predict how the bag will bend and flex. “It’s a constant moving target,” Santos says.
So the researchers let the robot learn how to close the bag itself.
First they had the bot randomly move its fingers along the zipper, while collecting data from sensors in the fingertips — how the skin deforms, what vibrations it picks up, how fluid pressure in the fingertips changes. Santos and Hellman also taught the robot where the zipper was in relation to the finger pads. The sweet spot is smack dab in the middle, Santos says.
Then the team used a type of algorithm called reinforcement learning to teach the robot how to close the bag. “This is the exciting part,” Santos says. The program gives the robot “points” for keeping the zipper in the fingers’ sweet spot while moving along the bag.
“If good stuff happens, it gets rewarded,” Santos says. When the bot holds the zipper near the center of the finger pads, she explains, “it says, ‘Hey, I get points for that, so those are good things to do.’ ”
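A toy version of that reward loop, not the lab's actual algorithm, can be written as tabular Q-learning: the state is where the zipper sits across the finger pad, the actions nudge the grip, and reward flows whenever the zipper stays in the sweet spot while the bag flexes unpredictably underneath.

```python
# Toy Q-learning: keep a zipper centered on the finger pad while it drifts.
import numpy as np

rng = np.random.default_rng(4)
n_states, actions = 5, (-1, 0, +1)    # zipper position 0..4; sweet spot is 2
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount, exploration

for episode in range(2000):
    s = int(rng.integers(n_states))   # zipper starts anywhere
    for step in range(20):
        greedy = int(Q[s].argmax())
        a = int(rng.integers(len(actions))) if rng.random() < eps else greedy
        drift = int(rng.choice((-1, 0, 1)))      # the bag flexes on its own
        s2 = int(np.clip(s + actions[a] + drift, 0, n_states - 1))
        r = 1.0 if s2 == 2 else -abs(s2 - 2)     # "points" for the sweet spot
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# Learned policy: action index per state (0: nudge left, 1: hold, 2: nudge right)
print(Q.argmax(axis=1))   # pushes the zipper back toward the center state
```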
She and Hellman reported successful bag closing in April at the IEEE Haptics Symposium in Philadelphia. “The robot actually learned!” Santos says. And in a way that would have been hard to program.
It’s like teaching someone how to swing a tennis racket, she says. “I can tell you what you’re supposed to do, and I can tell you what it might feel like.” But to smash a ball across a net, “you’re going to have to do it and feel it yourself.”
Learning by doing may be the way to get robots to tackle all sorts of complicated tasks, or simple tasks in complicated situations. The crux is embodiment, Santos says, or the robot’s awareness that each of its actions brings an ever-shifting kaleidoscope of sensations.

Smooth operator

Awareness of the sights of surgery, and what to make of them, is instrumental for a human or machine trying to stitch up soft tissue.
Skin, muscle and organs are difficult to work with, says Kim, the surgeon at Children’s National Health System. “You’re trying to operate on shiny, glistening, blood-covered tissues,” he says. “They’re different shades of pink and they’re moving around all the time.”
Surgeons adjust their actions in response to what they see: a twisting bit of tissue, for example, or a spurt of fluid. Machines typically can’t gauge their location amid slippery organs or act fast when soft tissues tear. Robots needed an easier place to start. So, in 1992, surgery bots began working on bones: rigid material that tends to stay in one place.
In 2000, the U.S. Food and Drug Administration approved the first surgery robot for soft tissue: the da Vinci Surgical System, which looks like a prehistoric version of Kim’s surgery machine. Da Vinci is about as wide as a king-sized mattress and reaches 6 feet tall in places, with three mechanical arms tipped with disposable tools. Nearby, a bulky gray cart holds two silver hand controls for human surgeons.
In the cart’s backless seat, a surgeon would lean forward into a partially enclosed pod, hands gripping controls, feet working pipe organ–like pedals. To move da Vinci’s surgical tools, the surgeon would manipulate the controls, like those claw cranes kids use to pick up stuffed animals at arcades. “It’s what we call master/slave,” Kim says. “Essentially, the robot does exactly what the surgeon does.”
Da Vinci can manipulate tiny tools and keep incisions small, but it’s basically a power tool. “It has no awareness,” Kim says, “no intelligence.” The visual inputs of surgery are processed by human brains, not a computer.

Kim’s robot is a more enlightened beast. Named STAR, for Smart Tissue Autonomous Robot, the bot has preprogrammed surgical knowledge and hefty cameras that let it see and react to the environment. Recently, STAR stitched up soft tissue in a living animal — a first for a machine. The bot even outperformed human surgeons on some measures, Kim and colleagues reported in May in Science Translational Medicine.
Severed pig intestines sewed up in the lab by STAR tended to leak less than did intestines fixed by humans using da Vinci, laparoscopic tools or sewing by hand. When researchers held the intestines under water and inflated them with air, it took nearly double the pressure for the STAR-repaired tissue to spring a leak compared with intestines patched up by humans.
Kim credits STAR’s even stitches for the win. “It’s more consistent,” he says. “That’s the secret sauce.”
To keep track of its position on tissue, STAR uses near-infrared fluorescent imaging (like night vision goggles) to follow glowing dots marked by a person. To orient itself in space, STAR uses a 3-D camera with multiple lenses.
Then the robot taps into its surgical knowledge to figure out where to place a stitch. In the experiment reported in May, humans were still in the loop: STAR would await an OK before firing a stitch in a tricky spot, and an assistant helped keep the thread from tangling (a task commonly required in human-led surgeries too).

Soon, STAR may be more self-sufficient. In late November, Kim plans to test a version of his machine with two robotic arms to replace the human assistant. He would also like to give STAR a few more superhuman senses, such as gauging blood flow and detecting subsurface structures, the way a submarine pings an underwater shipwreck.
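A heavily simplified sketch of the marker-following step, with invented thresholds and spacing: find the glowing dots in a near-infrared image, then plan evenly spaced stitch targets along the line between them. STAR's real pipeline (3-D imaging, moving tissue, suture planning) is far richer than this.

```python
# Sketch: locate fluorescent markers, plan evenly spaced stitch targets.
import numpy as np

def find_markers(ir_image, thresh=0.8):
    # Bright pixels in the near-infrared image are the glowing dots.
    ys, xs = np.nonzero(ir_image > thresh)
    return np.column_stack([xs, ys]).astype(float)

def plan_sutures(start, end, pitch=5.0):
    # Stitches at a fixed pitch along the line between two markers.
    n = max(int(np.linalg.norm(end - start) // pitch), 1)
    return [start + (end - start) * i / n for i in range(n + 1)]

ir = np.zeros((100, 100))
ir[50, 10] = ir[50, 90] = 1.0         # two fluorescent dots on the "tissue"
m = find_markers(ir)
targets = plan_sutures(m[0], m[1])
print(len(targets), "stitch targets from", m[0], "to", m[1])
```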
One day, Kim says, such technology could essentially put a world-class surgeon in every hospital, “available anyplace, anytime.”
Santos sees a future, 10 to 20 years from now perhaps, where humans and robots collaborate seamlessly — more like coworkers than master and slave. Robots will need all of their senses to take part, she says. They might not be the artificially intelligent androids of the movies, like Ex Machina’s cunning humanoid Ava. But like humans, intelligent, autonomous machines will have to learn the limits and capabilities of their bodies. They’ll have to learn how to move through the world on their own.