Staphylococcal infections — especially rampant in hospitals and responsible for … some fatal disorders — may be virtually stamped out. Researchers … have extracted teichoic acid from the bacteria’s cell wall and used it to protect groups of mice from subsequent massive doses of virulent staph organisms. — Science News, October 29, 1966
UPDATE Staphylococcus aureus has not been conquered. As antibiotic resistance grows, the pressure is on to find ways to stop the deadly microbe. A vaccine that targets S. aureus’ various routes of infection is being tested in patients undergoing back surgery. Ideally, doctors would use the vaccine to protect hospital patients and people with weakened immune systems. Of several vaccines in development, this one is the furthest along. Meanwhile, a natural antibiotic recently found in human noses may lead to drugs that target antibiotic-resistant staph (SN: 8/20/16, p. 7).
Crucial immune system proteins that make it harder for viruses to replicate might also help the attackers avoid detection, three new studies suggest. When faced with certain viruses, the proteins can set off a cascade of cell-to-cell messages that destroy antibody-producing immune cells. With those virus-fighting cells depleted, it’s easier for the invader to persist inside the host’s body.
The finding begins to explain a longstanding conundrum: how certain chronic viral infections can dodge the immune system’s antibody response, says David Brooks, an immunologist at the University of Toronto not involved in the research. The new studies, all published October 21 in Science Immunology, pin the blame on the same set of proteins: type 1 interferons. Normally, type 1 interferons protect the body from viral siege. They snap into action when a virus infects cells, helping to activate other parts of the immune system. And they make cells less hospitable to viruses so that the foreign invaders can’t replicate as easily.
But in three separate studies, scientists tracked mice’s immune response when infected with lymphocytic choriomeningitis virus, or LCMV. In each case, type 1 interferon proteins masterminded the loss of B cells, which produce antibodies specific to the invading virus. Normally, those antibodies latch on to the target virus, flagging it for destruction by other immune cells called T cells. With fewer B cells, the virus can evade capture for longer.
The proteins’ response “is driving the immune system to do something bad to itself,” says Dorian McGavern, an immunologist at the National Institute of Neurological Disorders and Stroke in Bethesda, Md., who led one of the studies.
The interferon proteins didn’t directly destroy the B cells; they worked through middlemen instead. These intermediaries differed depending on factors including the site of infection and how much of the virus the mice received. T cells were one intermediary. McGavern and his colleagues filmed T cells actively destroying their B cell compatriots under the direction of the interferon proteins. When the scientists deleted those T cells, the B cells didn’t die off even though the interferons were still hanging around. Another study found that the interferons were sending messages not just through T cells, but via a cadre of other immune cells, too. Those messages told B cells to morph into cells that rapidly produce antibodies for the virus. But those cells die off within a few days instead of mounting a longer-term defense.
That strategy could be helpful for a short-term infection, but less successful against a chronic one, says Daniel Pinschewer, a virologist at the University of Basel in Switzerland who led that study. Throwing the entire defense arsenal at the virus all at once leaves the immune system shorthanded later on.
But interferon activity could prolong even short-term viral infections, a third study showed. There, scientists injected lower doses of LCMV into mice’s footpads and used high-powered microscopes to watch the infection play out in the lymph nodes. In this case, the interferon stifled B cells by working through inflammatory monocytes, white blood cells that rush to infection sites.
“The net effect is beneficial for the virus,” says Matteo Iannacone, an immunologist at San Raffaele Scientific Institute in Milan who led the third study. Sticking around even a few days longer gives the virus more time to spread to new hosts.
Since all three studies looked at the same virus, it’s not yet clear whether the mechanism extends to other viral infections. That’s a target for future research, Iannacone says. But Brooks thinks it’s likely that other viruses that dampen antibody response (like HIV and hepatitis C) could also be exploiting type 1 interferons.
In a high-ceilinged laboratory at Children’s National Health System in Washington, D.C., a gleaming white robot stitches up pig intestines.
The thin pink tissue dangles like a deflated balloon from a sturdy plastic loop. Two bulky cameras watch from above as the bot weaves green thread in and out, slowly sewing together two sections. Like an experienced human surgeon, the robot places each suture deftly, precisely — and with intelligence.
Or something close to it. For robots, artificial intelligence means more than just “brains.” Sure, computers can learn how to recognize faces or beat humans in strategy games. But the body matters too. In humans, eyes and ears and skin pick up cues from the environment, like the glow of a campfire or the patter of falling raindrops. People use these cues to take action: to dodge a wayward spark or huddle close under an umbrella.
Part of intelligence is “walking around and picking things up and opening doors and stuff,” says Cornell computer scientist Bart Selman. It “has to do with our perception and our physical being.” For machines to function fully on their own, without humans calling the shots, getting physical is essential. Today’s robots aren’t there yet — not even close — but amping up the senses could change that.
“If we’re going to have robots in the world, in our home, interacting with us and exploring the environment, they absolutely have to have sensing,” says Stanford roboticist Mark Cutkosky. He and a group of like-minded scientists are making sensors for robotic feet and fingers and skin — and are even helping robots learn how to use their bodies, like babies first grasping how to squeeze a parent’s finger.
The goal is to build robots that can make decisions based on what they’re sensing around them — robots that can gauge the force needed to push open a door or figure out how to step carefully on a slick sidewalk. Eventually, such robots could work like humans, perhaps even caring for the elderly. Such machines of the future are a far cry from that shiny white surgery robot in the D.C. lab, essentially an arm atop a cart. But today’s fledgling sensing robots mark the slow awakening of machines to the world around them, and themselves.
“By adding just a little bit of awareness to the machine,” says pediatric surgeon Peter Kim of the children’s hospital, “there’s a huge amount of benefit to gain.”
Born to run
The pint-size machine running around Stanford’s campus doesn’t look especially self-aware. It’s a rugged sort of robot, with stacked circuit boards and bundles of colorful wires loaded on its back. It scampers over grass, gravel, asphalt — any surface roboticist Alice Wu can find.
For weeks this summer, Wu took the traveling bot outside, placed it on the ground, and then, “I let her run,” she says. The bot isn’t that fast (its top speed is about half a meter per second), and it doesn’t go far, but Wu is trying to give it something special: a sense of touch. Wu calls the bot SAIL-R, for Sensorized Adaptive Intelligence Legged Robot.
Fixed to each of its six C-shaped legs are tactile sensors that can tell how hard the robot hits the ground. Most robots don’t have tactile sensing on their feet, Wu says. “When I first got into this, I thought that was crazy. So much effort is focused on hands and arms.” But feet make contact with the world too.
Feeling the ground, in fact, is crucial for walking. Most people tailor their gait to different surfaces without even thinking, feet pounding the ground on a run over grass, or slowing down on a street glazed with ice. Wu wants to make robots that, like humans, sense the surface they’re on and adjust their walk accordingly.
Walking robots have already ventured out into the world: Last year, a competition sponsored by DARPA, the Department of Defense agency that funds advanced research, showcased a lineup of semiautonomous robots that walked over rubble and even climbed stairs (SN: 12/13/14, p. 16). But they didn’t do it on their own; hidden away in control rooms, human operators pulled the strings.
One day, Wu says, machines could feel the ground and learn for themselves the most efficient way to walk. But that’s a tall order. For one, researchers can’t simply glue the delicate sensors designed for a robot’s hands onto its feet. “The feet are literally whacking the sensor against the ground very, very hard,” Wu says. “It’s unforgiving contact.”
That’s the challenge with tactile sensing in general, says Cutkosky, Wu’s adviser at Stanford. Scientists have to build sensors that are tough, that can survive impact and abrasion and bending and water. It’s one reason physical intelligence has advanced so slowly, he says.
“You can’t just feed a supercomputer thousands of training examples,” Cutkosky says, the way AlphaGo learned how to play Go (SN Online: 3/15/16). “You actually have to build things that interact with the world.” Cutkosky would know. His lab is famous for building such machines: tiny “microTugs” that can team up, antlike, to pull a car, and a gecko-inspired “Stickybot” that climbs walls. Tactile sensing could make these and other robots smarter.
Wu and colleagues presented a new sensor at IROS 2015, a meeting on intelligent robots and systems in Hamburg, Germany. The sensor, a sandwich of rubber and circuit boards, can measure adhesion forces — what a climbing robot uses to stick to walls. Theoretically, such a device could tell a bot if its feet were slipping so it could adjust its grip to hang on. And because the postage stamp–sized sensor is tough, it might actually survive life on little robot feet.
Wu has used a similar sort of sensor on an indoor, two-legged bot, the predecessor to the six-legged SAIL-R. The indoor bot can successfully distinguish between hard, slippery, grassy and sandy surfaces more than 90 percent of the time, Wu reported in IEEE Robotics and Automation Letters in July.
That could be enough to keep a bot from falling. On a patch of ice, for example, “it would say, ‘Uh-oh, this feels kind of slippery. I need to slow down to a walk,’ ” Wu says.
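To make the idea concrete, here is a minimal sketch in Python of how a surface-aware gait controller along these lines might work. This is not Wu’s actual system: the two tactile features, the surface centroids and the speed table are all invented for illustration. A simple nearest-centroid rule guesses the surface from footpad impact data, and the robot slows down when the guess is “slippery.”

```python
# Hypothetical sketch of tactile surface classification for gait control.
# All numbers (feature centroids, speeds) are made up for illustration.

SURFACE_PROFILES = {
    # (mean impact force in newtons, force variability) -- invented centroids
    "hard":     (9.0, 0.5),
    "slippery": (4.0, 0.3),
    "grassy":   (5.5, 2.0),
    "sandy":    (3.0, 1.5),
}

GAIT_SPEED = {"hard": 0.5, "slippery": 0.1, "grassy": 0.3, "sandy": 0.2}  # m/s

def classify_surface(mean_force, force_var):
    """Nearest-centroid guess at the surface type from two tactile features."""
    def dist(profile):
        mf, fv = profile
        return (mean_force - mf) ** 2 + (force_var - fv) ** 2
    return min(SURFACE_PROFILES, key=lambda s: dist(SURFACE_PROFILES[s]))

def choose_speed(mean_force, force_var):
    """Pick a walking speed suited to the surface the feet are feeling."""
    surface = classify_surface(mean_force, force_var)
    return surface, GAIT_SPEED[surface]

# A soft, low-variability impact reads as slippery, so the bot slows down.
surface, speed = choose_speed(4.1, 0.35)
print(surface, speed)
```

In a real legged robot the features would come from the force sensors on each footfall and the classifier would be trained, as Wu’s was, on labeled runs over known surfaces.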
Ideally, Cutkosky says, robots should be covered with tactile sensors — just like human skin. But scientists are still figuring out how a machine would deal with the resulting deluge of information.
Smart skin
Even someone sitting (nearly) motionless at a desk in a quiet, temperature-controlled office is bombarded with information from the senses.
Fluorescent lights flutter, air conditioning units hum and the tactile signals are too numerous to count. Fingertips touch computer keys, feet press the floor, forearms rest on the desk. If people couldn’t tune out some of the “noise” picked up by their skin, it would be total sensory overload.
“You have millions of tactile sensors, but you don’t sit there and say, ‘OK, what’s going on with my millions of tactile sensors,’ ” says Nikolaus Correll, a roboticist at the University of Colorado Boulder. Rather, the brain gets a filtered message, more of a big-picture view.
That simplified strategy may be a winner for robotic skin, too. Instead of sending every last bit of sensing data to a centralized robotic brain, the skin should do some of the computing itself, says Correll, who made the case for such “smart” materials in Science in 2015.
“When something interesting happens, [the skin] could report to the brain,” Correll says. Like human skin, artificial skin could take all the vibration info received from a nudge, or a tap to the shoulder, and translate it into a simpler message for the brain: “The skin could say, ‘I was tapped or rubbed or patted at this position,’ ” he says. That way, the robot’s brain doesn’t have to constantly process a flood of vibration data from the skin’s sensors. It’s called distributed information processing.
Correll and Colorado colleague Dana Hughes tested the idea with a stretchy square of rubbery skin mounted on the back of an industrial robot named Baxter. Throughout the skin, they placed 10 vibration sensors paired with 10 tiny computers. Then the team trained the computers to recognize different textures by rubbing patches of cotton, cardboard, sandpaper and other materials on the skin.
Their sensor/computer duo was able to distinguish between 15 textures about 70 percent of the time, Hughes and Correll reported in Bioinspiration & Biomimetics in 2015. And that’s with no centralized “brain” at all. That kind of touch discrimination brings the robotic skin a step closer to human skin. Making robotic parts with such sensing abilities “will make it much easier to build a dexterous, capable robot,” Correll says.
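The distributed idea can be sketched in a few lines of Python. This is not the Colorado team’s code: the features, thresholds and texture labels below are invented. The point is the architecture — each skin patch does its own computing and sends the central brain only a short event message, never the raw vibration stream.

```python
# Hypothetical sketch of distributed information processing in robotic skin.
# Each patch is a sensor/computer node; thresholds and labels are invented.

import statistics

class SkinPatch:
    """One sensor/computer node embedded in the skin."""
    def __init__(self, position):
        self.position = position

    def process(self, vibration_samples):
        # Local computation: reduce the raw stream to two summary features.
        energy = sum(v * v for v in vibration_samples)
        roughness = statistics.pstdev(vibration_samples)
        if energy < 0.01:          # nothing interesting happened: stay silent
            return None
        texture = "rough" if roughness > 0.5 else "smooth"
        # Only this short summary travels to the central brain.
        return {"position": self.position, "texture": texture}

patch = SkinPatch(position=(3, 7))
print(patch.process([0.0] * 100))             # quiet skin reports nothing
print(patch.process([0.9, -0.8, 1.0, -0.7]))  # a rub triggers one short event
```

With thousands of patches, the brain sees a trickle of tap-and-rub events instead of millions of raw sensor readings — the filtered, big-picture view Correll describes.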
And with smart skin, robots could invest more brainpower in the big stuff, what humans begin learning at birth — how to use their own bodies.
Zip it
In UCLA’s Biomechatronics Lab, a green-fingered robot just figured out how to use its body for one seemingly simple task: closing a plastic bag.
Two deformable finger pads pinch the blue seal with steady pressure (the enclosed Cheerios barely tremble) as the robot slides its hand slowly along the plastic zipper. After about two minutes, the fingers reach the end, closing the bag. It’s deceptively difficult. The bag’s shape changes as it’s manipulated — tough for robotic fingers to grasp. It’s also transparent — not easily detectable by computer vision. You can’t just tell the robot to move its fingertips horizontally along the zipper, says Veronica Santos, a roboticist at UCLA. She and colleague Randall Hellman, a mechanical engineer, tried that. It’s too hard to predict how the bag will bend and flex. “It’s a constant moving target,” Santos says.
So the researchers let the robot learn how to close the bag itself.
First they had the bot randomly move its fingers along the zipper, while collecting data from sensors in the fingertips — how the skin deforms, what vibrations it picks up, how fluid pressure in the fingertips changes. Santos and Hellman also taught the robot where the zipper was in relation to the finger pads. The sweet spot is smack dab in the middle, Santos says.
Then the team used a type of algorithm called reinforcement learning to teach the robot how to close the bag. “This is the exciting part,” Santos says. The program gives the robot “points” for keeping the zipper in the fingers’ sweet spot while moving along the bag.
“If good stuff happens, it gets rewarded,” Santos says. When the bot holds the zipper near the center of the finger pads, she explains, “it says, ‘Hey, I get points for that, so those are good things to do.’ ”
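The points-for-good-behavior idea can be shown with a toy reinforcement-learning problem. This sketch is not Santos and Hellman’s algorithm — the one-dimensional state space, the reward values and the tabular Q-learning setup are all invented for illustration. The agent earns a point whenever the zipper sits in the sweet spot at the center of its finger pad, and over many tries the centering actions accumulate the highest value.

```python
# Toy Q-learning sketch of reward-driven learning; all numbers invented.

import random

random.seed(0)

STATES = [-2, -1, 0, 1, 2]   # zipper offset from the finger pad's center
ACTIONS = [-1, 0, 1]         # shift the finger pad left / hold / right

def step(state, action):
    """Move the pad; reward the sweet spot, penalize drifting off-center."""
    new_state = max(-2, min(2, state + action))
    reward = 1.0 if new_state == 0 else -abs(new_state)
    return new_state, reward

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(2000):
    s = random.choice(STATES)
    for _ in range(10):
        # Mostly exploit the best-known action, sometimes explore.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda a_: Q[(s, a_)]))
        s2, r = step(s, a)
        best_next = max(Q[(s2, a_)] for a_ in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy pushes the zipper back toward the center from any offset.
policy = {s: max(ACTIONS, key=lambda a_: Q[(s, a_)]) for s in STATES}
print(policy)
```

The real robot’s states were far richer — skin deformation, vibration and fingertip fluid pressure — but the loop is the same: act, feel the result, collect points, and prefer what scored well.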
She and Hellman reported successful bag closing in April at the IEEE Haptics Symposium in Philadelphia. “The robot actually learned!” Santos says. And in a way that would have been hard to program.
It’s like teaching someone how to swing a tennis racket, she says. “I can tell you what you’re supposed to do, and I can tell you what it might feel like.” But to smash a ball across a net, “you’re going to have to do it and feel it yourself.”
Learning by doing may be the way to get robots to tackle all sorts of complicated tasks, or simple tasks in complicated situations. The crux is embodiment, Santos says, or the robot’s awareness that each of its actions brings an ever-shifting kaleidoscope of sensations.
Smooth operator
Awareness of the sights of surgery, and what to make of them, is instrumental for a human or machine trying to stitch up soft tissue.
Skin, muscle and organs are difficult to work with, says Kim, the surgeon at Children’s National Health System. “You’re trying to operate on shiny, glistening, blood-covered tissues,” he says. “They’re different shades of pink and they’re moving around all the time.”
Surgeons adjust their actions in response to what they see: a twisting bit of tissue, for example, or a spurt of fluid. Machines typically can’t gauge their location amid slippery organs or act fast when soft tissues tear. Robots needed an easier place to start. So, in 1992, surgery bots began working on bones: rigid material that tends to stay in one place.
In 2000, the U.S. Food and Drug Administration approved the first surgery robot for soft tissue: the da Vinci Surgical System, which looks like a prehistoric version of Kim’s surgery machine. Da Vinci is about as wide as a king-sized mattress and reaches 6 feet tall in places, with three mechanical arms tipped with disposable tools. Nearby, a bulky gray cart holds two silver hand controls for human surgeons.
In the cart’s backless seat, a surgeon would lean forward into a partially enclosed pod, hands gripping controls, feet working pipe organ–like pedals. To move da Vinci’s surgical tools, the surgeon would manipulate the controls, like those claw cranes kids use to pick up stuffed animals at arcades. “It’s what we call master/slave,” Kim says. “Essentially, the robot does exactly what the surgeon does.”
Da Vinci can manipulate tiny tools and keep incisions small, but it’s basically a power tool. “It has no awareness,” Kim says, “no intelligence.” The visual inputs of surgery are processed by human brains, not a computer. Kim’s robot is a more enlightened beast. Named STAR, for Smart Tissue Autonomous Robot, the bot has preprogrammed surgical knowledge and hefty cameras that let it see and react to the environment. Recently, STAR stitched up soft tissue in a living animal — a first for a machine. The bot even outperformed human surgeons on some measures, Kim and colleagues reported in May in Science Translational Medicine.
Severed pig intestines sewed up in the lab by STAR tended to leak less than did intestines fixed by humans using da Vinci, laparoscopic tools or sewing by hand. When researchers held the intestines under water and inflated them with air, it took nearly double the pressure for the STAR-repaired tissue to spring a leak compared with intestines patched up by humans.
Kim credits STAR’s even stitches for the win. “It’s more consistent,” he says. “That’s the secret sauce.”
To keep track of its position on tissue, STAR uses near-infrared fluorescent imaging (like night vision goggles) to follow glowing dots marked by a person. To orient itself in space, STAR uses a 3-D camera with multiple lenses.
Then the robot taps into its surgical knowledge to figure out where to place a stitch. In the experiment reported in May, humans were still in the loop: STAR would await an OK before firing a stitch in a tricky spot, and an assistant helped keep the thread from tangling (a task commonly required in human-led surgeries too). Soon, STAR may be more self-sufficient. In late November, Kim plans to test a version of his machine with two robotic arms to replace the human assistant. He would also like to give STAR a few more superhuman senses, such as gauging blood flow and detecting subsurface structures, the way a submarine pings an underwater shipwreck.
One day, Kim says, such technology could essentially put a world-class surgeon in every hospital, “available anyplace, anytime.”
Santos sees a future, 10 to 20 years from now perhaps, where humans and robots collaborate seamlessly — more like coworkers than master and slave. Robots will need all of their senses to take part, she says. They might not be the artificially intelligent androids of the movies, like Ex Machina’s cunning humanoid Ava. But like humans, intelligent, autonomous machines will have to learn the limits and capabilities of their bodies. They’ll have to learn how to move through the world on their own.
Joining a gang doesn’t necessarily make a protein a killer, a new study suggests. This clumping gets dangerous only under certain circumstances.
A normally innocuous protein can be engineered to clump into fibers similar to those formed by proteins involved in Alzheimer’s, Parkinson’s and brain-wasting prion diseases such as Creutzfeldt-Jakob disease, researchers report in the Nov. 11 Science. Cells that rely on the protein’s normal function for survival die when the proteins glom together. But cells that don’t need the protein are unharmed by the gang activity, the researchers discovered. The finding may shed light on why clumping proteins that lead to degenerative brain diseases kill some cells, but leave others untouched. Clumpy proteins known as prions or amyloids have been implicated in many nerve-cell-killing diseases (SN: 8/16/08, p. 20). Such proteins are twisted forms of normal proteins that can make other normal copies of the protein go rogue, too. The contorted proteins band together, killing brain cells and forming large clusters or plaques.
Scientists don’t fully understand why these mobs resort to violence or how they kill cells. Part of the difficulty in reconstructing the cells’ murder is that researchers aren’t sure what jobs, if any, many of the proteins normally perform (SN: 2/13/10, p. 17).
A team led by biophysicists Frederic Rousseau and Joost Schymkowitz of Catholic University Leuven in Belgium came up with a new way to dissect the problem. They started with a protein for which they already knew the function and engineered it to clump. That protein, vascular endothelial growth factor receptor 2, or VEGFR2, is involved in blood vessel growth. Rousseau and colleagues clipped off a portion of the protein that causes it to cluster with other proteins, creating an artificial amyloid.
Masses of the protein fragment, nicknamed vascin, could aggregate with and block the normal activity of VEGFR2, the researchers found. When the researchers added vascin to human umbilical vein cells grown in a lab dish, the cells died because VEGFR2 could no longer transmit hormone signals the cells need to survive. But human embryonic kidney cells and human bone cancer cells remained healthy. Those results suggest that some forms of clumpy proteins may not be generically toxic to cells, says biophysicist Priyanka Narayan of the Whitehead Institute for Biomedical Research in Cambridge, Mass. Instead, rogue clumpy proteins may target specific proteins and kill only cells that rely on those proteins for survival.
Those findings may also indicate that prion and amyloid proteins, such as Alzheimer’s nerve-killing amyloid-beta, normally play important roles in some brain cells. Those cells would be the ones vulnerable to attack from the clumpy proteins. The newly engineered ready-to-rumble protein may open new ways to inactivate specific proteins in order to fight cancer and other diseases, says Salvador Ventura, a biophysicist at the Autonomous University of Barcelona. For instance, synthetic amyloids of overactive cancer proteins could gang up and shut down the problem protein, killing the tumor.
Artificial amyloids might also be used to screen potential drugs for anticlumping activity that could be used to combat brain-degenerating diseases, Rousseau suggests.
The stories of dinosaurs’ lives may be written in fossilized pigments, but scientists are still wrangling over how to read them.
In September, paleontologists deduced a dinosaur’s habitat from remnants of melanosomes, pigment structures in the skin. Psittacosaurus, a speckled dinosaur about the size of a golden retriever, had a camouflaging pattern that may have helped it hide in forests, Jakob Vinther and colleagues say. The dinosaur “was very much on the bottom of the food chain,” says Vinther, of the University of Bristol in England. “It needed to be inconspicuous.” Identifying ancient pigments can open up a wide new world of dinosaur biology and answer all sorts of lifestyle questions, says zoologist Hannah Rowland of the University of Cambridge. “You might be able to take a fossil … and infer a dinosaur’s life history just from its pigment patterns,” she says. “That’s the most exciting thing.”
Not so fast, says paleontologist Mary Schweitzer of North Carolina State University in Raleigh. Evidence for ancient pigments can be ambiguous. In some cases, microscopic structures that appear to be melanosomes may actually be microbes, she says. “Both hypotheses remain viable until one is shot down with data.” Until then, she says, inferring dinosaur lifestyles from alleged ancient pigments is impossible.
Vinther’s work, published in the Sept. 26 Current Biology, is the latest in a long-simmering debate in the field of paleo color, the study of fossil pigments and what they can reveal about ancient animals. Disputes over his team’s findings and what’s needed to clearly identify fossilized melanosomes point to current pitfalls of the field.
But the promise is clear: Paleo color could paint a vivid picture of a dinosaur’s life, offering clues about behavior, habitat and evolution.
“This is a crucial new piece in the puzzle of how the past looked,” Vinther says.
Color me dino: Psittacosaurus (model shown) was a parrot-beaked herbivore about the size of a large dog. Researchers found signs of pigmentation (black specks) on its tail region, back leg and elsewhere that hint at its habitat.
A field emerges
Scientists have been puzzling over animals of the past for centuries, but eight years ago, paleontology got a wake-up call. That’s when Vinther and colleagues proposed that microscopic structures in a roughly 125-million-year-old fossil feather were actually a type of melanosome (SN: 8/2/08, p. 10). These pigment pouches rest inside pigment cells and, in this particular fossil feather, might have delivered a blackish hue, like a blackbird’s.
Scientists had noticed similar structures inside fossilized skin and feathers since the early 1980s. But people assumed that these structures were remnants of bacteria — perhaps decomposers that feasted on the dead animals, says paleontologist Martin Sander of the University of Bonn in Germany.
The new, colorful interpretation sparked a flurry of research, and scientists have since spotted what appear to be melanosomes in all kinds of fossilized animals. Paleontology, in fact, is now awash in colors and patterns. Pigment pods may have painted reddish-brown speckles on the face of a Late Jurassic theropod, brushed chestnut stripes on a long-tailed dino from China and made the plumage of a four-winged dinosaur called Microraptor iridescent. That shimmery dinosaur “probably had a weak, glossy iridescence all over its body,” says evolutionary biologist Matthew Shawkey of Ghent University in Belgium. His team deduced Microraptor’s color from the shape of its melanosomes. Modern melanosomes generally carry a mixture of two melanin pigments: dark brown-black eumelanin and red-yellow pheomelanin. Scientists have linked color in mammals and birds to melanosome shape — a meatball shape for reddish brown hues, for example, and a sausage shape for darker colors.
In iridescent feathers, melanosomes tend to be even thinner, Shawkey says. Microraptor’s melanosomes looked like skinny sausages — similar to those seen in the feathers of modern crows and ravens, says Shawkey, who reported the findings with Vinther and colleagues in Science in 2012 (SN Online: 3/9/12).
Three years later, Vinther laid out the case for inferring color — and ancient histories — from fossilized pigments in a review in Bioessays. Not only can the distinctive shapes of melanosomes offer clues, he noted, but chemical tests can help detect the presence of melanin itself. Finding this pigment in fossils, he argued, puts the old bacteria hypothesis to rest.
Schweitzer and colleagues disagreed with Vinther’s take in a review published in Bioessays later in 2015. Researchers need to be cautious when deducing the hues of extinct animals, the scientists wrote. Any melanosome look-alikes in fossilized feathers or skin could actually be microbes. After all, microbes are everywhere. “These animals died in an environment that was not sterile and free from microbes,” Schweitzer says. “Think about it. If you take a piece of chicken and throw it out in your backyard, how long does it take for microbes to overgrow that chicken?”
The tiny organisms are hardy, too. Both microbes and the sticky biofilms they form are preserved in the fossil record. And, Schweitzer says, microbes and melanosomes overlap completely in shape and size, which makes the two tough to tell apart. What’s more, some microbes actually make melanin themselves; detecting the pigment in a fossil is not a rock-solid sign that the ancient animal was black, brown or freckled.
It’s not that Schweitzer or Bioessays coauthor Johan Lindgren, a geologist at Lund University in Sweden, doubt that melanosomes can leave traces in the fossil record. The issue, Lindgren says, is that not all round structures you find are melanosomes.
Chemical tests could help distinguish the two. Bacteria, for example, leave behind traces that can be identified with pyrolysis gas chromatography-mass spectrometry. But that requires samples to be vaporized. “It can mean destroying much of what you are trying to study,” says geochemist Roy Wogelius of the University of Manchester in England. “So it’s not always possible.”
Vinther’s new work isn’t likely to settle the debate. In fact, people were arguing both sides in October at a meeting of the Society of Vertebrate Paleontology in Salt Lake City.
Arindam Roy, a Bristol colleague of Vinther’s, reported size differences between fossilized melanosomes and bacteria growing on decaying chicken feathers in the lab. Alison Moyer, an N.C. State colleague of Schweitzer’s, said that looks weren’t enough. Finding keratin, a protein that typically surrounds melanosomes, could serve as evidence for pigments in fossils.
From color to camouflage The fossil described in Vinther’s new paper is “spectacular,” Schweitzer says. “It’s got skin all over the place. I can’t think of too many dinosaur specimens that are preserved like this.”
The dinosaur lies on its back, flattened in a slab of volcanic rock. Skin covers a completely intact skeleton, and dozens of long bristles poke from the tail. Psittacosaurus, an herbivore that lived some 120 million years ago, walked on two legs and would have reached about half a meter in height. “It would have been a supercute animal,” Vinther says. “It’s got this wide face and looks a little bit like E.T.”
Black material speckles the dinosaur’s body, tail and face. Vinther believes the material is the ancient remains of pigment. His team examined samples chipped from the fossil and saw what he considers the telltale orbs of melanosomes — mostly impressions in the rock but also some microbodies, the 3-D structures themselves.
Based on the dinosaur’s pigment patterns, it would have had a dark back that faded to a lighter belly. That type of coloring, called countershading, shows up in animals from penguins to fish and may act as a form of camouflage. It lightens parts of the body typically in shadow, and darkens parts typically exposed to light. “If you want to hide, it makes sense to try and obliterate those shadows,” Rowland says.
To test the idea, the researchers photographed a uniformly colored model of the dinosaur in open sunlight and in shade, predicting the countershading pattern that would best cancel shadows in each setting. Their prediction for diffuse light matched the model painted like Psittacosaurus. “It’s like what we see in forest-living animals,” Vinther says. “This thing was camouflaged.”
Lingering doubts
Going from fossil to forest may be more of a leap than a step, other scientists suggest.
Psittacosaurus’ skin very well may contain ancient pigments, Wogelius says. “I don’t think it’s a crazy idea.” But, he adds, of Vinther’s group: “I don’t think they’ve proved what they claim.”
Vinther’s team, for example, used just four tiny fossil samples to extrapolate the coloring of the whole dinosaur. “I think it’s a bit of an overreach,” Wogelius says.
Schweitzer also notes that the specimen was varnished, presumably to protect the bones and soft tissues. It happened before Vinther and colleagues got their hands on the dinosaur and makes it impossible to perform the chemical tests that would bolster the claim for pigments. “Varnish is horribly destructive to fossils,” she says. “It totally ruins the specimen for other types of analysis.”
Vinther argues that his team has chemically analyzed other fossils and found evidence of melanin — not bacteria. The microbodies in those fossils look just like the ones in Psittacosaurus, he says.
Vinther’s team also saw evidence of just one kind of microbody, and it had a distinct round shape. If the structures were actually bacteria, he says, you’d expect to see a whole range of shapes and sizes. “Some of them would be shaped like corkscrews, some would have flagella, some would be humongous, some would be tiny.”
That’s the tricky part with bacteria, counters Lindgren. “In some cases you can have a huge consortium, but in other cases you can have one single type.” Vinther’s interpretation has its supporters. “I was skeptical at first,” Sander says, “but now there’s been such an array of these little bodies that it’s pretty clear that at least some of them are not bacteria.” Despite some continuing controversy, Sander says many paleontologists now accept that microstructures in fossils may be melanosomes.
Additional research, though, “would help the entire community,” he says, “so that there are no longer any lingering doubts.”
Along with chemical tests, Schweitzer suggests, researchers could try transmission electron microscopy, a technique that blasts an electron beam through a thinly sliced sample. With TEM, melanosomes appear as black blobs. Bacteria tend to look different — in some cases, more like fried eggs.
Shawkey, for one, is looking to chemistry. In a paper published online November 14 in Palaeontology, his team used a technique called Raman spectroscopy to help build a case for feather color in a bird that died some 120 million years ago. In the feathers, the researchers spotted the skinny sausages of iridescent melanosomes and chemical signs of the pigment eumelanin. Shawkey thinks the chemical evidence could help “head off any criticism that we might encounter.”
Working through the field’s snags, paleontologists might come together to fill in the hues and tints, and potentially the habits and habitats, of ancient animals that until recently had been known primarily by their bones.
A pair of simultaneous nuclear explosions, one more than 1.6 miles underground and the other 1,000 feet above it, have been proposed as a way to extract huge quantities of natural gas from subterranean rock. Each blast would be … about 2.5 times the size of the bomb used at Hiroshima. By breaking up tight gas-bearing rock formations, a flow of presently inaccessible gas may be made available.… A single-blast experiment, called Project Gasbuggy, is already planned. — Science News, December 17, 1966

UPDATE On December 10, 1967, Project Gasbuggy went ahead, with a 29-kiloton nuclear explosion deep underground in northwestern New Mexico. The blast released natural gas, but the gas was radioactive. The area is still regularly monitored for radioactive contamination. Today, natural gas trapped below Earth’s surface is often extracted via fracking, which breaks up rock using pressurized fluid (SN: 9/8/12, p. 20). Though less extreme than nuclear blasts, fracking has stoked fears over its potential links to drinking water contamination and earthquakes.
With virtual reality finally hitting the consumer market this year, VR headsets are bound to make their way onto a lot of holiday shopping lists. But new research suggests these gifts could also give some of their recipients motion sickness — especially if they’re women.
In a test of people playing one virtual reality game using an Oculus Rift headset, more than half felt sick within 15 minutes, a team of scientists at the University of Minnesota in Minneapolis reports online December 3 in Experimental Brain Research. Among women, nearly four out of five felt sick.

So-called VR sickness, also known as simulator sickness or cybersickness, has been recognized since the 1980s, when the U.S. military noticed that flight simulators were nauseating its pilots. In recent years, anecdotal reports began trickling in about the new generation of head-mounted virtual reality displays making people sick. Now, with VR making its way into people’s homes, there’s a steady stream of claims of VR sickness.
“It’s a high rate of people that you put in [VR headsets] that are going to experience some level of symptoms,” says Eric Muth, an experimental psychologist at Clemson University in South Carolina with expertise in motion sickness. “It’s going to mute the ‘Wheee!’ factor.”
Oculus, which Facebook bought for $2 billion in 2014, released its Rift headset in March. The company declined to comment on the new research but says it has made progress in making the virtual reality experience comfortable for most people, and that developers are getting better at creating VR content. All approved games and apps get a comfort rating based on things like the type of movements involved, and Oculus recommends starting slow and taking breaks. Still, some users report getting sick.
The new study confirms these reports. A team led by Thomas Stoffregen, a kinesiologist who has been studying motion sickness for decades, tested the susceptibility of two sets of 18 male and 18 female undergraduates during two different VR games using an Oculus Rift DK2 headset. The first game, which involved using head motions to roll a virtual marble through a maze, made 22 percent of the players feel sick within the 15 minutes they were asked to play.
Another 36 students played the horror game Affected, using a hand-held controller to navigate a creepy building. This time, 56 percent felt sick within 15 minutes. Fourteen of 18 women, nearly 78 percent, were affected, compared with just over 33 percent of men. Though the study tested only an Oculus Rift, other companies’ VR headsets based on similar technology may have similar issues.

This gender difference shows up in almost any situation that can cause motion sickness, like a moving car or a rocking boat. But Stoffregen says the disparity can’t be explained by the most widely accepted theory of motion sickness, which suggests that it’s caused by a mismatch between the motion your body is sensing and what your eyes are seeing, like when you read in a moving car. With VR, the theory goes, your eyes think you’re moving, but your body feels stationary, and this makes you feel sick.
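The proportions reported for the horror game work out as straightforward arithmetic. A quick sketch (sample sizes as described in the study; the 56 percent overall figure is rounded):

```python
# Checking the reported horror-game numbers: 18 women and 18 men played.
women_sick, n_women = 14, 18
n_men = 18
print(f"{women_sick / n_women:.1%}")  # 77.8% — "nearly 78 percent"

# 56 percent of all 36 players is about 20 people, leaving roughly
# 6 of the 18 men, or "just over 33 percent."
total_sick = round(0.56 * (n_women + n_men))
men_sick = total_sick - women_sick
print(men_sick, f"{men_sick / n_men:.1%}")  # 6 33.3%
```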
Stoffregen thinks motion sickness is instead caused by things that disrupt your balance, like a boat pitching over a wave. And if you try to stabilize your body in the virtual world you see — say, by leaning into a virtual turn — instead of in the physical world you’re in, you can lose stability.
Men and women are typically different shapes and sizes, so they differ in the subtle, subconscious movements that keep their bodies balanced, known as postural sway, Stoffregen says. This difference makes women more susceptible to motion sickness, he claims. For the new study, he measured participants’ balancing motions before they played the games and found a measurable difference in sway between those who reported feeling sick and those who didn’t.
Because motion sickness is a complicated set of symptoms, self-reporting by participants may not be a reliable way to measure it, Muth argues. And, he says, “I would say the science isn’t there yet to draw that conclusion” about gender bias, adding he’d like to see the result replicated with a larger group.
Even so, with VR potentially poised to jump from the gaming world into more mainstream aspects of society — Facebook CEO Mark Zuckerberg says he wants “a billion people on Facebook in virtual reality as soon as possible” — a gender disparity could become a real problem, especially if VR enters the workplace, Stoffregen says. “If it were only games, it wouldn’t matter, and nobody would care.”
It was barely more than half a century ago that the Nobel Prize–winning virologist Sir Frank Macfarlane Burnet mused about the demise of contagions. “To write about infectious disease,” he wrote in 1962, “is almost to write of something that has passed into history.”
If only. In the past several decades, over 300 infectious pathogens have either newly emerged or emerged in new places, causing a steady drumbeat of outbreaks and global pandemic scares.
Over the course of 2016, their exploits reached a crescendo. Just as the unprecedented outbreak of Ebola in West Africa was collapsing in early 2016, the World Health Organization declared Zika virus, newly erupted in the Americas, an international public health emergency. What would balloon into the largest outbreak of yellow fever in Angola in 30 years had just begun.

A few months later, scientists reported the just-discovered “superbug” mcr-1 gene in microbes collected from humans and pigs in the United States (SN Online: 5/27/16). The gene allows bacteria to resist the last-ditch antibiotic colistin, bringing us one step closer to a looming era of untreatable infections that would transform the practice of medicine. Its arrival presaged yet another unprecedented event: the convening of the United Nations General Assembly to consider the global problem of antibiotic-resistant bugs. It was only the fourth time over its 70-plus-year history that the assembly had been compelled to consider a health challenge. It’s “huge,” says University of Toronto epidemiologist David Fisman.

But even as UN delegates arrived for their meeting in New York City in September, another dreaded infection was making headlines again. The international community’s decades-long effort to end the transmission of polio had unraveled. In 2015, the WHO had declared Nigeria, one of the three last countries in the world that suffered the infection, free of wild polio. By August 2016, it was back. Millions would have to be vaccinated to keep the infection from establishing a foothold.

Three fundamental, interrelated factors fuel the microbial comeback, experts say. Across the globe, people are abandoning the countryside for life in the city, leading to rapid, unplanned urban expansions. In crowded conditions with limited access to health care and poor sanitation, pathogens like Ebola, Zika and influenza enjoy lush opportunities to spread.
With more infections mingling, there are also more opportunities for pathogens to share their virulence genes.
At the same time, global demand for meat has quadrupled over the last five decades by some estimates, driving the spread of industrial livestock farming techniques that can allow benign microbes to become more virulent. The use of colistin in livestock agriculture in China, for example, has been associated with the emergence of mcr-1, which was first discovered during routine surveillance of food animals there.

Genetic analyses suggest that siting factory farms full of chickens and pigs in proximity to wild waterfowl has played a role in the emergence of highly virulent strains of avian influenza. Crosses of Asian and North American strains of avian influenza caused the biggest outbreak of animal disease in U.S. history in 2014–2015. Containing that virus required the slaughter of nearly 50 million domesticated birds and cost over $950 million. Worryingly, some strains of avian influenza, such as H5N1, can infect humans.

The thickening blanket of carbon dioxide in the atmosphere resulting from booming populations of people and livestock provides yet another opportunity for pathogens to exploit. Scientists around the world have documented the movement of disease-carrying creatures including mosquitoes and ticks into new regions in association with newly amenable climatic conditions. Climate scientists predict range changes for bats and other animals as well. As the organisms spread into new ranges, they carry pathogens such as Ebola, Zika and Borrelia burgdorferi (a bacterium responsible for Lyme disease) along with them.

Since we can rarely develop drugs and vaccines fast enough to stanch the most dangerous waves of disease, early detection will be key moving forward. Researchers have developed a welter of models and pilot programs showing how environmental cues such as temperature and precipitation fluctuations and the insights of wildlife and livestock experts can help pinpoint pathogens with pandemic potential before they cause outbreaks in people.
Chlorophyll signatures, a proxy for the plankton concentrations that are associated with cholera bacteria, can be detected from satellite data, potentially providing advance notice of cholera outbreaks.
Even social media chatter can be helpful. Innovative financing methods, such as the World Bank’s recently launched Pandemic Emergency Financing Facility — a kind of global pandemic insurance policy funded by donor countries, the reinsurance market and the World Bank — could help ensure that resources to isolate and contain new pathogens are readily available, wherever they take hold. Right now, emerging disease expert Peter Daszak points out, “we wait for epidemics to emerge and then spend billions on developing vaccines and drugs.” The nonprofit organization that Daszak directs, EcoHealth Alliance, is one of a handful that instead aim to detect new pathogens at their source and proactively minimize the risk of their spread.
Burnet died in 1985, two years after the discovery of HIV, one of the first of the latest wave of new pathogens. His vision of a contagion-free society was that of a climber atop a foothill surrounded by peaks, mistakenly thinking he’d reached the summit. The challenge of surviving in a world of pathogens is far from over. In many ways, it’s only just begun.
SAN FRANCISCO — One climate doomsday scenario can be downgraded, new research suggests.
Decades of atmospheric measurements from a site in northern Alaska show that rapidly rising temperatures there have not significantly increased methane emissions from the neighboring permafrost-covered landscape, researchers reported December 15 at the American Geophysical Union’s fall meeting.
Some scientists feared that Arctic warming would unleash large amounts of methane, a potent greenhouse gas, into the atmosphere, worsening global warming. “The ticking time bomb of methane has clearly not manifested itself yet,” said study coauthor Colm Sweeney, an atmospheric scientist at the University of Colorado Boulder.

Emissions of carbon dioxide — a less potent greenhouse gas — did increase over that period, the researchers found. The CO2 rise “is still bad, it’s just not as bad” as a rise in methane, said Franz Meyer, a remote sensing scientist at the University of Alaska Fairbanks who was not involved in the research. The measurements were taken at just one site, though, so Meyer cautions against applying the results to the entire Arctic just yet. “This location might not be representative,” he said.
Across the Arctic, the top three meters of permafrost contain 2.5 times as much carbon as the CO2 released into the atmosphere by human activities since the start of the Industrial Revolution. As the Arctic rapidly warms, these thick layers of frozen soil will thaw and some of the carbon will be converted by hungry microbes into methane and CO2, studies that artificially warmed permafrost have suggested. That carbon will have a bigger impact on Earth’s climate as methane than it will as CO2. Over a 100-year period, a ton of methane will cause about 25 times as much warming as a ton of CO2.
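That factor-of-25 comparison is simple arithmetic. A minimal sketch, using the 100-year figure cited above (the constant and function names here are ours, for illustration):

```python
# Illustrative arithmetic: the same mass of permafrost carbon warms the
# planet far more over a century if released as methane rather than CO2.
GWP_METHANE_100YR = 25  # methane's warming per ton relative to CO2, per the article

def co2_equivalent_tons(tons_methane):
    """Tons of CO2 that would cause the same 100-year warming."""
    return tons_methane * GWP_METHANE_100YR

print(co2_equivalent_tons(1.0))  # 25.0
```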
A research station in Alaska’s northernmost city, Barrow, has been monitoring methane concentrations in the Arctic air since 1986 and CO2 since 1973. An air intake on a tower about 16.5 meters off the ground constantly sniffs the air, taking measurements. Barrow has warmed more than twice as fast as the rest of the Arctic over the last 29 years. This rapid warming “makes this region of the Arctic a great little incubation test to see what happens when we have everything heating up much faster,” Sweeney said.
Over the course of a year, methane concentrations in winds wafting from the nearby tundra rise and fall with temperatures, the Barrow data show. Since 1986, though, seasonal methane emissions have remained largely stable overall. But concentrations of CO2 in air coming from over the tundra, compared with over the nearby Arctic Ocean, have increased by about 0.02 parts per million per year since 1973, the researchers reported.
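The reported CO2 trend implies only a modest cumulative change. A back-of-envelope sketch, assuming a roughly linear trend over the 1973–2016 record described here:

```python
# Rough cumulative CO2 enhancement implied by the reported Barrow trend.
years = 2016 - 1973           # span of the CO2 record described above
rate_ppm_per_year = 0.02      # reported tundra-vs-ocean CO2 trend
total_ppm = years * rate_ppm_per_year
print(round(total_ppm, 2))    # 0.86 ppm over the full record
```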
The lack of an increase in methane concentrations could be caused by the thawing permafrost allowing water to escape and drying the Arctic soil, Sweeney proposed. This drying would limit the productivity of methane-producing microbes, potentially counteracting the effects of warming. Tracking Arctic wetness will be crucial for predicting future methane emissions in the region, said Susan Natali, an Arctic scientist at the Woods Hole Research Center in Falmouth, Mass. Studies have shown increased methane emissions from growing Arctic lakes, she points out. “We’re going to get both carbon dioxide and methane,” she said. “It depends on whether areas are getting wetter or drier.”
NEW ORLEANS, La. – Skin that mostly hangs loose around hagfishes proves handy for living through a shark attack or wriggling through a crevice.
The skin on hagfishes’ long, sausage-style bodies is attached in a line down the center of their backs and in flexible connections where glands release slime, explained Douglas Fudge of Chapman University in Orange, Calif. This floating skin easily slip-slides in various directions. A shark tooth can puncture the skin but not stab into the muscle below. And a shark attack is just one of the crises when loose skin can help, Fudge reported January 5 at the annual meeting of the Society for Integrative and Comparative Biology.

Hagfishes can fend off an attacking shark by quick-releasing a cloud of slime. Yet video of such events shows that a shark can land a bite before getting slimed. To figure out how hagfishes might survive such wounds, Fudge and colleagues used an indoor guillotine to drop a large mako shark tooth into hagfish carcasses. With the skin in its naturally loose state, the tooth readily punched through skin but slipped away from stabbing into the body of either the Atlantic (Myxine glutinosa) or Pacific (Eptatretus stoutii) hagfish species. But when the researchers glued the skin firmly to the hagfish muscle so the skin couldn’t slip, the tooth typically plunged into inner tissue. For comparison, the researchers tested lampreys, which are similarly tube-shaped but with skin well-fastened to their innards. When the guillotine dropped on them, the tooth often stabbed directly into flesh.

The finding makes sense to Theodore Uyeno of Valdosta State University in Georgia, whose laboratory work suggests how loose skin might work in minimizing damage from shark bites. He and colleagues have tested how hard it is to puncture swatches of skin from both the Atlantic and Pacific species. As is true for many other materials, punching through a swatch of hagfish skin held taut didn’t take as long as punching through skin patches allowed to go slack, he said in a January 5 presentation at the meeting.
Even a slight delay when a sharp point bears down on baggy skin might allow the hagfish to start dodging and sliming.
But Michelle Graham, who studies locomotion in flying snakes at Virginia Tech, wondered if puncture wounds would be a drawback to such a defense. A hagfish that avoids a deep stab could still lose blood from the skin puncture. That’s true, said Fudge, but the loss doesn’t seem to be great. Hagfish have unusually low blood pressure, and video of real attacks doesn’t show great gushes.
Hagfish blood also plays a part in another benefit of loose skin — an unusual ability to wiggle through cracks, Fudge reported in a second talk at the meeting. One of his students built an adjustable crevice and found that both Atlantic and Pacific hagfishes can contort themselves through slits only half as wide as their original body diameter. Videos show skin bulging out to the rear as the strong pinch of the opening forces blood backward.
The cavity just under a hagfish’s skin can hold roughly a third of its blood. Forcing that reservoir backward can help shrink the body diameter. Fortunately the inner body tapers at the end, Fudge said. So as blood builds up, “they don’t explode.”