Jugglers rely on repetitive rhythmic motions to keep multiple balls aloft. Similar forms of rhythmic movement are also common in the animal world, where effective locomotion is equally important to a swift-moving gazelle and to the cheetah that’s chasing it, say researchers.
“It turns out that the art of juggling provides an interesting window into many of the same questions that you try to answer when you study forms of locomotion, such as walking or running,” says Noah Cowan, an associate professor of mechanical engineering at Johns Hopkins University who supervised a recent study published in the Journal of Neurophysiology.
“In our study, we had participants stand still and use their hands in a rhythmic way. It’s very much like watching them move their feet as they run. But we used juggling as a model for rhythmic motor coordination because it’s a simpler system to study.”
Specifically, Cowan and his colleagues wanted to look at how the brain uses vision and touch to control this type of behavior. To do so, they set up a simple virtual juggling scenario. Participants held a real-world paddle connected to a computer and were told to bounce an on-screen ball repeatedly up to a target area between two lines, also drawn on the monitor.

Haptic feedback
In some trials, the participants had only their vision to guide them. In others, whenever the digital ball hit the onscreen paddle, the participants also received a brief impulse through their real-world paddle. This mimicked the sensation they would feel if a real ball had actually struck the paddle they were holding.
With the added touch sensation—called haptic feedback—the participants made about half as many errors, the researchers report.
“We have a pretty good understanding as to why,” says Cowan, who has been an amateur juggler since middle school. “One of the tricky challenges in juggling is catching a rhythm; that is, getting yourself entrained with the movement of the ball. It’s about timing your own action with the action in the environment. When you get the pulse of haptic feedback at the exact moment the ball hits the paddle, it gives you a precise sense of the timing for the juggling pattern that you’re trying to achieve.”
“The human nervous system gets feedback all of the time from our sense of vision. But the important thing about the sense of touch while juggling is that we get a precise timing cue that complements the continuous visual feedback. This timing cue is very important for us to get the rhythm of the juggling task,” explains M. Mert Ankarali, a mechanical engineering doctoral student who was lead author of the study.
A more surprising discovery was that adding the touch feedback didn’t seem to improve the participants’ ability to correct for any juggling errors they made while trying to hit the ball into the target zone. But it did enable them to make fewer errors overall.
“The haptic sensation is just a tiny bit of feedback that’s provided once per juggling cycle,” Cowan says. “Yet that tiny bit of information seems to be critical for people to improve their juggling performance. We think that’s because while vision provides excellent spatial and positioning information, the haptic information provides very important timing information.”
When humans and animals walk or run, Cowan adds, their sense of touch plays a key role. As the runner’s feet touch the ground, they alert the nervous system to adjust the movement of the legs to accommodate changes in the running surface. He also notes that the brain’s ability to instantly integrate information coming from both the eyes and the sense of touch is a critical part of successful running, juggling, and other repetitive movements.
The researchers say that future studies of the connection between sensory feedback, timing, and limb movements could help clinicians to better understand how some neurological diseases such as sensory ataxia might disrupt the brain’s timing of movements by arms and legs. Future findings may also assist engineers who are trying to make touch-sensitive artificial limbs and robots that move as skillfully as animals in the wild.
Source: Johns Hopkins University
Romantic love tends to light up the same reward areas of the brain that are activated by cocaine. But new research shows that selfless love—a deep and genuine wish for the happiness of others—actually turns off the brain’s reward centers.
“When we truly, selflessly wish for the well-being of others, we’re not getting that same rush of excitement that comes with, say, a tweet from our romantic love interest, because it’s not about us at all,” says Judson Brewer, adjunct professor of psychiatry at Yale University now at the University of Massachusetts.
As reported in the journal Brain and Behavior, the neurological boundaries between these two types of love become clear in fMRI scans of experienced meditators.
The reward centers of the brain that are strongly activated by a lover’s face (or a picture of cocaine) are almost completely turned off when a meditator is instructed to silently repeat sayings such as “May all beings be happy.”
Such mindfulness meditations are a staple of Buddhism and are now commonly practiced in Western stress reduction programs.
The tranquility of this selfless love for others—exemplified in such religious figures as Mother Teresa or the Dalai Lama—is diametrically opposed to the anxiety caused by a lovers’ quarrel or an extended separation. And it carries its own rewards.
“The intent of this practice is to specifically foster selfless love—just putting it out there and not looking for or wanting anything in return,” Brewer says.
“If you’re wondering where the reward is in being selfless, just reflect on how it feels when you see people out there helping others, or even when you hold the door for somebody the next time you are at Starbucks.”
Source: Yale University
A new sweat test for cystic fibrosis provides more detailed information than current tests and may lead to new treatments for the disease.
The test shows that less of a key protein is needed to prevent cystic fibrosis symptoms than previously thought.
“I was amazed it worked out as well as it did,” says Jeffrey Wine, a professor of psychology and biology who is the director of the Cystic Fibrosis Research Laboratory at Stanford University.
Wine and colleagues described the test in October in the journal PLOS ONE. Since then, they have used the test to measure protein levels in patients taking a cystic fibrosis drug. The latest findings also are published in PLOS ONE.
Cystic fibrosis is a recessive genetic disorder that disables a key protein, called the cystic fibrosis transmembrane conductance regulator, or CFTR, that is responsible for transferring fluid and minerals in and out of cells.
The effect on the 30,000 Americans diagnosed with the condition is debilitating. Patients suffer from chronic lung infections, male sterility, and a host of other symptoms. In the past, those with the disease struggled to survive past infancy.
Doctors usually treat cystic fibrosis by tackling symptoms as they appear. Very few drugs target the underlying problem: a patient’s CFTR is broken, damaged, or missing. Defects vary greatly: the entire protein might be missing, or it could have just a few flaws. Current tests, which measure the amount of chloride in sweat, can’t precisely identify how much functioning CFTR is present.

A lower target
The new test determines the ratio between two types of a person’s sweat by using dyes to form bubbles on the skin. That ratio accounts for differences in sweat volume—between a conditioned athlete and a sedentary person, for example—and reveals an individual’s CFTR levels.
The work shows that even healthy people have varying levels of CFTR and that only a small amount of CFTR is needed to remain disease-free.
“The biggest surprise for me was how small the response was. I don’t think anybody expected that,” Wine says.
Therefore, drug developers have a lower target: they only need to restore 10 percent of CFTR functionality to relieve symptoms. Also, patients can be treated with drugs that supplement their personal CFTR levels to relieve symptoms.
That is particularly important because people with the same genetic flaw can have different amounts of CFTR, Wine says.
For the study, researchers examined the CFTR levels in eight subjects with cystic fibrosis. Six of the patients were taking ivacaftor, a drug currently available to treat some types of cystic fibrosis. Ivacaftor boosted CFTR levels as expected, but it also increased CFTR levels in a type of cystic fibrosis it is not currently designed to treat, Wine says.
Next, the researchers plan to examine differences in CFTR in healthy individuals and hope to eventually determine the precise amount of CFTR needed to alleviate symptoms.
Source: Stanford University
In the tech world, coolness takes more than just good looks. Technology users must consider a product attractive, original, and edgy before they label it cool, according to researchers.
That coolness can turn tepid if the product appears to be losing its edginess, they add.
“Everyone says they know what ‘cool’ is, but we wanted to get at the core of what ‘cool’ actually is, because there’s a different connotation to what cool actually means in the tech world,” says S. Shyam Sundar, professor of communications at Penn State and co-director of the Media Effects Research Laboratory.
The researchers found that a cool technology trend may move like a wave. First, people in groups—subcultures—outside the mainstream begin to use a device. The people in the subculture are typically those who stand out from the mainstream and stay a step ahead of the crowd, according to the researchers.
Once a device gains coolness in the subculture, the product becomes adopted by the mainstream.
However, any change to the product’s subculture appeal, attractiveness, or originality will affect the product’s overall coolness, according to the researchers, who report their findings in the current issue of the International Journal of Human-Computer Studies. If a product becomes more widely adopted by the mainstream, for example, it becomes less cool.

The big challenge
“It appears to be a process,” Sundar says. “Once the product loses its subculture appeal, for example, it becomes less cool, and therein lies the challenge.”
The challenge is that most companies want their products to become cool and increase sales, Sundar says. However, after sales increase, the products become less cool and sales suffer. To succeed, companies must change with the times to stay cool.
“It underscores the need to develop an innovation culture in a company,” Sundar says. “For a company to make products that remain cool, they must continually innovate.”
However, products that have fallen out of favor can have coolness restored if the subculture adopts the technology again. For example, record players, which lost their cool status to digital music files, are beginning to regain popularity within the subculture, despite their limited usefulness. As a result, survey participants considered record players cool.
The researchers asked 315 college students to give their opinions on 14 different products based on the elements of coolness taken from current literature. Previously, researchers believed that coolness was largely related to a device’s design and originality.
“Historically, there’s a tendency to think that cool is some new technology that is thought of as attractive and novel,” says Sundar. “The idea is you create something innovative and there is hype—just as when Apple is releasing a new iPhone or iPad—and the consumers that are standing in line to buy the product say they are buying it because it’s cool.”

It’s not about utility
A follow-up study with 835 participants from the US and South Korea narrowed the list to four elements of coolness—subculture appeal, attractiveness, usefulness, and originality—that arose from the first study.
In a third study of 317 participants, the researchers found that usefulness was integrated with the other factors and did not stand on its own as a distinguishing trait of coolness.
“The utility of a product, or its usefulness, was not as much of a part of coolness as we initially thought,” says Sundar.
Such products as USB drives and GPS units, for example, were not considered cool even though they were rated high on utility. On the other hand, game consoles like Wii and Xbox Kinect were rated high on coolness, but low on utility. However, many products ranking high on coolness—Macbook Air, Prezi software, Instagram, and Pandora—were also seen as quite useful, but utility was not a determining factor.
“The bottom line is that a tech product will be considered cool if it is novel, attractive, and capable of building a subculture around it,” says Sundar.
Sundar worked with Daniel J. Tamul, assistant professor of communications at Indiana University-Purdue University, Fort Wayne, and Mu Wu, a graduate student at Penn State.
Source: Penn State
For the first time, neuroscientists have systematically identified the white matter “scaffold” of the human brain, the critical communications network that supports brain function.
Their work, published today in the journal Frontiers in Human Neuroscience, has major implications for understanding brain injury and disease. By detailing the connections that have the greatest influence over all other connections, the researchers offer not only a first map of core white matter pathways, but also show which connections may be most vulnerable to damage.
“We coined the term white matter ‘scaffold’ because this network defines the information architecture which supports brain function,” says senior author John Darrell Van Horn of the University of Southern California Institute for Neuroimaging and Informatics and the Laboratory of Neuro Imaging at USC.
“While all connections in the brain have their importance, there are particular links which are the major players,” Van Horn says.

White matter injury
Using MRI data from a large sample of 110 individuals, lead author Andrei Irimia, also of the Institute for Neuroimaging and Informatics, and Van Horn systematically simulated the effects of damaging each white matter pathway.
They found that the most important areas of white and gray matter don’t always overlap. Gray matter is the outermost portion of the brain containing the neurons where information is processed and stored. Past research has identified the areas of gray matter that are disproportionately affected by injury.
But the current study shows that the most vulnerable white matter pathways—the core “scaffolding”—are not necessarily just the connections among the most vulnerable areas of gray matter, helping explain why seemingly small brain injuries may have such devastating effects.
“Sometimes people experience a head injury which seems severe but from which they are able to recover. On the other hand, some people have a seemingly small injury which has very serious clinical effects,” says Van Horn, associate professor of neurology at the Keck School of Medicine of USC.
“This research helps us to better address clinical challenges such as traumatic brain injury and to determine what makes certain white matter pathways particularly vulnerable and important.”

Compare to social networks
The researchers compare their brain imaging analysis to models used for understanding social networks. To get a sense of how the brain works, Irimia and Van Horn did not focus only on the most prominent gray matter nodes—which are akin to the individuals within a social network. Nor did they merely look at how connected those nodes are.
Rather, they also examined the strength of these white matter connections, i.e. which connections seemed to be particularly sensitive or to cause the greatest repercussions across the network when removed. Those connections that created the greatest changes form the network “scaffold.”
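The lesioning procedure described above can be sketched in miniature. The following is not the study's code or data; it is a toy, pure-Python illustration of the general idea: remove each connection from a small weighted network, measure how much a global communication metric (here, "global efficiency," the mean inverse shortest-path length between node pairs) drops, and rank the connections by that drop. The graph, weights, and node names are all invented for the example.

```python
import heapq

def shortest_paths(graph, src):
    """Dijkstra's algorithm from src over a weighted, undirected adjacency dict."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def global_efficiency(graph):
    """Mean inverse shortest-path length over all ordered node pairs
    (unreachable pairs contribute zero)."""
    nodes = list(graph)
    total, pairs = 0.0, 0
    for u in nodes:
        dist = shortest_paths(graph, u)
        for v in nodes:
            if v != u:
                total += 1.0 / dist[v] if v in dist else 0.0
                pairs += 1
    return total / pairs if pairs else 0.0

def scaffold_edges(graph, top_k=3):
    """Rank edges by the drop in global efficiency when each is 'lesioned'."""
    base = global_efficiency(graph)
    impact, seen = {}, set()
    for u in graph:
        for v in list(graph[u]):          # snapshot: we mutate inside the loop
            edge = tuple(sorted((u, v)))
            if edge in seen:
                continue
            seen.add(edge)
            w = graph[u][v]
            del graph[u][v]; del graph[v][u]      # lesion the connection
            impact[edge] = base - global_efficiency(graph)
            graph[u][v] = w; graph[v][u] = w      # restore it
    return sorted(impact.items(), key=lambda kv: -kv[1])[:top_k]

# Toy network: two triangular clusters joined by a single "bridge" connection C-D.
g = {
    "A": {"B": 1, "C": 1}, "B": {"A": 1, "C": 1},
    "C": {"A": 1, "B": 1, "D": 1},
    "D": {"C": 1, "E": 1, "F": 1},
    "E": {"D": 1, "F": 1}, "F": {"D": 1, "E": 1},
}
ranking = scaffold_edges(g)
# The bridge C-D should top the ranking: removing it disconnects the clusters.
print(ranking[0][0])
```

In this toy graph, the bridge edge dominates the ranking even though each cluster's internal edges are individually redundant, mirroring the paper's point that the most consequential connections need not link the most prominent nodes.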
“Just as when you remove the internet connection to your computer you won’t get your email anymore, there are white matter pathways which result in large scale communication failures in the brain when damaged,” Van Horn says.
When white matter pathways are damaged, brain areas served by those connections may wither or have their functions taken over by other brain regions, the researchers explain.
Irimia and Van Horn’s research on core white matter connections is part of a worldwide scientific effort to map the 100 billion neurons and 1,000 trillion connections in the living human brain, led by the Human Connectome Project and the Laboratory of Neuro Imaging at USC.
Irimia notes that, “these new findings on the brain’s network scaffold help inform clinicians about the neurological impacts of brain diseases such as multiple sclerosis, Alzheimer’s disease, as well as major brain injury. Sports organizations, the military, and the US government have considerable interest in understanding brain disorders, and our work contributes to that of other scientists in this exciting era for brain research.”
The NIH supported the research.
Scientists say mysterious streaks that appear and disappear on the surface of Mars are probably related to water, but a definitive answer may be hard to prove.
The streaks are called recurring slope lineae (RSL) because of their shape, annual reappearance, and occurrence generally on steep slopes such as crater walls.
Researchers looked at 13 confirmed RSL sites using Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) images. They didn’t find any spectral signature tied to water or salts. But they did find distinct and consistent spectral signatures of ferric and ferrous minerals at most of the sites. The minerals were more abundant or featured distinct grain sizes in RSL-related materials as compared to non-RSL slopes.
“We still don’t have a smoking gun for the existence of water in RSL, although we’re not sure how this process would take place without water,” says Lujendra Ojha, a doctoral candidate at the Georgia Institute of Technology (Georgia Tech). “Just like the RSL themselves, the strength of the spectral signatures varies according to the seasons. The signatures are stronger when it’s warmer and less significant when it’s colder.”
The lack of water-related absorptions rules out hydrated salts as a spectrally dominant phase on RSL slopes, the researchers say. For example, ferric sulfates have been found elsewhere on Mars and are a potent antifreeze. If such salts are present in RSL, then they must be dehydrated considerably under exposure to the planet’s conditions by the time CRISM observes them in the mid-afternoon.
The researchers looked at every image gathered by the High Resolution Imaging Science Experiment (HiRISE) from March to October of 2011. They hunted for areas that were ideal locations for RSL formation: areas near the southern mid-latitudes on rocky cliffs. They found 200, but barely any of them had RSL.
“Only 13 of the 200 locations had confirmed RSL,” Ojha says. “There were significant differences in abundance and size between sites, indicating that additional unknown factors such as availability of water or salts may play a crucial role in RSL formation.”
Comparing their new observations with images taken in previous years, the team also found that RSL are much more abundant some years than others. Water on Mars today seems elusive at best—there one year, gone the next.
“NASA likes to ‘follow the water’ in exploring the red planet, so we’d like to know in advance when and where it will appear,” says Assistant Professor James Wray. “RSL have rekindled our hope of accessing modern water, but forecasting wet conditions remains a challenge.”
Ojha and Wray are also among several co-authors on another RSL-related paper published this month in Nature Geoscience. That study, led by the University of Arizona’s Alfred McEwen, found some RSL in Valles Marineris near the Martian equator.
Source: Georgia Tech
A small study of hospital admissions shows that people, particularly young adults, who did not get this season’s flu vaccine needed the most intensive treatment.
In an analysis of the first 55 patients treated for flu at Duke University Hospital from November 2013 through January 8, 2014, researchers found that only two of the 22 patients who required intensive care had been vaccinated prior to getting sick.
“Our observations are important because they reinforce a growing body of evidence that the influenza vaccine provides protection from severe illness requiring hospitalizations,” says lead author Cameron Wolfe, assistant professor of medicine at Duke.
“The public health implications are important, because not only could a potentially deadly infection be avoided with a $30 shot, but costly hospitalizations could also be reduced.”
The study is available online in the American Journal of Respiratory and Critical Care Medicine.

ICU admissions
Wolfe says this year’s flu season was marked by hospitalizations of previously healthy young people, with a median age of 28.5 years. Among those who were hospitalized at Duke, 48 of the 55 were infected with the H1N1 virus that caused the 2009 pandemic. That outbreak also hit young adults particularly hard.
“We observed a high percentage of hospitalized patients for influenza requiring ICU level care, which appears higher than observed in our hospital during the 2009 pandemic flu season,” says co-author John W. Hollingsworth, associate professor of medicine. “It remains unclear whether the high rate of ICU admissions represents a diagnosis bias or whether the severity of illness being caused by the current H1N1 virus is higher.”
Of the 33 patients admitted to regular wards rather than the ICU, only 11 had been vaccinated; most of those were immune compromised, chronically ill, or were on a medication that weakened the vaccine’s protection.

False negative rapid tests
The study also echoes other studies that have highlighted problems with a rapid test for influenza. Wolfe says 22 of the patients treated had been given a rapid influenza test that came up negative for flu, but they were actually positive when tested by other methods. As a result, they had not received anti-viral medications that might have eased flu symptoms had they been taken early.
“Together, our observations during this influenza season support a high prevalence of the H1N1 virus affecting young adults and requiring ICU care, high false negative rates of rapid flu tests, and delay in starting antiviral treatment,” Wolfe says.
“Added to the finding of very low vaccination rates among both hospitalized and ICU admissions, our observations support previous findings that vaccination reduces the severity of disease, and vaccinations should be encouraged as recommended by the US Centers for Disease Control and Prevention.”
Source: Duke University
Farmers should work to reduce herbicide drift when spraying fields, researchers urge, because the chemicals can have a variety of unintended consequences for neighboring fields and farms.
The researchers found a range of effects—positive, neutral, and negative—when they sprayed the herbicide dicamba on fields that are no longer used for cultivation and on field edges, according to J. Franklin Egan, research ecologist, USDA-Agricultural Research Service. He says the effects should be similar for a related compound, 2,4-D.
“The general consensus is that the effects of the increased use of these herbicides are going to be variable,” says Egan. “But, given that there is really so much uncertainty, we think that taking precautions to prevent herbicide drift is the right way to go.”
Farmers are expected to use dicamba and 2,4-D on their fields more often because biotechnology companies are introducing crops genetically modified to resist those chemicals. From past experience, 2,4-D and dicamba are the herbicides most frequently involved in herbicide-drift accidents, according to the researchers.
Because the herbicides typically target broadleaf plants, such as wildflowers, they are not as harmful to grasses, Egan says. In the study, the researchers found grasses eventually dominated the field edge test site that was once a mix of broadleaf plants and grass. The old field site showed little response to the herbicide treatments.

Herbicides and pests
Herbicide drift was also associated with the declines of three species of herbivores, including pea aphids, spotted alfalfa aphids, and potato leaf hoppers, and an increase in a pest called clover root curculio, Egan says. The researchers found more crickets, which are considered beneficial because they eat weed seeds, in the field edge site.
The researchers, who report their findings in the current issue of Agriculture, Ecosystems and Environment, did not see a drop in the number of pollinators, such as bees, in the fields. However, the relatively small size of the research fields limited the researchers’ ability to measure the effect on pollinators, according to Egan.
“That may be because pollinators are very mobile and the spatial scale of our experiment may not be big enough to show any effects,” Egan says.

Reducing drift
Farmers can cut down on herbicide drift by taking a few precautions, according to Egan. They can spray low-volatility herbicide blends, which are less likely to turn to vapors, and use a nozzle design on the sprayer that produces larger droplets that do not easily drift in the wind.
Egan also recommends that farmers follow the application restrictions printed on herbicide labels and try to spray on less windy days when possible.
The tests were conducted on two farms in Pennsylvania. One field edge site was located near a forest and alfalfa field. The old field was an acre plot near Penn State’s Russell E. Larson Agricultural Research farm.
Additional scientists at Penn State and Sarah Goslee, a US Department of Agriculture ecologist, contributed to the work, which received support from the Environmental Protection Agency.
Source: Penn State
As climate change unfolds over the next century, plants and animals will need to adapt or shift locations to follow their ideal climate.
A new study provides an innovative global map of where species are likely to succeed or fail in keeping up with a changing climate. The findings appear in the journal Nature.
Researchers analyzed 50 years of sea surface and land temperature data (1960 to 2009). They also projected temperature changes under two future scenarios, one that assumes greenhouse gas emissions are stabilized by 2100 and a second that assumes these emissions continue to increase.

Climate sink
The global study, which examines scenarios both on land and in the ocean, demonstrates that climate migration is far more complex than a simple shift toward the poles.
“As species move to track their ideal temperature conditions, they will sometimes run into what we call a ‘climate sink,’ where the preferred climate simply disappears leaving species nowhere to go because they are up against a coastline or other barrier,” explains Carrie Kappel, an associate of the University of California, Santa Barbara National Center for Ecological Analysis and Synthesis (NCEAS) and one of the paper’s authors.
“There are a number of those sinks around the world where movement is blocked by a coastline, like in the northern Adriatic Sea or the northern Gulf of Mexico, and there’s no way out because it’s warmer everywhere behind.”

‘Close to the margin’
Australia offers a terrestrial example. There, species already experiencing warmer temperatures have started to seek relief by moving to higher elevations, or farther south. However, some species of animals and plants cannot move large distances, and some cannot move at all.
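To make the "climate sink" idea concrete, here is a toy sketch that is not from the study: the temperatures, warming amount, and matching tolerance are all invented. It models a one-dimensional transect of grid cells running toward a coastline. A species tracking its preferred temperature needs some cell whose future temperature matches its current one; if no such cell exists, its preferred climate simply disappears, a sink.

```python
# Hypothetical 1-D transect of grid cells ending at a coastline (cell 5).
# All numbers are illustrative assumptions, not data from the study.
current = [22.0, 21.0, 20.0, 19.0, 18.0, 17.5]  # current mean temps (deg C)
warming = 2.0                                   # assumed uniform warming (deg C)
tolerance = 0.5                                 # how closely climate must match

future = [t + warming for t in current]

def refuge_for(cell):
    """Index of some cell whose future temperature matches `cell`'s current
    temperature, or None if the preferred climate disappears (a climate sink)."""
    target = current[cell]
    for j, t in enumerate(future):
        if abs(t - target) <= tolerance:
            return j
    return None

for i in range(len(current)):
    dest = refuge_for(i)
    status = f"move to cell {dest}" if dest is not None else "climate sink"
    print(f"cell {i} ({current[i]:.1f} C): {status}")
```

Under these made-up numbers, interior cells can shift their ranges toward the cooler coastal end of the transect, but the coolest cells nearest the coast have nowhere left to go: no future cell is cold enough, which is exactly the coastline-blocked sink Kappel describes.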
“Species migration can have important consequences for local biodiversity,” says corresponding author Elvira Poloczanska, a research scientist with the Climate Adaptation Flagship of Australia’s national science agency, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Brisbane.
“For example, the dry, flat continental interior of Australia is a hot, arid region where species already exist close to the margin of their thermal tolerances. Some species driven south from monsoonal northern Australia in the hope of cooler habitats may perish in one of the harshest places on Earth.”

Assisted migration?
The maps generated from the study data not only show areas where plants and animals may struggle to find new homes in a changing climate but also provide crucial information for targeting conservation efforts—information that could help conservation planners think more strategically about how best to manage biodiversity for future sustainability.
“One of the greatest challenges these days is how to help species survive in the face of climate change,” says co-author Ben Halpern, a professor at UC Santa Barbara’s Bren School of Environmental Science and Management.
“The maps we produced offer a key tool for helping guide these decisions. For example, where species are likely to face climate traps, we will need to explore less traditional actions, such as assisted migration, where people help move species past barriers into their preferred environment.”
“From other work, we know that many species have shifted where they live in ways that match the pattern of temperature change over the last 60 years,” Kappel notes. “This gives us confidence that we can base conservation planning on what we’ve learned about what’s already happening.”
According to Halpern, it’s not a question of whether climate change is happening, but what we can do about it. “The writing is on the wall: species have already started moving in response to climate change,” he says.
“We can either sit back and watch as species get squeezed out of existence and food webs reshuffle or we can try to be proactive in designing conservation strategies. Our research and maps offer a window into what the future of biodiversity will look like, and we have a chance to improve the view from that window.”
Source: UC Santa Barbara
Brown and black bears hibernate during winter to conserve energy and stay warm. But the same isn’t true for polar bears.
Only pregnant polar bears den up for the colder months. So how do the rest survive the extreme Arctic winters?
In a new study, researchers show that genes controlling nitric oxide production in the polar bear genome differ from the corresponding genes in brown and black bears.
“With all the changes in the global climate, it becomes more relevant to look into what sorts of adaptations exist in organisms that live in these high-latitude environments,” says lead researcher Charlotte Lindqvist, assistant professor of biological sciences at the University at Buffalo.
“This study provides one little window into some of these adaptations,” she says. “Gene functions that had to do with nitric oxide production seemed to be more enriched in the polar bear than in the brown bears and black bears. There were more unique variants in polar bear genes than in those of the other species.”

Heat instead of energy
Researchers say the genetic adaptations are important because of the crucial role that nitric oxide plays in energy metabolism.
Typically, cells transform nutrients into energy. However, there is a phenomenon called adaptive or non-shivering thermogenesis, where the cells will produce heat instead of energy in response to a particular diet or environmental conditions.
Levels of nitric oxide production may be a key switch triggering how much heat or energy is produced as cells metabolize nutrients, or how much of the nutrients is stored as fat, Lindqvist says.
“At high levels, nitric oxide may inhibit energy production,” says Andreanna Welch, first author and a former postdoctoral researcher with Lindqvist. “At more moderate levels, however, it may be more of a tinkering, where nitric oxide is involved in determining whether—and when—energy or heat is produced.”
In the new study, published in the journal Genome Biology and Evolution, scientists looked at the mitochondrial and nuclear genomes of 23 polar bears, three brown bears, and a black bear.
The research is part of a larger program devoted to understanding how the polar bear has adapted to the harsh Arctic environment. In 2012, Lindqvist and colleagues reported sequencing the genomes of multiple brown bears, black bears, and polar bears.
In an earlier paper in the Proceedings of the National Academy of Sciences, comparative studies between the DNA of the three species uncovered some distinctive polar bear traits, such as genetic differences that may affect the function of proteins involved in the metabolism of fat—a process that’s very important for insulation.
Co-authors include scientists from Penn State, the US Geological Survey Alaska Science Center, Durham University, and the University of California, Santa Cruz. The University at Buffalo and the National Fish and Wildlife Foundation supported the study.
Source: University at Buffalo
Why do some animals use noxious scents to defend themselves against predators, while others rely on living in social groups? That is the question biologists sought to answer through a comprehensive analysis of predator-prey interactions among carnivorous mammals and birds of prey.
“The idea is that we’re trying to explain why certain antipredator traits evolved in some species but not others,” says biologist Theodore Stankowich of California State University, Long Beach.
The findings appear in the online edition of the journal Evolution.
Stankowich notes that this study not only explains why skunks are stinky and why banded mongooses live in groups but also breaks new ground in the methodology of estimating predation risks.
Stankowich, Tim Caro of the University of California, Davis, and Paul Haverkamp, a geographer who recently completed his PhD at UC Davis, collected data on 181 species of carnivores, a group in which many species are small and under threat from other animals.

Day vs. night
They ran a comparison of every possible predator-prey combination, correcting for a variety of natural history factors, to create a potential risk value that estimates the strength of natural selection due to predation from birds and other mammals.
They found that noxious spraying was favored by animals that were nocturnal and mostly at risk from other animals, while sociality was favored by animals that were active during the day and potentially vulnerable to birds of prey.
“Spraying is a good close-range defense in case you get surprised by a predator, so at night when you can’t detect things far away, you might be more likely to stumble upon a predator,” Stankowich says.
Conversely, small carnivores like mongooses and meerkats usually are active during the day, which puts them at risk from birds of prey. Living in a large social group means “more eyes on the sky” in daytime, when threats can be detected further away.
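The “more eyes on the sky” benefit has a simple probabilistic intuition. As a toy sketch with made-up numbers (not part of the study’s risk model): if each group member independently spots an approaching raptor with probability p, the group as a whole misses it only when every member does.

```python
# Toy "many eyes" model: probability that at least one of n group
# members detects an approaching predator, assuming each individual
# detects it independently with probability p. (Illustrative numbers,
# not values from the study.)

def group_detection_prob(p: float, n: int) -> float:
    """P(at least one of n independent observers detects the predator)."""
    return 1 - (1 - p) ** n

# A solitary forager with a 30% chance of spotting a hawk in time:
solo = group_detection_prob(0.3, 1)
# A mongoose group of ten, same individual vigilance:
group = group_detection_prob(0.3, 10)
print(f"solo: {solo:.2f}, group of 10: {group:.2f}")  # solo: 0.30, group of 10: 0.97
```

Under this independence assumption, a group of ten is nearly certain to detect a threat that any one member would miss most of the time.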
The social animals also use other defenses such as calling out a warning to other members of their group or even mobbing together to bite and scratch an intruder to drive it away.
The project was a major information technology undertaking involving plotting the geographic range overlap of hundreds of mammal and bird species, but will have long-term benefits for ongoing studies.
The researchers plan to make their database, nicknamed the “Geography of Fear,” available to other researchers.
Source: UC Davis
Genetic adaptations found in people living at high elevations on the Tibetan plateau probably originated around 30,000 years ago in peoples related to contemporary Sherpa.
These genes were passed on to more recent migrants from lower elevations via population mixing, and then amplified by natural selection in the modern Tibetan gene pool, a new study shows.
Researchers say the transfer of beneficial mutations between human populations and selective enrichment of these genes in descendent generations represents a novel mechanism for adaptation to new environments.
“The Tibetan genome appears to arise from a mixture of two ancestral gene pools,” says Anna Di Rienzo, professor of human genetics at the University of Chicago and corresponding author of the study.
“One migrated early to high altitude and adapted to this environment. The other, which migrated more recently from low altitudes, acquired the advantageous alleles from the resident high-altitude population by interbreeding and forming what we refer to today as Tibetans.”
High elevations are challenging for humans because of low oxygen levels, but Tibetans spend their lives above 13,000 feet (3,962 meters) with little issue. They fare better than short-term visitors from low altitude thanks to physiological traits such as relatively low hemoglobin concentrations at altitude.
Unique to Tibetans are variants of the EGLN1 and EPAS1 genes, key genes in the oxygen homeostasis system at all altitudes. These variants were hypothesized to have evolved around 3,000 years ago, a date which conflicts with much older archaeological evidence of human settlement in Tibet.

Evolution as tinkerer
To shed light on the evolutionary origins of these gene variants, Di Rienzo and colleagues obtained genome-wide data from 69 Nepalese Sherpa, an ethnic group related to Tibetans. Using multiple statistical methods, they analyzed these genomes together with those of 96 unrelated individuals from high-altitude regions of the Tibetan plateau, worldwide genomes from HapMap3 and the Human Genome Diversity Panel, and data from Indian, Central Asian, and two Siberian populations.
The researchers found that, on a genomic level, modern Tibetans appear to descend from populations related to modern Sherpa and Han Chinese. Tibetans carry a roughly even mixture of two ancestral genomes: one a high-altitude component shared with Sherpa and the other a low-altitude component shared with lowlander East Asians.
The low-altitude component is found at low to nonexistent frequencies in modern Sherpa, and the high-altitude component is uncommon in lowlanders. This strongly suggests that the ancestor populations of Tibetans interbred and exchanged genes, a process known as genetic admixture.
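The roughly even two-way mixture can be illustrated with a back-of-the-envelope allele-frequency calculation. This is a hypothetical sketch: the frequencies below are invented, and real admixture inference uses genome-wide statistical methods rather than this one-locus formula.

```python
# Hypothetical one-locus admixture sketch (invented frequencies; the
# study used genome-wide statistical methods, not this formula).
# If p_t = a * p_high + (1 - a) * p_low, the high-altitude ancestry
# fraction a can be recovered as:

def ancestry_fraction(p_t: float, p_high: float, p_low: float) -> float:
    return (p_t - p_low) / (p_high - p_low)

# An allele at 80% frequency in the high-altitude pool and 10% in the
# lowland pool, observed at 45% in Tibetans, implies a roughly even mix:
a = ancestry_fraction(p_t=0.45, p_high=0.80, p_low=0.10)
print(f"high-altitude ancestry fraction: {a:.2f}")  # 0.50
```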
Tracing the history of these ancestor groups through genome analysis, the team identified a population split between Sherpa and lowland East Asians around 20,000 to 40,000 years ago, a range consistent with proposed archaeological, mitochondrial DNA, and Y chromosome evidence for an initial colonization of the Tibetan plateau around 30,000 years ago.
“This is a good example of evolution as a tinkerer,” says Cynthia Beall, professor of anthropology at Case Western Reserve University and co-author on the study. “We see other examples of admixture. Outside of Africa, most of us have Neanderthal genes—about 2 to 5 percent of our genome—and people today have some immune system genes from another ancient group called the Denisovans.”

A new tool
Researchers also found that Tibetans shared specific high-altitude component traits with Sherpa, such as the EGLN1 and EPAS1 gene variants, despite the significant amount of genome contribution from lowland East Asians.
Further analysis revealed these adaptations were disproportionately increased in frequency in Tibetans after admixture, strong evidence of natural selection at play. This stands in contrast to existing models that propose selection works through new advantageous mutations or on existing variants that become beneficial in a new environment.
“The chromosomal locations that are so important for Tibetans to live at high elevations are locations that have an excess of genetic ancestry from their high-altitude ancestral gene pool,” Di Rienzo says. “This is a new tool we can use to identify advantageous alleles in Tibetans and other populations in the world that experienced this type of admixture and selection.”
In addition to the EPAS1 and EGLN1 genes, the researchers discovered two other genes with a strong proportion of high-altitude genetic ancestry, HYOU1 and HMBS. The former is known to be up-regulated in response to low oxygen levels and the latter plays an important role in the production of heme, a major component of hemoglobin.
“There is a strong possibility that these genes are adaptations to high altitude,” Di Rienzo says. “They represent an example of how the ancestry-based approach used in this study will help make new discoveries about genetic adaptations.”
Researchers from Oxford University Clinical Research Unit at Patan Hospital in Nepal and the Mountain Medicine Society of Nepal contributed to the study, which the National Science Foundation supported.
Source: University of Chicago
The post Genetic mix lets Tibetans thrive at high altitudes appeared first on Futurity.
Scientists have used observations of the Big Bang and the curvature of space-time to accurately measure the mass of sub-atomic particles called neutrinos.
By doing so, they have solved a major problem with the current standard model of cosmology.
The recent Planck spacecraft observations of the cosmic microwave background (CMB)—the fading glow of the Big Bang—highlighted a discrepancy between these cosmological results and the predictions from other types of observations.
The CMB is the oldest light in the universe, and its study has allowed scientists to accurately measure cosmological parameters, such as the amount of matter in the universe and its age. But an inconsistency arises when large-scale structures of the universe, such as the distribution of galaxies, are observed.
“We observe fewer galaxy clusters than we would expect from the Planck results and there is a weaker signal from gravitational lensing of galaxies than the CMB would suggest,” says Adam Moss from the University of Nottingham’s School of Physics and Astronomy.
“A possible way of resolving this discrepancy is for neutrinos to have mass. The effect of these massive neutrinos would be to suppress the growth of dense structures that lead to the formation of clusters of galaxies.”

Sub-atomic world
Neutrinos interact very weakly with matter and so are extremely hard to study.
They were originally thought to be massless but particle physics experiments have shown that neutrinos do indeed have mass and that there are several types, known as flavors by particle physicists.
The sum of the masses of these different types has previously been suggested to lie above 0.06 eV (much less than a billionth of the mass of a proton).
Moss and Professor Richard Battye from the University of Manchester have combined the data from Planck with gravitational lensing observations, in which images of galaxies are warped by the curvature of space-time.
They conclude that the current discrepancies can be resolved if massive neutrinos are included in the standard cosmological model. They estimate that the sum of masses of neutrinos is 0.320 +/- 0.081 eV (assuming active neutrinos with three flavors).
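Simple arithmetic on the quoted numbers shows how far the new estimate sits above the previously suggested lower bound:

```python
# Comparing the cosmological estimate with the previously suggested
# lower bound, using only the numbers quoted in the article.
mass_sum = 0.320     # eV, estimated sum of the three neutrino masses
sigma = 0.081        # eV, quoted uncertainty on that sum
lower_bound = 0.06   # eV, previously suggested minimum for the sum

tension = (mass_sum - lower_bound) / sigma
print(f"{tension:.1f} sigma above the lower bound")  # 3.2 sigma above the lower bound
```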
“If this result is borne out by further analysis, it not only adds significantly to our understanding of the sub-atomic world studied by particle physicists, but it would also be an important extension to the standard model of cosmology, which has been developed over the last decade,” says Battye.
The paper is published in Physical Review Letters.
Source: University of Nottingham
A simple test can quickly detect if a person is infected with a parasite that causes the diarrheal disease cryptosporidiosis.
Lines on paper strips show whether samples taken from a patient’s stool contain DNA from the parasite.
The research is detailed online in a new paper in the journal Analytical Chemistry.
“Diarrheal illness is a leading cause of global mortality and morbidity,” says Rebecca Richards-Kortum, a bioengineer at Rice University and director of the Rice 360°: Institute for Global Health Technologies. “Parasites such as cryptosporidium are more common causes of prolonged diarrhea. Current laboratory tests are not sensitive, are time-consuming, and require days before results are available.
“A rapid, affordable, accurate point-of-care test could greatly enhance care for the underserved populations who are most affected by parasites that cause diarrheal illness.”
A. Clinton White, director of the Infectious Disease Division at the University of Texas Medical Branch (UTMB) at Galveston, asked Richards-Kortum to help develop a diagnostic test for the parasite.
“I’ve been working with cryptosporidium for more than 20 years, so I wanted to combine her expertise in diagnosis with our clinical interest,” he says. “Recent studies in Africa and South Asia by people using sophisticated techniques show this organism is a very common, under-appreciated cause of diarrheal disease in under-resourced countries.”

Drinking water
The parasite is common in the United States, he says, but less than 5 percent of an estimated 750,000 cases are diagnosed every year. In 1993, an outbreak of cryptosporidium in the water supply sickened 400,000 people in Greater Milwaukee, he says.
Lead author Zachary Crannell, a graduate student based at Rice’s BioScience Research Collaborative, says the disease, usually transmitted through drinking water, accounts for 20 percent of childhood diarrheal deaths in developing countries.
Cryptosporidiosis is also a threat to people with HIV whose immune system is less able to fight it off, he adds.
“In the most recent global burden-of-disease study, diarrheal disease accounts for the loss of more disability-adjusted life years than any other infectious disease, and cryptosporidiosis is the second leading cause of diarrheal illness,” Crannell says. “Anybody, if it’s not treated, can get dehydrated to the point of death.
“There’s a lot of new evidence that even with asymptomatic cases or cases for which the symptoms have been resolved, there are long-term growth deficits,” he says.

Room or body temperature
Current specialized tests that depend on microscopic or fluorescent analysis of stool samples or polymerase chain reactions (PCR) that amplify pathogen DNA are considered impractical for deployment in developing countries because of the need for expensive equipment and/or the electricity to operate it.
The new test depends on recent developments in a recombinase polymerase amplification (RPA) technique that gives similar “gold standard” results to PCR but operates between room and body temperatures.
In Rice’s experiments, samples were prepared with a commercial chemical kit that releases all the DNA and RNA in the small amount of stool tested. The purified nucleic acids are then combined with RPA primers and enzymes tuned to amplify the pathogen of interest, Crannell says.
“If the pathogen DNA is present, these primers will amplify it billions of times to a level that we can easily detect,” he explains. The sample is then flowed over the detection strip, which provides a positive or negative result.
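The scale of that amplification is easy to sketch. Under an idealized model (not the actual RPA kinetics) in which each round of amplification doubles the number of target copies, going from a single copy to over a billion takes only about 30 doublings:

```python
import math

# Idealized exponential amplification: each round doubles the copies
# of the target DNA. (A sketch only; real RPA kinetics are messier.)
target_copies = 1_000_000_000           # "billions of times"
doublings = math.ceil(math.log2(target_copies))
print(doublings)                        # 30
assert 2 ** doublings >= target_copies  # 2**30 = 1,073,741,824
```

This is why exponential amplification schemes can lift even a single genome copy to easily detectable levels in a practical number of reaction rounds.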
The RPA enzymes are stable in their dried form and can be safely stored at the point of care without refrigeration for up to a year, he says.

Requires little equipment
While current tests might catch the disease in samples with thousands of the pathogens, the new technique detects the presence of very few—even one—parasite in a sample. In their experiments, the researchers reported the presence or absence of the disease was correctly identified in 27 of 28 infected and control-group mice and all 21 humans whose stool was tested.
Crannell says the method requires little equipment, because the enzymes that amplify DNA work best at or near body temperature. “You don’t need a thermal cycler (used for PCR analysis); you don’t need external heating equipment. You can hold the sample under your armpit, or put it in your pocket,” he says.
The research team’s goal is to produce a low-cost diagnostic that may also test for the presence of several other parasites, including giardia, the cause of another intestinal disease. The researchers are working to package the components for use in low-resource settings, Crannell says.
The National Institute of Allergy and Infectious Diseases of the National Institutes of Health and the National Science Foundation Graduate Research Fellowship Program supported the research.
Source: Rice University
A tricked-out version of an off-the-shelf digital camera is able to identify, photograph, and even analyze patches of soil or rocks from afar and in extreme close-up, a feat that NASA’s latest Mars rover Curiosity has yet to accomplish.
Researchers figured out how to take advantage of different lens adapters that can be mounted in front of a single camera to enable it to take images ranging from a macroscopic scale—think landscape—all the way down to a microscopic scale—think cells and bacteria—thus spanning at least six orders of magnitude.
The new prototype, called the Astrobiological Imager, is described in the journal Astrobiology.
“For each scale, there is of course one or even several imagers that are superior to our instrument for that particular scale,” says Wolfgang Fink, associate professor in the department of electrical and computer engineering at the University of Arizona. “However, there is no instrument out there that can go across several orders of magnitude.
“Think of the world’s best decathlete as opposed to the world record holders in each individual discipline. That’s the best analogy. Our camera is the best decathlete.”
For example, HiRISE, the High Resolution Imaging Science Experiment instrument aboard NASA’s Mars Reconnaissance Orbiter, has imaged the Red Planet in unprecedented detail. But as a space-borne instrument, it can only resolve features about the size of a kitchen table and is not capable of microscopic imaging. If the table were set with plates or anything smaller, HiRISE wouldn’t know.
The Astrobiological Imager, on the other hand, could image the table from far away, then move closer to take detailed shots of the dinnerware, and finally zoom in to take high-resolution pictures of a single salt crystal left on one of the plates.

Like a field biologist on Earth
For the prototype, Fink and his team modified an $85 point-and-shoot camera with parts adding up to less than $100. Mounted on the camera lens is an adapter ring with a special lens that shortens the camera’s minimum focal distance so the camera can be placed directly on the object and still use its built-in autofocus.
“With the newest generation of digital cameras and their better lenses, you can get down to the limit of what is optically resolvable,” Fink says. “In the time since the prototype was assembled, imaging sensors have become smaller and have more densely packed pixels. With a 20-megapixel camera modified in this way, we could get down to a few hundred nanometers. In other words, the optical limit of a light microscope.”
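Fink’s “optical limit” figure can be sanity-checked with the classic Abbe diffraction formula, d = λ / (2·NA). The wavelength and numerical aperture below are illustrative assumptions; the article does not give specifications for the instrument.

```python
# Abbe diffraction limit: smallest resolvable feature for a lens of
# numerical aperture NA at wavelength lambda. Values are illustrative
# assumptions, not specifications of the Astrobiological Imager.

def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    return wavelength_nm / (2 * numerical_aperture)

# Green light (~550 nm) through a good objective (NA ~ 0.9):
d = abbe_limit_nm(550, 0.9)
print(f"~{d:.0f} nm")  # ~306 nm, i.e. "a few hundred nanometers"
```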
The idea is to enable a robotic rover exploring another planet with the imaging capabilities of a field biologist on Earth: a pair of eyes, binoculars, a hand lens, a dissecting microscope, and a light microscope.
“The idea is contextual imaging to subsequently zoom in on areas of interest in a nested fashion, until you hit the sweet spot, which you want to image microscopically. For example, to find microbial communities in rock formations.
“Mounted on a rover, our camera would be equipped with a rotating turret containing different adapter lenses. From an astrobiological point of view, you need the context first, so we’d use it in wide-angle mode to look around in search for promising targets, then drive to, say, a rock pile, image individual rocks, then go close to image patches potentially containing life, and then zoom in to produce a microscopic image of anything that might be living on or beneath that rock surface.”

Living under rocks
In this fashion, Fink and his team tested their Astrobiological Imager in the Mojave Desert, using it to photograph sandstone outcroppings and scan them for promising patches indicating microbe colonies on the rocks. Moving in closer, they used it to image the growth up close, revealing the close relationship between sand grains and biomass. The team was able to microscopically image a microbial colony living beneath a rock surface.
Equipped with a device that blocks stray light, the imager could use built-in LEDs emitting well-defined light and analyze the reflected light, which would allow researchers to perform a spectral analysis of the sample and get an idea of its chemical composition.
Fink says he is convinced there will be more multipurpose instruments like the Astrobiological Imager in upcoming space missions. The underlying technology of the adapter-based imaging capability is patented.
“In principle, our imager could be used on a mission like the OSIRIS-REx asteroid sample return mission, which is also led by the UA, but too far along obviously,” he says. “NASA is going toward multiuse instruments wherever possible, and they have to work more in tandem with each other. Our prototype fulfills those requirements.”
Researchers from Washington State University, the Desert Research Institute, Quaternary Surveys, and the Planetary Science Institute contributed to the study.
Source: University of Arizona
A technique to deliver HIV-fighting antibodies to mice has proven effective against a strain of HIV found in the real world.
The findings, available in the journal Nature Medicine, suggest that the delivery method might be effective in preventing vaginal transmission of HIV between humans.
“The method that we developed has now been validated in the most natural possible setting in a mouse,” says Nobel laureate David Baltimore, president emeritus and a biology professor at the California Institute of Technology (Caltech). “This procedure is extremely effective against a naturally transmitted strain and by an intravaginal infection route, which is a model of how HIV is transmitted in most of the infections that occur in the world.”

VIP boosts antibodies
The new delivery method—called Vectored ImmunoProphylaxis, or VIP for short—is not exactly a vaccine. Vaccines introduce substances such as antigens into the body to try to get the immune system to mount an appropriate attack—to generate antibodies that can block an infection or T cells that can attack infected cells.
In the case of VIP, a harmless virus is injected and delivers genes to the muscle tissue, instructing it to generate specific antibodies.
The researchers emphasize that the work was done in mice and that the leap from mice to humans is large. The team is now working with the Vaccine Research Center at the National Institutes of Health to begin clinical evaluation.
Additional researchers from Caltech and UCLA contributed to the study, which received support from the UCLA Center for AIDS Research, the National Institutes of Health, and the Caltech-UCLA Joint Center for Translational Medicine.
Seeking a solution to decoherence—the “noise” that prevents quantum processors from functioning properly—scientists have developed a strategy for linking quantum bits together into voting blocks, which significantly boosts their accuracy.
In a paper published in Nature Communications, the team found that its method results in at least a fivefold increase in the probability of reaching the correct answer when the processor solves the largest problems tested by the researchers, involving hundreds of qubits.
The team, led by Daniel Lidar—director of the USC-Lockheed Martin Quantum Computing Center at the University of Southern California Viterbi School of Engineering—ran its tests on the 512-quantum-bit D-Wave Two processor.
“We have demonstrated that our quantum annealing correction strategy significantly improves the success probability of the D-Wave Two processor on the benchmark problem of antiferromagnetic chains and are planning to next use it on computationally hard problems,” Lidar says. His team includes graduate student Kristen Pudenz and postdoctoral fellow Tameem Albash.
Lidar adds that all quantum information processors are expected to be highly susceptible to decoherence, so that error correction is viewed as an essential and inescapable part of quantum computing.
Quantum processors encode data in qubits, which can represent one and zero at the same time, unlike traditional bits, which encode either a one or a zero.
This property, called superposition, along with the ability of quantum states to “interfere” (cancel or reinforce each other like waves in a pond) and “tunnel” through energy barriers, is what may one day allow quantum processors to ultimately perform optimization calculations much faster than traditional processors.
Decoherence knocks qubits out of superposition, forcing them to behave as traditional bits and robbing them of their edge over traditional processors.
Pudenz, Albash, and Lidar developed and tested a strategy of grouping three qubits together into larger blocks of encoded qubits that can be decoded by a “majority vote.” This way, if decoherence affects one of the qubits and causes it to “flip” to the incorrect value, the other two qubits in the block ensure that the data is still correctly encoded and can be correctly decoded by out-voting the errant qubit.
These voting blocks of qubits are then magnetically tied to a fourth qubit in such a way that if any one flips, then all four must flip. In effect, it makes the whole block of four so massive that it’s difficult for one lonely qubit acting under the influence of decoherence to throw a wrench in the works.
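The error suppression from a three-qubit vote can be sketched with a classical repetition-code calculation. This toy model assumes independent flips and does not simulate the D-Wave hardware or the fourth “penalty” qubit, but it shows why the voted block outperforms a single qubit:

```python
# Three-copy majority vote: the decoded value is wrong only when two
# or three of the three copies flip. Assumes independent flips with
# probability p each; a toy model, not the D-Wave error process.

def majority_vote_error(p: float) -> float:
    """P(majority of 3 copies is wrong) for independent flip prob p."""
    return 3 * p**2 * (1 - p) + p**3

p = 0.10
print(f"single qubit: {p:.3f}, voted block: {majority_vote_error(p):.3f}")
# single qubit: 0.100, voted block: 0.028
```

In this idealized setting a 10 percent per-qubit flip rate drops to under 3 percent after decoding, because an error now requires at least two simultaneous flips.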
The US Army Research Office, the National Science Foundation, and the Lockheed Martin Corp. funded the research.
Pancreatic cancer is a particularly devastating disease. At least 94 percent of patients will die within five years, and in 2013 it was ranked as one of the top 10 deadliest cancers.
Routine screenings for breast, colon, and lung cancers have improved treatment and outcomes for patients with these diseases, largely because the cancer can be detected early.
But because little is known about how pancreatic cancer behaves, patients often receive a diagnosis when it’s already too late.
A new low-cost device could help pathologists diagnose pancreatic cancer earlier and faster. The prototype can perform the basic steps for processing a biopsy, relying on fluid transport instead of human hands to process the tissue.
“This new process is expected to help the pathologist make a more rapid diagnosis and be able to determine more accurately how invasive the cancer has become, leading to improved prognosis,” says Eric Seibel, research professor of mechanical engineering and director of the Human Photonics Laboratory at the University of Washington.
Seibel and colleagues presented their initial results this month at the SPIE Photonics West conference and recently filed a patent for this first-generation device and future technology advancements.

Simple to manufacture and use
The new instrumentation would essentially automate and streamline the manual, time-consuming process a pathology lab goes through to diagnose cancer.
Currently, a pathologist takes a biopsy tissue sample, then sends it to the lab where it’s cut into thin slices, stained and put on slides, then analyzed optically in 2D for abnormalities.
The new technology would process and analyze whole tissue biopsies for 3D imaging, which offers a more complete picture of the cellular makeup of a tumor, says Ronnie Das, a postdoctoral researcher in bioengineering who is the lead author on a related paper.
“As soon as you cut a piece of tissue, you lose information about it. If you can keep the original tissue biopsy intact, you can see the whole story of abnormal cell growth. You can also see connections, cell morphology, and structure as it looks in the body.”
The research team is building a thick, credit card-sized, flexible device out of silicone that allows a piece of tissue to pass through tiny channels and undergo a series of steps that replicate what happens on a much larger scale in a pathology lab.
The device harnesses the properties of microfluidics, which allows tissue to move and stop with ease through small channels without needing to apply a lot of external force. It also keeps clinicians from having to handle the tissue—instead, a tissue biopsy taken with a syringe needle could be deposited directly into the device to begin processing.
This is the first time material larger than a single-celled organism has successfully moved in a microfluidic device. This could have implications across the sciences in automating analyses that usually are done by humans.
Das and Chris Burfeind, an undergraduate student in mechanical engineering, designed the device to be simple to manufacture and use. They first built a mold using a petri dish and Teflon tubes, then poured a viscous silicone material into the mold. The result is a small, transparent instrument with seamless channels that are both curved and straight.
The researchers have used the instrument to process a tissue biopsy one step at a time, following the same steps as a pathology lab would. Next, they hope to combine all of the steps into a more robust device—including 3D imaging—then build and optimize it for use in a lab.
Future iterations of the device could include layers of channels that would allow more analyses on a piece of tissue without adding more bulk to the device.
The technology could be used overseas as an over-the-counter kit that would process biopsies, then send that information to pathologists who could look for signs of cancer from remote locations. Additionally, it could potentially reduce the time it takes to diagnose cancer to a matter of minutes.
The National Science Foundation Bioengineering division and the US Department of Education Graduate Assistance in Areas of National Need program supported the project.
Source: University of Washington
The post Device could diagnose pancreatic cancer in minutes appeared first on Futurity.
Engineers have placed tiny synthetic motors inside live human cells, propelled them with ultrasonic waves, and steered them magnetically.
The nanomotors, which are rocket-shaped metal particles, move around inside the cells, spinning and battering against the cell membrane.
“As these nanomotors move around and bump into structures inside the cells, the live cells show internal mechanical responses that no one has seen before,” says Tom Mallouk, a professor of materials chemistry and physics at Penn State. “This research is a vivid demonstration that it may be possible to use synthetic nanomotors to study cell biology in new ways.
“We might be able to use nanomotors to treat cancer and other diseases by mechanically manipulating cells from the inside. Nanomotors could perform intracellular surgery and deliver drugs noninvasively to living tissues.”
Up until now, Mallouk says, nanomotors have been studied only “in vitro” in a laboratory apparatus, not in living human cells.
Ultrasonic waves
Mallouk and colleagues first began experimenting with chemically powered nanomotors ten years ago. “Our first-generation motors required toxic fuels and they would not move in biological fluid, so we couldn’t study them in human cells,” Mallouk says. “That limitation was a serious problem.”
When Mallouk and French physicist Mauricio Hoyos discovered that nanomotors could be powered by ultrasonic waves, the door opened to studying the motors in living cells.
For their experiments, the team used HeLa cells, an immortal line of human cervical cancer cells that typically is used in research studies. These cells ingest the nanomotors, which then move around inside the cells, powered by ultrasonic waves.
At low ultrasonic power, Mallouk says, the nanomotors have little effect on the cells. But when the power is increased, the nanomotors spring into action, moving around and bumping into organelles—structures within a cell that perform specific functions.
The nanomotors can act as egg beaters to essentially homogenize the cell’s contents, or they can act as battering rams to actually puncture the cell membrane.
Moving independently of each other
While ultrasound pulses control whether the nanomotors spin around or whether they move forward, the researchers can control the motors even further by steering them, using magnetic forces. Mallouk and his colleagues also found that the nanomotors can move autonomously—independently of one another—an ability that is important for future applications.
“Autonomous motion might help nanomotors selectively destroy the cells that engulf them,” Mallouk says. “If you want these motors to seek out and destroy cancer cells, for example, it’s better to have them move independently. You don’t want a whole mass of them going in one direction.”
The ability of nanomotors to affect living cells holds promise for medicine, Mallouk says. “One dream application of ours is Fantastic Voyage-style medicine, where nanomotors would cruise around inside the body, communicating with each other and performing various kinds of diagnoses and therapy. There are lots of applications for controlling particles on this small scale, and understanding how it works is what’s driving us.”
The National Science Foundation, the National Institutes of Health, the Huck Innovative and Transformative Seed Fund, and Penn State funded the work. The researchers’ findings are published in Angewandte Chemie International Edition.
Source: Penn State
Lyme disease is often marked by a bull’s-eye rash on the skin. But it’s hard to tell from the rash whether the infection is a recent one, making it difficult to detect the disease early, when antibiotic treatment is most effective.
A new computer model captures the interactions between disease-causing bacteria and the host immune response that affect the appearance of a rash and the spread of infection.
“Our findings are important because they connect how the rash looks with the behavior of the bacteria in our body,” says co-author Charles Wolgemuth, associate professor of physics and molecular cellular biology at the University of Arizona.
As reported in Biophysical Journal, Wolgemuth and graduate student Dhruv Vig developed a fairly simple mathematical model that accounts for the growth and appearance of a Lyme disease rash and might be used to predict the density of the disease-causing bacteria across the rash as it spreads over time.
In many cases, patients with Lyme disease develop a rash with a bull’s-eye appearance. The model reveals that in these cases, the rash begins small and uniform. Activation of the immune response is strongest at the center of the rash and clears most, but not all, of the bacteria from the center within about one week.
However, bacteria at the edge of the rash continue to spread outward, further activating the immune response away from the edge. Therefore the rash grows, but the center becomes less inflamed. As time progresses, though, the bacteria resurge at the center, leading to the characteristic bull’s-eye pattern.
No one-size-fits-all treatment
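The qualitative dynamics described above, bacteria spreading outward while the immune response suppresses them behind the advancing front, can be illustrated with a toy one-dimensional reaction-diffusion simulation. This is not the model from the Biophysical Journal paper; the equations and every parameter below are illustrative assumptions only:

```python
import numpy as np

def simulate_rash(nx=200, nt=3000, dx=0.5, dt=0.01,
                  Db=0.5, rb=1.0, kill=2.0, prod=1.5, decay=0.1):
    """Toy 1D reaction-diffusion sketch (illustrative, not the published model):
        db/dt = Db * d2b/dx2 + rb * b * (1 - b) - kill * i * b   (bacteria)
        di/dt = prod * b - decay * i                             (immune response)
    Bacteria diffuse outward and grow logistically; the immune response
    builds wherever bacteria linger and suppresses them, so activity
    concentrates at the advancing edge -- a spreading ring.
    """
    b = np.zeros(nx)                      # bacterial density
    i = np.zeros(nx)                      # local immune activation
    b[nx // 2 - 2:nx // 2 + 3] = 0.5      # small inoculum at the bite site
    for _ in range(nt):
        # second spatial derivative via a periodic finite-difference stencil
        lap = (np.roll(b, 1) - 2 * b + np.roll(b, -1)) / dx**2
        b = b + dt * (Db * lap + rb * b * (1 - b) - kill * i * b)
        i = i + dt * (prod * b - decay * i)
        b = np.clip(b, 0.0, None)         # densities cannot go negative
    return b, i

b, i = simulate_rash()
print(f"bacteria at center: {b[len(b) // 2]:.3f}, peak elsewhere: {b.max():.3f}")
```

Plotting `b` at successive time points would show an expanding front with a partially cleared, periodically resurging interior; the published model couples additional components and is matched to actual rash progression rather than these made-up parameters.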
By revealing that the bacteria and immune cell populations change as a rash progresses, the model may help guide Lyme disease treatment.
“The model that we have developed can be used to predict how the bacteria move through our bodies and how they are affected by therapeutics,” Wolgemuth says.
To that end, the researchers simulated the progression of different rash types over the course of antibiotic treatment. For all types of Lyme disease rashes, bacteria clear from the skin within roughly the first week; however, how quickly the rash itself disappeared varied with the type of rash the patient presented with.
For example, while bull’s eye rashes resolved within a week of treatment, uniform rashes tended to be present even after four weeks, likely due to prolonged inflammation. Such differences suggest that there may not be a one-size-fits-all treatment regimen for resolving Lyme disease and its effects on the body.
There are a number of similarities between the bacterium that causes Lyme disease and the bacterium that causes syphilis, Wolgemuth says. “Therefore, it is likely that this model will also be applicable to understanding syphilis, as well as potentially other bacterial infections.”
Source: University of Arizona
The post To predict how Lyme disease spreads, look closely at the rash appeared first on Futurity.