Futurity.org

Research news from top universities.

After layoffs, 1 in 5 Americans still can’t find a job

Fri, 09/26/2014 - 08:46

A new report about the lingering effects of the Great Recession finds that about 20 percent of Americans who lost their job during the last five years are still unemployed and looking for work.

Approximately half of the laid-off workers who found work were paid less in their new positions; one in four say their new job was only temporary.

“While the worst effects of the Great Recession are over for most Americans, the brutal realities of diminished living standards endure for the three million American workers who remain jobless years after they were laid off,” says Carl Van Horn, a professor at Rutgers who co-authored the study with Professor Cliff Zukin.

“These long-term unemployed workers have been left behind to fend for themselves as they struggle to pull their lives back together.”

As of last August, 3 million Americans—nearly one in three unemployed workers—had been unemployed for more than six months, and more than 2 million had been out of work for more than a year, the researchers say.

While the percentage of the long-term unemployed (workers who have been unemployed for more than six months) has declined from 46 percent in 2010, it is still above the 26 percent level experienced in the worst previous recession in 1983.

Job training

The national study found that only one in five of the long-term unemployed received help from a government agency when looking for a job; only 22 percent enrolled in a training program to develop skills for a new job; and 60 percent received no government assistance beyond unemployment benefits.

Nearly two-thirds of Americans support increasing funds for long-term education and training programs, and greater spending on roads and highways in order to assist unemployed workers.

For the survey, the Heldrich Center interviewed a representative sample of 1,153 Americans, including 394 unemployed workers looking for work, 389 Americans who have been unemployed for more than six months or who were unemployed for a period of more than six months at some point in the last five years, and 463 individuals who currently have jobs.

Other findings
  • More than seven in 10 long-term unemployed say they have less in savings and income than they did five years ago.
  • More than eight in 10 of the long-term unemployed rate their personal financial situation negatively as only fair or poor.
  • More than six in 10 unemployed and long-term unemployed say they experienced stress in family relationships and close friendships during their time without a job.
  • Fifty-five percent of the long-term unemployed say they will need to retire later than planned because of the recession, while 5 percent say the weak economy forced them into early retirement.
  • Nearly half of the long-term unemployed say it will take three to 10 years for their families to recover financially. Another one in five say it will take longer than that or that they will never recover.

Source: Rutgers

How birds fly through tight spots and don’t crash

Fri, 09/26/2014 - 08:02

To design a better drone, scientists could learn a thing or two from birds’ ability to maneuver through narrow spaces.

Budgerigars can fly through gaps almost as narrow as their outstretched wingspan rather than taking evasive measures such as tucking in their wings.

Previous research has shown that humans unnecessarily turn their shoulders to pass through doorways narrower than 130 percent of their body width. Birds are far more precise.

“We were quite surprised by the birds’ accuracy—they can judge their wingspan within 106 percent of their width when it comes to flying through gaps,” says Ingo Schiffner, a researcher at the Queensland Brain Institute at the University of Queensland.

“When you think about the cluttered environments they fly through, such as forests, they need to develop this level of accuracy.

“When they encounter a narrow gap, they either lift their wings up vertically or tuck them in completely, minimizing their width to that of their torso,” he says.

Can it work for drones?

The researchers wanted to know precisely how birds judge gaps between obstacles before engaging in evasive maneuvers.

In testing, budgies flew down corridors with variable widths between obstacles, and their flights were recorded with high-speed cameras for analysis.

The research, published in the journal Frontiers in Zoology, will be applied to robotics work at the Queensland Brain Institute’s Neuroscience of Vision and Aerial Robotics laboratory, Schiffner says.

“If we can understand how birds avoid obstacles, we might be able to develop algorithms for aircraft to avoid obstacles as well.

Bird brains

“For instance, urban drones used for deliveries would need to fly through complex environments such as tight alleyways or between trees at the front of homes.

“For us, it isn’t the ability to tuck in wings that is of interest if we are talking about fixed-wing or rotor aircraft, but whether we can replicate what happens neurologically in birds as they navigate.”

To judge airspeed, budgies use optic flow—the rate at which visual cues pass by the eyes. They don’t see three-dimensionally like humans, due to the lateral placement of their eyes and lack of binocular overlap.

“Seeing in three dimensions requires two eyes or cameras with sufficient visual overlap, so using optic flow with just one camera would be very useful, saving weight and keeping autonomous vehicles small.”
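
As a back-of-the-envelope illustration (standard optic-flow geometry, not a result from this study): for an observer moving at speed v, a surface directly abeam at distance d sweeps past at an angular rate

$$\omega = \frac{v}{d}.$$

Balancing this rate between the left and right halves of the visual field keeps a flying agent centered in a corridor, and holding it constant as the walls close in forces a slow-down, all without a depth map. This is one reason optic flow is attractive for lightweight, single-camera drones.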

Source: University of Queensland

Memories, not emotions, fade for people with Alzheimer’s

Fri, 09/26/2014 - 07:35

People with Alzheimer’s disease can feel lingering emotions after an event even though they may not remember it.

Researchers showed individuals with Alzheimer’s disease clips of sad and happy movies. Despite not being able to remember the movies, they experienced sustained states of sadness and happiness.

“This confirms that the emotional life of an Alzheimer’s patient is alive and well,” says lead author Edmarie Guzmán-Vélez, a doctoral student in clinical psychology at the University of Iowa.

“Our findings should empower caregivers by showing them that their actions toward patients really do matter,” Guzmán-Vélez says.

Guzmán-Vélez conducted the study with Daniel Tranel, professor of neurology and psychology, and Justin Feinstein, assistant professor at the University of Tulsa and the Laureate Institute for Brain Research.

Earlier research predicted the importance of attending to the emotional needs of people with Alzheimer’s, which is expected to affect as many as 16 million people in the United States by 2050 and cost an estimated $1.2 trillion.

“It’s extremely important to see data that support our previous prediction,” says Tranel. “Edmarie’s research has immediate implications for how we treat patients and how we teach caregivers.”

Emotions linger

Despite the considerable amount of research aimed at finding new treatments for Alzheimer’s, no drug has succeeded at either preventing or substantially influencing the disease’s progression. Results of the new study highlight the need to develop new caregiving techniques aimed at improving the well-being and minimizing the suffering for the millions of individuals afflicted with Alzheimer’s.

For the study, published in Cognitive and Behavioral Neurology, 17 patients with Alzheimer’s disease and 17 healthy comparison participants watched 20 minutes of sad and then happy movies. The movie clips triggered the expected emotion: sorrow and tears during the sad films and laughter during the happy ones.

About five minutes after watching the movies, the participants were tested to see if they could recall what they had just seen. As expected, the patients with Alzheimer’s disease retained significantly less information about both the sad and happy films than the healthy people. In fact, four patients were unable to recall any factual information about the films, and one patient didn’t even remember watching any movies.

Before and after seeing the films, participants answered questions to gauge their feelings. Patients with Alzheimer’s disease reported elevated levels of either sadness or happiness for up to 30 minutes after viewing the films despite having little or no recollection of the movies.

Visit, joke, dance

Surprisingly, the less the patients remembered about the films, the longer their sadness lasted. While sadness tended to last a little longer than happiness, both emotions far outlasted the memory of the films.

The fact that forgotten events can continue to exert a profound influence on a patient’s emotional life highlights the need for caregivers to avoid causing negative feelings and to try to induce positive ones.

“Our findings should empower caregivers by showing them that their actions toward patients really do matter,” Guzmán-Vélez says. “Frequent visits and social interactions, exercise, music, dance, jokes, and serving patients their favorite foods are all simple things that can have a lasting emotional impact on a patient’s quality of life and subjective well-being.”

The National Institute of Neurological Disorders and Stroke, a National Science Foundation Graduate Research Fellowship awarded to Guzmán-Vélez, Kiwanis International, the Fraternal Order of Eagles, an American Psychological Association of Graduate Students Basic Psychological Research Grant, and the William K. Warren Foundation supported the research.

Source: University of Iowa

Half of Earth’s water may be older than the sun

Fri, 09/26/2014 - 07:16

Up to half of the water on Earth is likely older than the solar system itself, according to new research that helps to settle a debate about just how far back in galactic history our planet and our solar system’s water formed.

The debate centers around several questions: were the molecules in comet ices and terrestrial oceans born with the system itself—in the planet-forming disk of dust and gas that circled the young sun 4.6 billion years ago?

Or did the water originate even earlier—in the cold, ancient molecular cloud that spawned the sun and that planet-forming disk?

Between 30 and 50 percent of Earth’s water came from the molecular cloud, says Ilse Cleeves, a doctoral student in astronomy at the University of Michigan.

To arrive at that estimate, Cleeves and Ted Bergin, professor of astronomy, simulated the chemistry that went on as our solar system formed. They focused on the ratio of two slightly different varieties of water—the common kind and a heavier version. Today, comets and Earth’s oceans hold particular ratios of heavy water—higher ratios than the sun contains.

“Chemistry tells us that Earth received a contribution of water from some source that was very cold—only tens of degrees above absolute zero, while the sun being substantially hotter has erased this deuterium, or heavy water, fingerprint,” Bergin says.
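
For context, commonly quoted reference values (not stated in the article) put the ocean’s deuterium-to-hydrogen ratio several times above the protosolar value:

$$\mathrm{(D/H)_{ocean}} \approx 1.6\times10^{-4}, \qquad \mathrm{(D/H)_{protosolar}} \approx 2\times10^{-5}.$$

That enrichment is the “fingerprint” Bergin describes: it points to water chemistry that happened at very low temperatures.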

How common is water?

To start their solar system simulation, the scientists wound back the clock and zeroed out the heavy water. They hit “go” and waited to see if eons of solar system formation could lead to the ratios they see today on Earth and in comets.

“We let the chemistry evolve for a million years—the typical lifetime of a planet-forming disk—and we found that chemical processes in the disk were inefficient at making heavy water throughout the solar system,” Cleeves says. “What this implies is if the planetary disk didn’t make the water, it inherited it. Consequently, some fraction of the water in our solar system predates the sun.”

All life on Earth depends on water. Understanding when and where it came from can help scientists estimate how common water might be throughout the galaxy.

“The implications of these findings are pretty exciting,” Cleeves says. “If water formation had been a local process that occurs in individual stellar systems, the amount of water and other important chemical ingredients necessary for the formation of life might vary from system to system.

“But because some of the chemically rich ices from the molecular cloud are directly inherited, young planetary systems have access to these important ingredients.

“Based on our simulations and our growing astronomical understanding, the formation of water from hydrogen and oxygen atoms is a ubiquitous component of the early stages of stellar birth,” Bergin says.

“It is this water, which we know from astronomical observations forms at only 10 degrees above absolute zero before the birth of the star, that is provided to nascent stellar systems everywhere.”

The findings are reported in the journal Science.

Source: University of Michigan

Computer recreates powerful solar flares

Fri, 09/26/2014 - 06:47

Physicists have used computers to model solar explosions and hope the work will lead to better ways to predict flares, which can disable power grids and communications on Earth.

The computer model demonstrates that the shorter the interval between two explosions in the solar atmosphere, the more likely it is that the second flare will be stronger than the first one.

“The agreement with measurements from satellites is striking,” write the researchers from ETH Zurich in the journal Nature Communications.

Hans Jürgen Herrmann, a professor at the Institute for Building Materials, says solar flares were not the original focus of the work. A theoretical physicist and expert in computer physics, Herrmann developed a method to examine phenomena from a range of diverse fields. Similar patterns to those in solar flares can also be found in earthquakes, avalanches, or the stock market.

“Solar explosions do not, of course, have any connection with stock exchange rates,” says Herrmann. Nevertheless, they do behave in a similar way: they can interlock until they reach a certain threshold value before discharging.

A system therefore does not continuously release the mass or energy fed into it, but only does so in bursts, Herrmann explains.

Pile of sand

Experts call this self-organized criticality. One example of this is a pile of sand built up by a trickle of sand grains. The pile continues to grow until, every now and then, an avalanche is triggered. Smaller landslides occur more frequently than larger ones.

By organizing itself around a so-called critical state, the pile maintains its original height when viewed over an extended period of time.
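
A minimal sketch of this sandpile picture in Python (the classic Bak-Tang-Wiesenfeld toy model, shown here to illustrate self-organized criticality in general, not the researchers’ solar-flare model): grains are dropped one at a time, a site topples when it holds four grains, and the resulting avalanche sizes follow the small-events-common, large-events-rare pattern described above.

```python
# Bak-Tang-Wiesenfeld sandpile: a toy model of self-organized criticality.
import random
from collections import Counter

N = 20                          # grid size
grid = [[0] * N for _ in range(N)]
avalanche_sizes = Counter()

def drop_grain():
    """Add one grain at a random site, topple until stable, return avalanche size."""
    x, y = random.randrange(N), random.randrange(N)
    grid[x][y] += 1
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    size = 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4         # topple: send one grain to each neighbor
        size += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < N and 0 <= nj < N:   # grains at the edge fall off
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return size

for _ in range(100_000):
    s = drop_grain()
    if s:
        avalanche_sizes[s] += 1

# Small avalanches vastly outnumber large ones: a power-law-like tail.
for s in sorted(avalanche_sizes)[:10]:
    print(s, avalanche_sizes[s])
```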

In the case of solar flares, the build-up of magnetic energy is emitted in sudden bursts. The sun consists of hot plasma made of electrons and ions. Magnetic field lines extend from the solar surface all the way into the corona. Moving and twisting bundles of field lines form magnetic flux tubes.

When two tubes intersect, they merge (physicists call this reconnection), causing an explosion that gives off large quantities of heat and electromagnetic radiation. The affected solar area lights up as a solar flare.

The radiation extends across the entire electromagnetic spectrum, from radio waves and visible light to X-rays and gamma rays.

Observations suggest that the solar flares’ size distributions show a certain degree of regularity, statistically speaking.

“Events can be arbitrarily large, but they are also arbitrarily rare,” says Herrmann. In mathematical terms, it is a scale-free energy distribution that follows a power law.
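
In symbols (a standard way of writing such a scale-free law; the article does not give the exponent), the number of flares with energy near E behaves as

$$\frac{dN}{dE} \propto E^{-\alpha}, \qquad \alpha > 0,$$

so rescaling E multiplies the counts by a constant factor that does not depend on E: there is no characteristic event size, only ever-larger, ever-rarer events.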

A turbulent system

Conventional computer models have been able to qualitatively reconstruct this statistical size distribution, but have been unable to make any quantitative predictions. Any model relying on the intersection of flux tubes and therefore based on self-organized criticality neglects one important fact, Herrmann points out: “the system is turbulent.”

The magnetic field lines in the corona do not move in a random pattern but are rooted in the photosphere’s turbulent plasma, whose behavior is described in terms of fluid dynamics—the science of the movement of fluids and gases.

However, calculations based solely on plasma turbulence were also unable to reproduce the occurrence of solar flares in full.

Herrmann and his team combined self-organized criticality with fluid dynamics and reached a breakthrough.

“We have been able to reproduce the overall picture of how solar flares occur,” the researcher says.

Using a supercomputer, the team was able to show that the model consistently generated correct results even when changing details such as the number of flux tubes or the energy of the plasma. Unlike earlier attempts by other researchers, their results corresponded with the observations in a quantitative sense as well.

Their conclusion is that “turbulence and interaction between the magnetic flux tubes are essential components which control the occurrence of solar flares.” Demonstrating such temporal-energetic correspondences is the first step towards a prediction model.

However, Herrmann cautions, “our predictions are statistical.” In other words, they can only predict probabilities, while the prediction of individual events remains impossible.

Source: ETH Zurich

Biochar changes how water flows through soil

Thu, 09/25/2014 - 13:45

New research could help settle the debate about one of biochar’s biggest benefits—its seemingly contradictory ability to make clay soils drain faster and sandy soils drain slower.

As more gardeners and farmers add ground charcoal, or biochar, to soil to both boost crop yields and counter global climate change, the study offers the first detailed explanation for this mystery.

“Understanding the controls on water movement through biochar-amended soils is critical to explaining other frequently reported benefits of biochar, such as nutrient retention, carbon sequestration, and reduced greenhouse gas emissions,” says lead author Rebecca Barnes, an assistant professor of environmental science at Colorado College, who began the research as a postdoctoral research associate at Rice University.

Biochar can be produced from waste wood, manure, or leaves, and its popularity among DIY types and gardening buffs took off after archaeological studies found that biochar added to soils in the Amazon more than 1,000 years ago was still improving the water- and nutrient-holding abilities of those poor soils today.

Studies over the past decade have found that biochar soil amendments can either increase or decrease the amount of water that soil holds, but it has been tough for experts to explain why this occurs, due partly to conflicting results from many different field tests.

Comparing ‘apples to apples’

In the new study, biogeochemists at Rice conducted side-by-side tests of the water-holding ability of three soil types—sand, clay, and topsoil—both with and without added biochar.

The biochar used in the experiments, derived from Texas mesquite wood, was prepared to exacting standards in the lab of Rice geochemist Caroline Masiello, a study coauthor, to ensure comparable results across soil types.

“Not all biochar is created equal, and one of the important lessons of recent studies is that the hydrological properties of biochar can vary widely, depending on the temperature and time in the reactor,” Masiello says.

“It’s important to use the right recipe for the biochar that you want to make, and the differences can be subtle. For scientific studies, it is critical to make sure you’re comparing apples to apples.”

Pathways for water

Barnes says the team chose to make its comparison with simple, relatively homogenous soil materials to compare results to established hydrologic models that relate water flow to a soil’s physical properties, like bulk density and porosity.

“This is what helped us explain the seeming disconnect that people have noted when amending soils with biochar,” she says. “Biochar is light and highly porous. When biochar is added to clay, it makes the soil less dense and it increases hydraulic conductivity, which makes intuitive sense.

“Adding biochar to sand also makes it less dense, so one would expect that soil to drain more quickly as well; but in fact, researchers have found that biochar-amended sand holds water longer.”

Study coauthor Brandon Dugan, assistant professor of Earth science at Rice, says, “We hypothesize that this is likely due to the presence of two flow paths for water through soil-biochar mixtures. One pathway is between the soil and biochar grains, and a second pathway is water moving through the biochar itself.”

Barnes says the highly porous structure of biochar makes each of these pathways more tortuous than the pathway that water would take through sand alone. Moreover, the surface chemistry of biochar—both on external surfaces and inside pores—is likely to promote absorption and further slow the movement of water.

“By adding our results to the growing body of literature, we show that when biochar is added to sand or other coarse-grained soils, there is a simultaneous decrease in bulk density and hydraulic conductivity, as opposed to the expected result of decreased bulk density correlated with increased hydraulic conductivity that has been observed for other soil types,” Barnes says.
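
For readers unfamiliar with the term, hydraulic conductivity is the proportionality constant K in Darcy’s law (standard soil hydrology, not specific to this study):

$$q = -K \, \frac{dh}{dz},$$

where q is the water flux and dh/dz is the hydraulic head gradient. At the same gradient, a lower K means slower drainage, which is what the biochar-amended sand showed despite its lower bulk density.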

Study coauthors include co-first author Morgan Gallagher, a former Rice graduate student who is now a postdoctoral researcher at Rice and an associate in research at Duke University’s Center for Global Change, and Rice graduate student Zuolin Liu. The findings appear in PLOS ONE.

Source: Rice University

How battlefield burns lead to abnormal bone growth

Thu, 09/25/2014 - 12:52

A new anti-inflammatory treatment may help prevent what has become one of the war-defining injuries for today’s troops.

Soldiers burned by high-velocity explosive devices are at risk for heterotopic ossification (HO), in which bone develops in places it shouldn’t be—outside the skeleton, in joints, muscles, and tendons.

The painful condition can make it difficult to move and function, and it commonly affects patients who experience burns, automobile accidents, orthopedic surgery, blast injuries, and other combat wounds.

New research shows how and why HO develops and reveals a potential method to interrupt the cell signaling that leads to abnormal bone growth.

Using tissue from burn patients and a mouse model of trauma-induced HO, researchers analyzed the body’s response to burn injury. They confirmed the link between burn injury and the activity of ATP, a primary energy source for cells that, when elevated, can drive reactions that are normally impossible under biological conditions—such as the formation of ectopic, or abnormal, bone.

By using an apyrase, a compound capable of breaking down ATP, researchers were able to reduce heterotopic ossification, according to study findings published in Science Translational Medicine.

Additional study will be required to develop an HO prevention therapy for humans that can be tested for safety and effectiveness.

Beyond burns

“These findings are exciting and suggest that localized anti-inflammatory treatment may help prevent future development of ectopic bone, even at sites distant from the trauma,” says lead author Benjamin Levi, a plastic and reconstructive surgeon at the University of Michigan.

Levi is director of the Burn/Wound and Regenerative Medicine Laboratory at the University of Michigan where research teams are focused on prevention and early diagnosis of HO before it’s visible on x-rays and CT scans.

In addition to burn and trauma patients who are at risk for HO, more than one million patients a year undergo joint replacement in the United States and 20 percent of these patients will develop HO.

Authors of an accompanying editorial suggest that the study of apyrase treatment for heterotopic ossification go beyond high-risk burn patients. Additional research may reveal whether the treatment would help those suffering traumatic brain, orthopedic, and spinal cord injuries.

The debilitating bone condition has complicated more than 60 percent of severe wartime orthopedic casualties during the Afghanistan and Iraqi conflicts, impacting more than 1,200 veterans.

Other researchers from University of Michigan and from the Naval Medical Research Center, University of Texas Medical Branch, and Massachusetts General Hospital contributed to the study.

The research was funded in part by a Plastic Surgery Foundation National Endowment Award.

Source: University of Michigan

Does video evidence make bias stronger?

Thu, 09/25/2014 - 12:49

Where people look as they watch video evidence varies wildly and has a big impact on bias in legal punishment decisions, researchers report.

The study raises questions about why people fail to be objective when confronted with video evidence.

In a series of three experiments, participants who viewed videotaped altercations formed biased punishment decisions about a defendant the more they looked at him.

Participants punished a defendant more severely if they did not identify with his social group and punished him less severely if they felt connected to the group—but only when they looked at the defendant often.

“Our findings show that video evidence isn’t evaluated objectively—in fact, it may even spur our existing biases,” explains study author Emily Balcetis, an assistant professor in New York University’s psychology department.

“With the proliferation of surveillance footage and other video evidence, coupled with the legal system’s blind faith in information we can see with our own eyes, we need to proceed with caution. Video evidence is seductive, but it won’t necessarily help our understanding of cases, especially when it’s unclear who is at fault.”

The research appears in the Journal of Experimental Psychology: General.

Identification with the police

In the first pair of experiments, which included 152 participants, researchers gauged the participants’ identification with police. This was done by reading a series of statements (e.g., “Your background is similar to that of most police officers”), then measuring, on a seven-point scale, the participants’ level of agreement or disagreement with the statement (1=strong disagreement; 7=strong agreement).

The participants then watched a 45-second muted video depicting an actual altercation between a police officer and a civilian, in which officer wrongdoing was ambiguous.

In it, the officer attempted to handcuff the resisting civilian. The officer and civilian struggled before the officer pushed the civilian against his cruiser. The civilian bit the officer’s arm, after which the officer hit the back of the civilian’s head. In order to determine if the altercation was indeed seen as ambiguous, a separate group of participants viewed the tape and evaluated it as such.

During these viewings, the researchers also monitored the participants’ eye movements without their awareness. Using eye-tracking technology, they gauged how many times participants fixated their gaze on the police officer.

After the viewings, researchers examined how the participants interpreted what they saw. Participants reported on the legal facts of the case, which would incriminate the police officer.

Deciding on punishment

In addition, participants imagined they were jurors in a court case in which the police officer was on trial for these actions and indicated the likelihood that they would require that the officer be punished and pay a fine (1=extremely unlikely, 6=extremely likely).

Their results showed that participants’ identification with the police officer influenced punishment decisions only if they focused their visual attention on the law-enforcement official.

Specifically, their results showed that frequently looking at the police officer exacerbated discrepancies in punishment decisions among participants.

For instance, among participants who looked frequently at the police officer, the degree to which they identified with his social group predicted biased punishment decisions. Participants punished the officer far more severely if they did not identify with his group than if they did. By contrast, among participants who looked less often at the officer, group identification did not affect punishment decisions. Attention shifted punishment decisions by changing participants’ interpretations of the legal facts of the case.

In a second experiment, the same participants viewed another video depicting an altercation between a police officer and a civilian—one in which culpability, verified by an independent panel of participants, was ambiguous.

In it, the police officer, wearing a gun and using relatively minimal force, spoke with a civilian in a subway stairwell. The civilian flinched, moving toward the officer, who wrestled him to the ground.

Here, some participants were asked to focus their attention on the police officer while others were asked to focus their attention on the civilian—a tracking of participants’ eye movements confirmed they complied with these instructions.

The results echoed those of the first experiment. Those who followed directions to pay closer attention to the police officer rather than to the civilian saw his actions as more incriminating and sought to punish him more severely if they felt little social connection to police officers. In other words, close attention to the videotape enhanced participants’ pre-existing biases of police rather than diminishing them.

“One might think that the more closely you look at videotape, the more likely you are to view its contents objectively,” says Balcetis. “But that is not the case—in fact, the more you look, the more you find evidence that confirms your assumptions about a social group, in this case police.”

Blue and green groups

In order to rule out the possibility that these findings apply only to police, the researchers conducted another experiment with a new set of participants.

This time, however, they watched a videotape of an orchestrated fight between two college-aged white men: one wearing a blue shirt and another wearing a green shirt. Prior to viewing the videotape, participants answered personality questions, and the experimenter told them their answers seemed more similar to either the blue group or the green group.

Consistent with the first two experiments, the results showed that close visual attention enhanced biased interpretations of what transpired and influenced punishment decisions.

For instance, those who fixated more on the outgroup member (blue or green) were more likely to recommend stiffer punishment than those who looked elsewhere. Again, attention shifted punishment decisions by changing the accuracy of participants’ memory of the behaviors that the outgroup member performed.

“We think video evidence is a silver bullet for getting at truth, but it’s not,” observes lead author Yael Granot, a doctoral candidate at NYU.

“These results suggest that the way in which people view video evidence may exaggerate an already pervasive ‘us versus them’ divide in the American legal system.”

The study’s other authors included NYU undergraduate Kristin Schneider and Tom Tyler, a professor at Yale Law School.

Source: NYU

The blue pixels on screens can now live longer

Thu, 09/25/2014 - 09:04

Researchers have extended the lifetime of blue organic light emitting diodes by a factor of 10.

This advance could lead to longer battery life in smartphones and lower power consumption for large-screen televisions.

Blue OLEDs produce one of the trio of colors used in OLED displays such as smartphone screens and high-end TVs. The improvement means that the efficiencies of blue OLEDs in these devices could jump from about 5 percent to 20 percent or better in the near future.

OLEDs are the latest and greatest in television technology, allowing screens to be extremely thin and even curved, with little blurring of moving objects and a wider range of viewing angles.

In these “RGB” displays, each pixel contains red, green, and blue modules that shine at different relative brightness to produce any color desired.

But not all OLEDs are created equal. Phosphorescent OLEDs, also known as PHOLEDs, produce light through a mechanism that is four times more efficient than fluorescent OLEDs. Green and red PHOLEDs are already used in these new TVs—as well as in Samsung and LG smartphones—but the blues are fluorescent.

“Having a blue phosphorescent pixel is an important challenge, but they haven’t lived long enough,” says Stephen Forrest, professor of engineering at the University of Michigan.

He and his colleagues demonstrated the first PHOLED in 1998 and the first blue PHOLED in 2001.

Now, with their new results, Forrest and his team hope that is about to change. Efficient blues will make a significant dent in power consumption for large-screen TVs and extend battery life in smartphones.

The lifetime improvement will also help prevent blue from dimming relative to red and green over time.

“In a display, it would be very noticeable to your eye,” Forrest says.

Why blue works differently

In collaboration with researchers at Universal Display Corp. in 2008, Forrest’s group proposed an explanation for why blue PHOLEDs’ lifetimes are short. The team showed that the high energies required to produce blue light are more damaging when the brightness is increased to levels needed for displays or lighting.

This is because a concentration of energy on one molecule can combine with that on a neighbor, and the total energy is enough to break up one of the molecules. It’s less of a problem in green- and red-emitting PHOLEDs because it takes lower energies to make these colors of light.

“That early work showed why the blue PHOLED lifetime is short, but it didn’t provide a viable strategy for increasing the lifetime,” says Yifan Zhang, a recent graduate from Forrest’s group and first author on the new study. “We tried to use this understanding to design a new type of blue PHOLED.”

Spreading out the ‘bad energy’

The solution, demonstrated by Zhang and Jae Sang Lee, a current doctoral student in Forrest’s group, spreads out the light-producing energy so that molecules aren’t as likely to experience the bad synergy that destroys them.

The blue PHOLED consisted of a thin film of light-emitting material sandwiched between two conductive layers—one for electrons and one for holes, the positively charged spaces that represent the absence of an electron. Light is produced when electrons and holes meet on the light-emitting molecules.

If the light-emitting molecules are evenly distributed, the energetic electron-hole pairs tend to accumulate near the layer that conducts electrons, causing damaging energy transfers.

Instead, the team arranged the molecules so that they were concentrated near the hole-conducting layer and sparser toward the electron conductor. This drew electrons further into the material, spreading out the energy.

The new distribution alone extended the lifetime of the blue PHOLED by three times. Then, the team split their design into two layers, halving the concentration of light-emitting molecules in each layer. This configuration increased the lifetime tenfold.

This research appears in Nature Communications. Forrest is also a professor of electrical engineering, physics, and materials science and engineering.

Universal Display supported the work and has licensed it for commercialization.

Source: University of Michigan

Can graphene transform an apple into a doughnut?

Thu, 09/25/2014 - 08:52

More than 50 years ago, a Russian physicist predicted that it’s possible to transform one topology into another. The phenomenon is known as the Lifshitz transition, and now researchers have used a double layer of graphene to demonstrate that it is indeed possible.

“We were able to prove the existence of a Lifshitz transition,” says Anastasia Varlet, a doctoral candidate at ETH Zurich who was part of the research team that made the discovery.

The physicist uses the example of a coffee cup and a water glass to explain what this means. A cup has a handle, which encloses a hole. Using mathematical functions, it is possible to transform a geometrically defined object from the form of a cup to that of a doughnut, given that a doughnut also features a hole.

A glass, on the other hand, cannot be reshaped into a doughnut because it does not have a hole. Mathematically speaking, a cup has the same topology as a doughnut.

“A glass is topologically the same as an apple,” explains Professor Klaus Ensslin, who led the research detailed in two papers published in Physical Review Letters.

Changing the topology of an object can improve its usefulness, for example, by transforming a beaker into a cup with a handle. In reality, this should not be possible at all; nevertheless, the researchers have achieved exactly that by using a double layer of graphene.

The Lifshitz transition does not apply to objects in our normal environment; rather, the physicists are studying the abstract topology of the surfaces that describe the energy states of electrons in electronic materials. In particular, the researchers examined surfaces of constant energy, as these determine the conductivity of the material and its application potential.

Three islands in a lake

Ensslin makes another comparison to demonstrate the mathematical concept behind these energy surfaces: “Imagine a hilly landscape in which the valleys fill up with electrical charges, just as the water level rises between the hills when it rains.”

This is how a conductive material forms from what was initially an insulator: when it stops raining, the water has formed a lake from which the individual hilltops emerge like islands. This is exactly what Varlet observed when experimenting with the double layer of graphene: at a low water level, there are three independent but equivalent lakes. When the water level increases, the three lakes join to form a large ocean.

“The topology has changed altogether,” Varlet concludes. In other words, this is how a doughnut is transformed into an apple.

Until now, scientists have lacked the right material to be able to demonstrate a Lifshitz transition in an experiment. Metals are not suitable and initially the ETH team was unaware it had found the material that others had been looking for.

“We observed something strange in our measurements with the graphene sandwich construction that we were not able to explain,” says Varlet.

A Russian theoretician, Vladimir Falko, was able to interpret these measurements in discussions with the team.

Low-cost raw materials

To produce the sandwich construction, Varlet enclosed the double layer of graphene in two layers of boron nitride, a material otherwise used for lubrication and one that has an extremely smooth surface.

Although both materials are cheap, a lot of work is required in the cleanroom: the carbon flakes must be exceptionally clean to produce a functioning component.

“A significant part of my work consists of cleaning the graphene,” says Varlet. The samples can withstand enormously strong electrical fields.

At present, any practical use for the phenomenon remains speculative. The topology of quantum states, for example, offers a way of decoupling them from their environment and perhaps achieving extremely stable quantum states that can be used for information processing.

The team is part of the research group Quantum Science and Technology, which comprises groups from the universities of Basel, Lausanne, Geneva and ETH Zurich, and representatives from IBM.

Source: ETH Zurich

Bacterial ‘chatter’ tells cancer cells to die

Thu, 09/25/2014 - 07:15

Scientists can manipulate a molecule used as a communication system by bacteria to prevent cancer from spreading. This communication system can tell cells how to act—or even to die—on command.

While always dangerous, cancer becomes life-threatening when cancer cells begin to spread to different areas throughout the body.

“During an infection, bacteria release molecules which allow them to ‘talk’ to each other,” says lead author Senthil Kumar, an assistant research professor and assistant director of the Comparative Oncology and Epigenetics Laboratory at the University of Missouri.

“Depending on the type of molecule released, the signal will tell other bacteria to multiply, escape the immune system, or even stop spreading.

“We found that if we introduce the ‘stop spreading’ bacteria molecule to cancer cells, those cells will not only stop spreading, they will begin to die as well.”

Hard-to-kill cancer cells

In the study published in PLOS ONE, Kumar and coauthor Jeffrey Bryan, associate professor in the College of Veterinary Medicine, treated human pancreatic cancer cells grown in culture with bacterial communication molecules, known as ODDHSL. After the treatment, the pancreatic cancer cells stopped multiplying, failed to migrate, and began to die.

“We used pancreatic cancer cells because those are the most robust, aggressive, and hard-to-kill cancer cells that can occur in the human body,” Kumar says. “To show that this molecule can not only stop the cancer cells from spreading, but actually cause them to die, is very exciting.

“Because this treatment shows promise in such an aggressive cancer like pancreatic cancer, we believe it could be used on other types of cancer cells and our lab is in the process of testing this treatment in other types of cancer.”

The next step is to find a more efficient way to introduce the molecules to the cancer cells before animal and human testing can take place, Kumar says.

“Our biggest challenge right now is to find a way to introduce these molecules in an effective way. At this time, we only are able to treat cancer cells with this molecule in a laboratory setting. We are now working on a better method which will allow us to treat animals with cancer to see if this therapy is truly effective.

“The early-stage results of this research are promising. If additional studies, including animal studies, are successful then the next step would be translating this application into clinics.”

Source: University of Missouri

Invasive plant beats ‘weapons’ but not goats

Thu, 09/25/2014 - 07:04

Field tests support the use of herbivores, not herbicides, to rout out an invasive plant threatening East Coast salt marshes.

Phragmites australis, or the common reed, is a rapid colonizer that has overrun many coastal wetlands from New England to the Southeast.

A non-native perennial, it can form dense stands of grass up to 10 feet high that block valuable shoreline views of the water, kill off native grasses, and alter marsh function.

Land managers traditionally have used chemical herbicides to slow phragmites’ spread but with only limited and temporary success.

Now, field experiments have identified a more sustainable, low-cost alternative: goats.

“We find that allowing controlled grazing by goats or other livestock in severely affected marshes can reduce the stem density of phragmites cover by about half in around three weeks,” says Brian R. Silliman, lead author of the new study and associate professor of marine conservation biology at Duke University’s Nicholas School of the Environment.

“The goats are likely to provide an effective, sustainable, and much more affordable way of mowing down the invasive grass and helping restore lost ocean views,” he says.

Helicopters and bulldozers

In fenced-in test plots at the USDA Beltsville Agricultural Research Center in Maryland, Silliman and his colleagues found that a pair of the hungry herbivores could reduce phragmites cover from 94 percent to 21 percent, on average, by the end of the study.

Separate trials showed that horses and cows would also readily eat the invasive grass.

In addition to restoring views, the controlled grazing allowed native plant species to re-establish themselves in the test plots over time. The native species diversity index increased five-fold.

“For more than two decades, we’ve declared major chemical and physical warfare on this grass, using all the latest manmade weapons,” Silliman says. “We’ve used helicopters to spray it with herbicides and bulldozers to remove its roots. More often than not, however, it returns.

“In this study, we show that sustainable, low-cost rotational livestock grazing can suppress the unwanted tall grass and favor a more diverse native plant system,” he says.

Silliman says the re-emergence of native marsh plants could happen even faster and be more sustained if managers combine grazing with the selective use of herbicides to eradicate any remaining phragmites and then re-plant native species in its place.

The research findings appear this week in the online journal PeerJ.

A four-way win

“This could be a win-win-win-win situation,” Silliman says. Marshes win because native diversity and function are largely restored. Farmers benefit because they receive payment for providing the livestock and they gain access to free pasture land. Managers win because control costs are reduced. Communities and property owners win because valuable and pleasing water views are brought back.

The approach has been used for nearly 6,000 years in parts of Europe and recently has been successfully tested on small patches of heavily phragmites-invaded marshes in New York, he notes. “Now, it just has to be tested on a larger spatial scale.”

The only drawback, he adds, is that “people have to be okay with having goats in their marsh for a few weeks or few months in some years. It seems like a fair trade-off to me.”

Funding for the study came from the Royal Netherlands Academy of Arts and Sciences and the Maryland Agricultural Experiment Station.

The study included researchers from Bryn Mawr College; the University of Florida; the University of Maryland; the University of Groningen, Netherlands; PUCCIMAR Ecological Research and Consultancy; and the Royal Netherlands Institute for Sea Research.

Source: Duke University

Breast milk may put preemies at risk of deadly virus

Thu, 09/25/2014 - 06:07

Because of their immature immune systems, premature babies—especially those born with very low birth weight (VLBW)—are particularly vulnerable to infection by a virus that can be transmitted through breast milk.

Blood transfusions are also a source of cytomegalovirus (CMV) infection, which can cause serious disease and, in severe cases, lead to death.

Scientists haven’t systematically examined either breast milk or blood transfusions in a large enough study, however, to quantify the specific risks of infection and identify risk factors to help guide prevention strategies, researchers say.

In a new study published in JAMA Pediatrics, researchers confirm that the common strategy of transfusing these VLBW infants only with blood products that are CMV-seronegative and leukoreduced (blood products with white blood cells removed) effectively prevents transmission of CMV by blood transfusion.

Primary source: breast milk

Using this transfusion approach, maternal breast milk becomes the primary source of postnatal CMV infection among VLBW infants.

The prospective clinical study enrolled 462 mothers and 539 VLBW infants in three neonatal intensive care units between January 2010 and June 2013. A majority of mothers had a history of CMV infection prior to delivery (CMV sero-prevalence of 76.2 percent).

The infants were enrolled within five days of birth, and at the time of enrollment they had not received a blood transfusion. The infants were tested at birth to evaluate for congenital infection, and again at five additional intervals between birth and 90 days, discharge, or death.

A total of 29 out of the 539 enrolled infants were found to have CMV infection (cumulative incidence of 6.9 percent at 12 weeks). Five infants with CMV infection developed severe disease or died. Although 2,061 transfusions were administered to 310 of the infants (57.5 percent), the blood products were CMV-seronegative and leukoreduced, and none of the CMV infections was linked to transfusion.

Twenty-seven of 28 infections acquired after birth occurred among infants fed CMV-positive breast milk. The authors estimate that between 1 in 10 and 1 in 5 VLBW infants who are fed CMV-positive breast milk from mothers with a history of CMV infection will develop postnatal CMV infection.

“We believe our study is the largest evaluation of both blood transfusion and breast-milk sources of postnatal CMV infection in VLBW infants,” says first author Cassandra Josephson from the Center for Transfusion and Cellular Therapies at Emory University School of Medicine and Children’s Healthcare of Atlanta.

Benefits outweigh risks

“Previously, the risk of CMV infection from blood transfusion of seronegative or leukoreduced transfusions was estimated to be 1 to 3 percent. We showed that using blood components that are both CMV-seronegative and leukoreduced, we can effectively prevent the transfusion-transmission of CMV. Therefore, we believe that this is the safest approach to reduce the risk of CMV infection when giving transfusions to VLBW infants.”

The American Academy of Pediatrics currently states that the value of routinely feeding breast milk from CMV seropositive mothers to preterm infants outweighs the risks of clinical disease from CMV. Although breast milk is the best source of nutrition for preterm infants, new strategies to prevent breast milk transmission of CMV are needed because freezing and thawing breast milk did not completely prevent transmission in the current study.

Alternative approaches to prevent breast milk transmission of CMV could include routine CMV-serologic testing of pregnant mothers to enable counseling regarding the risk of infection; closer surveillance of infants with CMV-positive mothers; and pasteurization of breast milk until a corrected gestational age of 34 weeks (as recommended by the Austrian Society of Pediatrics).

Although most infants who develop CMV infection are asymptomatic in the neonatal period, a minority progress to develop serious symptoms. Routine screening for postnatal CMV infection may be one potential strategy to help identify these infants before they go on to develop symptomatic disease.

Although the effect of asymptomatic postnatal CMV infection on long-term neurodevelopmental outcomes is not clear, the frequency of CMV infection in this study raises significant concern about the potential consequences of CMV infection among VLBW infants and points to the need for large, long-term follow-up studies of neurological outcomes in infants with postnatal CMV infection.

Other researchers from Emory and from the New York Blood Center and Northside Hospital in Atlanta are coauthors of the study.

Source: Emory University

How doctors can get LGBT teens to open up

Thu, 09/25/2014 - 05:53

When doctors talk to teens about sex, only about 3 percent of them do so in a way that encourages LGBT teens to discuss their sexuality, a new study reports.

“Physicians are making their best efforts, but they are missing opportunities to create safe environments for teenagers to discuss sexuality and their health,” says lead investigator Stewart C. Alexander, an associate professor of consumer science at Purdue University who focuses on health communication.

“What the doctor asks or brings up about sexuality sets the tone, and gay and lesbian youth are very good about reading adults to determine who is safe to confide in. They ask themselves, ‘Can I disclose this information to this adult?'”

Physicians are encouraged to discuss teenage sexuality during wellness visits per the American Academy of Pediatrics recommendations.

But researchers say these conversations need to be more than a single scripted phrase, and doctors should consider the whole conversation. Physicians can undo any good if they aren’t inclusive.

“Open, inclusive conversations can help youths realize there is no threat, and this can be a great start for building trust with the physician who is someone they are likely to see year after year,” says co-investigator Cleveland Shields, associate professor of human development and family studies.

“These adolescents, especially the younger ones, may not have established a sexual identity, their sexuality is in flux, or they may be romantically involved with someone of the same gender but not identify themselves as gay or lesbian.”

Inclusive conversations

For the study, published in LGBT Health, researchers looked at patterns in physicians’ conversations about sex when speaking to patients ages 12-17. The data was collected at 11 clinics in the Raleigh/Durham, North Carolina, area as part of the Duke Teen CHAT project.

The analysis is based on recorded conversations between 49 physicians and 293 adolescents during annual wellness checks. Of all the visits that contained sexuality talk, physicians were able to maintain open and inclusive talk only 3 percent of the time.

“The physicians I know want to do a good job, so we’re trying to identify best practices, and hopefully these examples will provide them additional context for strengthening these conversations,” Shields says.

The study offers suggestions for inclusive conversation tactics, which have not been tested clinically:

  • Focus on attraction: “I know some teenagers who are attracted to girls. I know some teenagers who are attracted to boys, and I know some who are attracted to both. Have you started to think about these things?” or “Usually girls your age start to become interested in boys or other girls or both. Have you started to become interested in others?”
  • Ask about friends: “Have any of your friends started dating? Any boyfriends or girlfriends or both?” or “Do you know if your friends started to have sex yet?” Physicians can use this approach to then turn to the teenager’s dating and sexual behavior by always suggesting gender-neutral terms such as “anybody,” “someone,” or “partners.”

While maintaining an inclusive conversation can be challenging at first, it allows doctors to reinforce the notion of multiple attractions and identities and emphasize non-judgment, the researchers say.

Leave the door open

Another technique to maintain inclusive conversations is leaving the door open for future conversations, such as, “If things change, or if along the way you decide something else is right for you, I want you to let me know.”

“The idea of setting the tone for the years to come is very important,” Alexander says. “This may not be the big conversation for the 12-year-olds—that may take place in four years—but the tone needs to be set at age 12 so that when the time comes the child is comfortable and knows the doctor is a safe contact.

“This approach also reinforces the adolescent as an emerging adult. We want to provide them with autonomy so they can be a consumer of their own health.”

The National Heart, Lung, and Blood Institute funded the research. Researchers from Indiana University School of Medicine, Duke University, and University of Michigan collaborated on the study.

Source: Purdue University

‘Tags’ let robot find things around the house

Wed, 09/24/2014 - 13:42

Scientists have equipped a PR2 robot with articulated, directionally sensitive antennas and a new algorithm that lets it find and go get tagged household objects like a medication bottle, TV remote, phone, and hair brush.

Today’s robots typically see the world with cameras and lasers, but they have a hard time reliably recognizing things and can miss objects that are hidden in clutter.


Mobile robots could be much more useful in homes if they could locate people, places, and objects.

A complementary way robots can “sense” what is around them is through the use of small ultra-high frequency radio-frequency identification (UHF RFID) tags.

Inexpensive self-adhesive tags can be stuck on objects, allowing an RFID-equipped robot to search a room for the correct tag’s signal, even when the object is hidden out of sight. Once the tag is detected, the robot knows the object it’s trying to find isn’t far away.

“But RFID doesn’t tell the robot where it is,” says Charlie Kemp, an associate professor in Georgia Tech’s Wallace H. Coulter Department of Biomedical Engineering. “To actually find the object and get close to it, the robot has to be more clever.”

That’s why Kemp, former Georgia Tech student Travis Deyle, and University of Washington Professor Matthew Reynolds developed a new search algorithm that improves a robot’s ability to find and navigate to tagged objects.

They presented the research September 14-18 in Chicago at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

You’re getting hotter…

Due to the physics of radio-frequency propagation, the robot’s new antennas tend to receive stronger signals from a tag when they are closer to it and pointed more directly at it. By panning the antennas on its shoulders and driving around the room, the PR2 can figure out which direction to move to get a stronger signal from a tag and thus close in on a tagged object.

In essence, the robot plays the classic childhood game of “Hotter/Colder” with the tag telling the PR2 when it’s getting closer to the target object.

In contrast to other approaches, the robot doesn’t explicitly estimate the 3D location of the target object, which significantly reduces the complexity of the algorithm.

“Instead the robot can use its mobility and our special behaviors to get close to a tag and oriented toward it,” says Deyle, who conducted the study in Kemp’s lab while earning his doctoral degree in electrical and computer engineering.
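To make the strategy concrete, here is a minimal sketch of such a “Hotter/Colder” search in Python. It illustrates the general greedy idea only, not the team’s actual algorithm; the robot object and the read_rssi helper (returning received signal strength from the target tag at a given antenna pan angle) are interfaces invented for this example.

    # Hypothetical "Hotter/Colder" RFID search: sweep the antenna, face the
    # direction that reads strongest, take a small step, and repeat until
    # the signal crosses a "close enough" threshold.
    PAN_ANGLES = [-60, -30, 0, 30, 60]   # candidate antenna orientations (degrees)
    STEP = 0.25                          # meters to drive per iteration

    def search(robot, read_rssi, threshold=-35.0, max_steps=200):
        for _ in range(max_steps):
            bearing = max(PAN_ANGLES, key=read_rssi)  # strongest direction
            if read_rssi(bearing) >= threshold:
                return True               # strong signal: the tag is nearby
            robot.rotate(bearing)         # orient toward the "hotter" reading
            robot.drive_forward(STEP)     # move closer to the tag
        return False                      # gave up without a strong reading

Consistent with the point above, the sketch never estimates the tag’s 3D position; it simply climbs the signal-strength gradient until the reading is strong enough.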

‘Robot, fetch my pills’

Deyle, who currently works at Google, says the research has implications for future home robots and is particularly compelling for applications such as helping people with medicine, as RFID is able to provide precise identification information about an object or a person.

“This could allow a robot to search for, grasp, and deliver the right medication to the right person at the right time,” he adds.

“RFID provides precise identification, so the risk of delivering the wrong medication is dramatically reduced. Creating a system that allows robots to accurately locate the correct tag is an important first step.”

Reynolds adds, “While we have demonstrated this technology with a few common household objects, the RFID tags can uniquely identify billions of different objects with essentially zero false positives.

This is important because many objects look alike, yet must be uniquely identified—for example, identifying the correct medication bottle that should be delivered to a specific person.”

“With a little modification of the objects in your home, a robot could quickly take inventory of your possessions and navigate to an object of your choosing,” Kemp says. “Are you looking for something? The robot will show you where it is.”

The National Science Foundation and Willow Garage provided partial funding for the work. Any conclusions or opinions are those of the authors and do not necessarily represent the official views of the sponsoring agency.

Source: Georgia Tech

The post ‘Tags’ let robot find things around the house appeared first on Futurity.

Do your genes skew how you taste alcohol?

Wed, 09/24/2014 - 12:38

A new study shows that how people perceive and taste alcohol varies as a result of genetics. The scientists focused on three chemosensory genes—two bitter-taste receptor genes known as TAS2R13 and TAS2R38 and a burn receptor gene, TRPV1.

The research is the first to consider whether variation in the burn receptor gene, which has not previously been linked to alcohol consumption, might influence the sensations alcohol produces.


People may differ in the sensations they experience from a food or beverage, and these perceptual differences have a biological basis, explains John Hayes, assistant professor of food science and director of Penn State’s Sensory Evaluation Center.

He notes that prior work done in his laboratory has shown that some people experience more bitterness and less sweetness from an alcoholic beverage, such as beer.

“In general, greater bitterness relates to lower liking, and because we generally tend to avoid eating or drinking things we don’t like, lower liking for alcoholic beverages associates with lower intake,” he says.

“The burn receptor gene TRPV1 has not previously been linked to differences in intake, but we reasoned that this gene might be important as alcohol causes burning sensations in addition to bitterness.

“In our research, we show that when people taste alcohol in the laboratory, the amount of bitterness they experience differs, and these differences are related to which version of a bitter receptor gene the individual has.”

Tasting sessions

To determine which variants of the receptor genes study participants carried, the researchers collected DNA via saliva samples for genetic analysis.

The results will appear in Alcoholism: Clinical and Experimental Research. One hundred thirty people of various races, ages 18 to 45, completed all four of the study’s tasting sessions.

People are hard-wired by evolution to like sweetness and dislike bitterness, and this influences the food and beverage choices we make every day, points out lead researcher Alissa Allen, a doctoral candidate in food science advised by Hayes.

Allen adds that it is also well established that individuals differ in the amount of bitterness they perceive from some foods or beverages, and this variation can be attributed to genetic differences.

Normally, sweet and bitter sensations suppress each other, so in foods and beverages, genetic differences in bitter perception can also influence perceived sweetness.

“Prior work suggests greater bitterness and less sweetness each influence the liking of alcoholic beverages, which influences intake,” Allen says.

“Here we show that the bitterness of sampled ethanol varies with genetic differences in bitter taste receptor genes, which suggests a likely mechanism to explain previously reported relationships between these gene variants and alcohol intake.”

But do people like the burn?

The researchers concede that the relationship between burn and intake is more complicated, at least for foods, as personality traits also play a role. Some people enjoy the burn of chili peppers, for example.

“Still, anecdote suggests that many individuals find the burn of ethanol aversive,” Hayes says. “Accordingly, greater burn would presumably reduce liking and thus intake, although this needs to be confirmed.”

Allen and Hayes’ study used only ethanol cut with water, so it is unclear how the results apply to alcoholic beverages, almost all of which contain other sensory-active compounds that may enhance or suppress bitterness.

For example, the sugar in flavored malt beverages will presumably reduce or eliminate the bitterness of ethanol while the addition of hops to beer will add bitterness that may be perceived through other receptors.

Hayes suggests that chemosensory variation probably plays little or no role in predicting alcohol intake once an individual is dependent. However, he says that genetic variation in chemosensation may be an underappreciated risk factor when an individual is first exposed to alcohol and is still learning to consume it.

Prior studies by Hayes’ laboratory group and others have repeatedly associated bitter receptor gene variants with alcohol intake, a relationship that was presumably mediated via perceptual differences and thus differential liking.

Data from this study begin to fill in the gaps in this chain by showing the sensations evoked by ethanol differ across people as a function of genetic variation.

“Additional work is needed to see if these variants can prospectively predict alcohol use behaviors in naïve individuals,” he says. “But biology is not destiny. That is, food choice remains just that: a choice.

“Some individuals may learn to overcome their innate aversions to bitterness and consume excessive amounts of alcohol, while others who do not experience heightened bitterness may still choose not to consume alcohol for many reasons unrelated to taste.”

The National Institutes of Health supported this research.

Source: Penn State

The post Do your genes skew how you taste alcohol? appeared first on Futurity.

Should states mandate nurse-to-patient ratios?

Wed, 09/24/2014 - 09:13

Standards that require higher nurse-to-patient ratios in acute care hospitals significantly lower job-related injuries and illnesses for both registered nurses and licensed practical nurses, a new study reports.

In 2004, California became the only state in the country to mandate minimum nurse-to-patient staffing ratios, which are based on type of service (such as pediatrics, surgery, or labor and delivery) and allow for flexibility in cases of health-care emergencies.


“We were surprised to discover such a large reduction in injuries as a result of the California law,” says study lead author J. Paul Leigh, professor of public health sciences and investigator with the Center for Healthcare Policy and Research at University of California, Davis.

“These findings should contribute to the national debate about enacting similar laws in other states.”

Some hospitals have argued against extending the law to other states because of the increased costs of additional nursing staff. There is also no consensus that the law has improved patient outcomes, which was its primary intent. Some studies show improvement, while others do not.

“Our study links the ratios to something just as important—the lower workers’ compensation costs, improved job satisfaction, and increased safety that comes with linking essential nursing staff levels to patient volumes,” Leigh says.

Fewer needle-sticks

Published online in the International Archives of Occupational and Environmental Health, the study uses data from the US Bureau of Labor Statistics. Researchers compared occupational illness and injury rates for nurses during several years before and after implementation of the new law. They also compared injury and illness rates in California to rates for all other states combined.

This approach—known as the “difference-in-differences” method—helped account for the nationwide downward trend in workplace injuries and isolate the effects attributable to California’s new staffing law.

For California, they estimated that the law reduced the average yearly rate from 176 injuries and illnesses per 10,000 registered nurses to 120 per 10,000, a 32 percent reduction. For licensed practical nurses, a position with a narrower scope of practice than registered nursing, the average yearly rate fell from 244 injuries per 10,000 to 161 per 10,000, a 34 percent reduction.
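As a rough illustration of the difference-in-differences arithmetic, consider the sketch below in Python. The California rates are the ones reported above; the all-other-states figures are invented placeholders (the article does not report them), so the resulting estimate is purely illustrative.

    # Difference-in-differences: subtract the comparison group's change (the
    # nationwide trend) from California's change to isolate the law's effect.
    ca_before, ca_after = 176.0, 120.0   # reported CA rates per 10,000 RNs
    us_before, us_after = 180.0, 165.0   # hypothetical all-other-states rates

    ca_change = ca_after - ca_before      # -56 per 10,000
    us_change = us_after - us_before      # -15 per 10,000 (secular trend)
    did_estimate = ca_change - us_change  # -41 per 10,000 attributable to the law

    pct_reduction = (ca_before - ca_after) / ca_before
    print(did_estimate, round(pct_reduction * 100))  # prints: -41.0 32

The 32 percent figure matches the reduction reported for registered nurses; the same arithmetic on the licensed practical nurse rates (244 to 161) gives the 34 percent reduction cited above.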

The lower rates of injuries and illnesses to nurses could come about in a number of ways as a result of improved staffing ratios. Back and shoulder injuries could be prevented, for instance, if more nurses are available to help with repositioning patients in bed. Likewise, fewer needle-stick injuries may occur if nurses conduct blood draws and other procedures in a less time-pressured manner.

Halo effect?

Additional research with more recent data is recommended to see if the reductions in injury and illness rates held up over time, Leigh says.

“Even if the improvement was a temporary ‘halo’ effect of the new law, it is important to consider our results in debates about enacting similar laws in other states. Nurses are the most recognizable faces of health care. Making their jobs safer should be a priority.”

The National Institute for Occupational Safety and Health and the California Department of Public Health supported the research.

Source: UC Davis

The post Should states mandate nurse-to-patient ratios? appeared first on Futurity.

Plants can’t run from stress, but they can adapt

Wed, 09/24/2014 - 08:39

Scientists have discovered a key molecular cog in a plant’s biological clock. In response to temperature, it controls the speed of circadian, or daily, rhythms.

Transcription factors, known as genetic switches, drive gene expression in plants based on external stresses such as light, rain, soil quality, or even animals grazing on them.

A team of researchers has isolated one genetic switch, called FBH1, that reacts to temperature, tweaking the rhythm here and there as needed while keeping it on a consistent track.


“Temperature helps keep the hands of the biological clock in the right place,” says Steve A. Kay, dean of the USC Dornsife College of Letters, Arts and Sciences and the corresponding author of the study. “Now we know more about how that works.”

Kay worked with lead author Dawn Nagel, a postdoctoral researcher, and coauthor Jose Pruneda-Paz, an assistant professor at the University of California, San Diego, on the study, which is published in the Proceedings of the National Academy of Sciences.

Understanding how the biological clock and the transcriptional network interact could allow scientists to breed plants that are better able to deal with stressful environments—crucial in a world where farmers must feed a growing population amid urban development of arable land and rising global temperatures.

“Global climate change suggests that it’s going to get warmer, and since plants cannot run away from the heat, they’re going to have to adapt to a changing environment,” Nagel says.

“This study suggests one mechanism for us to understand how this interaction works.”

How plants deal with stress

Both plants and animals have transcription factors, but plants have on average six times as many, likely because they lack the ability to get up and walk away from any of their stressors.

“Plants have to be exquisitely tuned to their environment,” Kay says. “They have evolved mechanisms to deal with things that we take for granted. Even light can be a stressor, if you are rooted to one location.”

Among other things, Kay’s research explores how these transcription factors affect plants’ circadian rhythms, which set the pace and schedule for how plants grow.

Kay and his team conducted their research on Arabidopsis, a flowering member of the mustard family that scientists use as a model organism because of its high seed production, short life cycle, and fully sequenced genome.

The Ruth L. Kirschstein National Research Service Award and the National Institutes of Health’s National Institute of General Medical Sciences supported the work.

Source: USC

The post Plants can’t run from stress, but they can adapt appeared first on Futurity.

Manly faces aren’t first pick in all cultures

Wed, 09/24/2014 - 08:28

A new study could debunk the theory that women living where rates of infectious disease are high prefer men with faces that shout testosterone when choosing a mate.

By the end of the study, that theory had crumbled: when tested with 962 adults drawn from 12 populations living under various economic systems in 10 nations, the predicted preference patterns were too subtle to detect.


“It’s not the case that women have a universal preference for high testosterone faces, and it’s not the case that such a preference is greater in a high-pathogen environment,” says coauthor Lawrence S. Sugiyama, an anthropologist at the University of Oregon.

“And the opposite is also the case. Men don’t uniformly appear to have a preference for more feminine faces, at least within the range of cultures shown in this study.” In cultures tied to pastoralism, agriculture, foraging, fishing, and horticulture, the preference was weak or absent, the authors conclude.

The closest the theory came to confirmation was in the study’s market economies (the UK, Canada, and China), perhaps because, as Sugiyama’s prior work has shown, preferences in such economies shift in response to the local range of variation in traits, and men in those populations have higher testosterone.

“In large-scale societies like ours we encounter many unfamiliar people, so using appearance to infer personality traits can help cope with the overwhelming amount of social information,” says Sugiyama.

“For instance, in all cultures tested, high testosterone faces were judged to be more aggressive, and this is useful information when encountering strangers.”

Tired of warfare?

Sugiyama and coauthors at the University of Oregon contributed to the study based on their work with the Shuar, a rural population in Ecuador with a long history of warfare, whose mixed economy today is based on horticulture, hunting, foraging, and small-scale agro-pastoralism.

The Shuar did not come into contact with the outside world until the 1880s, and only since the 1960s have they organized into communities, Sugiyama says. Research there is looking at the impacts of culture change on Shuar health. Data for the PNAS study came from routine sessions with 30 men and 31 women.

Each was shown culturally appropriate facial representations of potential opposite-sex mates and asked which they’d prefer. Shuar women did not favor men whose faces suggested high testosterone levels.

“Shuar women preferred slightly less testosterone-looking faces,” Sugiyama says. The reason was not clear, but he suggests that Shuar women may have grown weary of years of warfare and would prefer mates less likely to participate in violence or to encourage their offspring to engage in violent behaviors.

‘Stop and rethink’

The paper’s other researchers contributed with data collected from the populations that they study. The study encompassed students and Cree populations in Canada, students and urban residents in two Chinese cities, the Tuvans in Russia, students in the United Kingdom, the Kadazan-Dusun in Malaysia, villagers in Fiji, the Miskitu in Nicaragua, the Tchimba in Namibia, and the Aka in the Central African Republic.

“Performance by the different populations wasn’t chance,” Sugiyama says. “For each society there was a pattern. There were significant preferences in each culture. Market economies do play a part, but something more was going on.

“I think the real message of this study is that we in this field need to stop and rethink how we have been thinking about these things,” he says. “Maybe the idea of infectious disease—the presence of pathogens—isn’t the main driving factor.

“The underlying adaptations are likely to track other ecological considerations and local cultural factors that we don’t have data on and may eventually be very important in understanding attractiveness.”

The study’s coauthors are based at universities in the UK, United States, Canada, and China.

The UK-based Leverhulme Trust supported the research. The study appears early online in the Proceedings of the National Academy of Sciences.

Source: University of Oregon

The post Manly faces aren’t first pick in all cultures appeared first on Futurity.

Heart rate varies to keep body in balance

Wed, 09/24/2014 - 06:55

Although the heart beats out a very familiar “lub-dub” pattern that speeds up or slows down as our activity increases or decreases, the pattern itself isn’t as regular as you might think. In fact, even a “constant” heart rate varies a bit from beat to beat. Now, researchers have found that this variability is a good thing.

Reduced heart rate variability (HRV) is predictive of a number of conditions, such as congestive heart failure and inflammation.

For athletes, a drop in HRV has also been linked to fatigue and overtraining. However, the underlying physiological mechanisms that control HRV—and exactly why this variation is important for good health—are still a bit of a mystery.

By combining heart rate data from real athletes with a branch of mathematics called control theory, researchers have devised a way to better understand the relationship between HRV and health—a step that could soon inform better monitoring technologies for athletes and medical professionals.
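HRV is typically quantified from the intervals between successive heartbeats. The sketch below computes two standard time-domain measures, SDNN and RMSSD, from a list of RR intervals in milliseconds; the article does not say which metrics the researchers used, so these common clinical choices are shown only to make “variability” concrete.

    import math

    def sdnn(rr_ms):
        # Standard deviation of RR intervals over the whole recording.
        mean = sum(rr_ms) / len(rr_ms)
        return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / len(rr_ms))

    def rmssd(rr_ms):
        # Root mean square of successive differences: beat-to-beat variability.
        diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
        return math.sqrt(sum(d * d for d in diffs) / len(diffs))

    # Even at a roughly "constant" 75 beats per minute, intervals fluctuate:
    rr = [812, 790, 805, 798, 820, 801, 795]
    print(round(sdnn(rr), 1), round(rmssd(rr), 1))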

Give-and-take connections

To run smoothly, complex systems, such as computer networks, cars, and even the human body, rely upon give-and-take connections and relationships among a large number of variables. If one variable must remain stable to maintain a healthy system, another variable must be able to flex to maintain that stability.


Because it would be too difficult to map each individual variable, the mathematics and software tools used in control theory allow engineers to summarize the ups and downs in a system and pinpoint the source of a possible problem.

Researchers who study control theory are increasingly discovering that these concepts can also be extremely useful in studies of the human body. In order for a body to work optimally, it must operate in an environment of stability called homeostasis.

When the body experiences stress—for example, from exercise or extreme temperatures—it can maintain a stable blood pressure and constant body temperature in part by dialing the heart rate up or down. And HRV plays an important role in maintaining this balance, says John Doyle, professor of control and dynamical systems, electrical engineering, and bioengineering at California Institute of Technology (Caltech).

Body on ‘cruise control’

“A familiar related problem is in driving,” Doyle says. “To get to a destination despite varying weather and traffic conditions, any driver—even a robotic one—will change factors such as acceleration, braking, steering, and wipers. If these factors suddenly became frozen and unchangeable while the car was still moving, it would be a nearly certain predictor that a crash was imminent. Similarly, loss of heart rate variability predicts some kind of malfunction or ‘crash,’ often before there are any other indications,” he says.
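In the spirit of that analogy, the toy feedback loop below shows how a flexible variable (throttle, standing in for heart rate) absorbs disturbances so that a regulated variable (speed, standing in for blood pressure) stays near its setpoint. The proportional gain and drag term are invented for illustration only.

    def simulate(setpoint=30.0, kp=0.5, drag=0.1, steps=50):
        speed = 0.0
        for _ in range(steps):
            error = setpoint - speed          # distance from "homeostasis"
            throttle = kp * error             # the flexible variable does the work
            speed += throttle - drag * speed  # simple first-order dynamics
        return speed

    print(simulate())  # about 25.0 with these gains

With these numbers the loop settles at about 25, a little below the 30 setpoint, since pure proportional control leaves a steady-state offset. The analogy’s point survives the simplification: if the throttle were frozen, the speed could no longer track the setpoint at all.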

To study how HRV helps maintain this version of “cruise control” in the human body, Doyle and his colleagues measured the heart rate, respiration rate, oxygen consumption, and carbon dioxide generation of five healthy young athletes as they completed experimental exercise routines on stationary bicycles.

By combining the data from these experiments with standard models of the physiological control mechanisms in the human body, the researchers were able to determine the essential tradeoffs that are necessary for athletes to produce enough power to maintain an exercise workload while also maintaining the internal homeostasis of their vital signs.

“For example, the heart, lungs, and circulation must deliver sufficient oxygenated blood to the muscles and other organs while not raising blood pressure so much as to damage the brain,” says Doyle, author of the study in the Proceedings of the National Academy of Sciences.

“This is done in concert with control of blood vessel dilation in the muscles and brain, and control of breathing. As the physical demands of the exercise change, the muscles must produce fluctuating power outputs, and the heart, blood vessels, and lungs must then respond to keep blood pressure and oxygenation within narrow ranges.”

Control theory

Once these trade-offs were defined, the researchers used control theory to analyze the exercise data and found that a healthy heart must maintain certain patterns of variability during exercise to keep this complicated system in balance. Loss of this variability is a precursor of fatigue, the body’s response to the stress induced by exercise. Today, some HRV monitors in the clinic can let a doctor know when variability is high or low, but they provide little in the way of an actionable diagnosis.

Because monitors in hospitals can already provide HRV levels and dozens of other signals and readings, integrating such control-theoretic analyses into HRV monitors could, in the future, provide a way to link a drop in HRV to a more specific and treatable diagnosis. In fact, one of Doyle’s students has used an HRV application of control theory to better interpret traditional EKG signals.

Control theory could also be incorporated into the HRV monitors used by athletes to prevent fatigue and injury from overtraining, he says.

“Physicians who work in very data-intensive settings like the operating room or ICU are in urgent need of ways to rapidly and acutely interpret the data deluge,” says Marie Csete, chief scientific officer at the Huntington Medical Research Institutes and a coauthor of the paper. “We hope this work is a first step in a larger research program that helps physicians make better use of data to care for patients.”

Monitoring cancer’s progression

This study is not the first to apply control theory in medicine. Control theory has already informed the design of a wearable artificial pancreas for type 1 diabetic patients and an automated prototype device that controls the administration of anesthetics during surgery. Nor will it be the last, says Doyle, whose sights are next set on using control theory to understand the progression of cancer.

“We have a new approach, similarly based on control of networks, that organizes and integrates a bunch of new ideas floating around about the role of healthy stroma—non-tumor cells present in tumors—in promoting cancer progression,” he says.

“Based on discussions with Dr. Peter Lee at City of Hope (a cancer research and treatment center), we now understand that the non-tumor cells interact with the immune system and with chemotherapeutic drugs to modulate disease progression,” Doyle says. “And I’m hoping there’s a similar story there, where thinking rigorously about the tradeoffs in development, regeneration, inflammation, wound healing, and cancer will lead to new insights and ultimately new therapies.”

Additional researchers from Caltech and from NYU, UC Berkeley, University of Virginia, and Harvard are coauthors of the study.

Source: Caltech

The post Heart rate varies to keep body in balance appeared first on Futurity.

