Futurity.org

Research news from top universities.

Giant tortoises have a sweet tooth for invasive plants

Tue, 04/07/2015 - 08:13

Invasive plants and animals are almost universally lambasted for what they do to ecosystems, but giant tortoises in the Galapagos Islands might have a different opinion.

New research shows these iconic, endangered animals thrive on a diet heavy in non-native plants. In fact, they seem to prefer these plants to native ones.

Introduced plants began to increase in abundance on the Galapagos Islands in the 1930s as native highland vegetation was cleared for agriculture, and the rate of introductions has been increasing ever since.

Fit and feisty

The giant tortoises, for their part, seem headed in the opposite direction. Until the late Pleistocene epoch, they were found on all the continents except Antarctica. Today they survive in only two locations: the Aldabra Atoll in the Indian Ocean, and the Galapagos Archipelago in the eastern Pacific Ocean. In the Galapagos, all of the remaining subspecies are considered vulnerable or endangered.

In a surprising turn of events, field research in the Galapagos shows that introduced plants make up roughly half the diet of two subspecies of endangered tortoise. What’s more, these plants seem to benefit the tortoises nutritionally, helping them stay fit and feisty.

“Biodiversity conservation is a huge problem confronting managers on the Galapagos Islands,” says Stephen Blake, an honorary research scientist at Washington University in St. Louis.

“Eradicating the more than 750 species of invasive plants is all but impossible, and even control is difficult. Fortunately, tortoise conservation seems to be compatible with the presence of some introduced species.”

Tagged tortoises

Published in the journal Biotropica, the study took place on the island of Santa Cruz, an extinct volcano that is home to two species of giant tortoise, but also to the largest human population in the Galapagos. Farmers have converted most of the highland moist zones to agriculture and at least 86 percent of the highlands and other moist zones are now degraded by either agriculture or invasive species.

In earlier work, Blake had fitted adult tortoises on Santa Cruz with GPS tags and discovered that they migrate seasonally between the arid lowlands, which “green up” with vegetation only in the wet season, and the meadows of the highlands, which remain lush year-round.

“This struck us as pretty odd,” he says, “since a large Galapagos tortoise can survive for a year without eating and drinking. This is why sailors would collect the tortoises to serve as a source of fresh meat aboard ship.

“Why would a 500-pound animal that can fast for a year and that carries a heavy shell haul itself up and down a volcano in search of food? Couldn’t it just wait out the dry season until better times came with the rains?”

The answer depends on the tortoise’s energy balance. But the only detailed study of tortoise foraging the scientists were aware of had been completed in 1980, “largely before the explosion of introduced and invasive species hit the Galapagos,” Blake says.

Bites and bouts

Over a period of four years, the scientists followed tortoises in the field and during 10-minute “focal observations” recorded every bite the tortoises took—including both the plant species and which part they ate. As an additional measure of the fruits the tortoises were eating, they also counted and identified seeds (sometimes more than 1,000) in tortoise dung piles.

Counts of bites and bouts (defined as all feeding on a given species during the focal observations) showed that tortoises actually spent more time browsing on introduced species than on native ones.

“We weren’t really that surprised,” Blake says. “Consider it from a tortoise’s point of view. The native guava, for example, produces small fruits containing large seeds and a small amount of relatively bitter pulp in a thick skin. The introduced guava is large and contains abundant sweet pulp in a thin, pliable skin.”

The researchers also assessed the tortoises’ health and nutritional status, weighing them by suspending them from a spring balance and taking blood samples.

All of the indicators suggest that introduced species in the diet have either a neutral or positive effect on the physical condition of the tortoises. Introduced species may even help tortoises to improve their condition during the dry season.

Fredy Cabrera of the Charles Darwin Foundation in the Galapagos and Sharon Deem, a wildlife veterinarian and epidemiologist at the St. Louis Zoo, contributed to the work.

Source: Washington University in St. Louis

Sea sponge stays put thanks to glass ‘hair’

Tue, 04/07/2015 - 06:39

Hair-like appendages are all that hold the Venus’ flower basket sea sponge to its seafloor home. But the interiors of those tiny lifelines, made essentially of glass, are fine-tuned for strength.

The findings could eventually help engineers build load-bearing structures made stronger from the inside out, researchers say.

The secret to those tiny lifelines, called basalia spicules, lies in their internal structure. The spicules, each only 50 microns in diameter, are made of a silica (glass) core surrounded by 10 to 50 concentric cylinders of glass, each separated by an ultra-thin layer of an organic material. The walls of each cylinder gradually decrease in thickness moving from the core toward the outside edge of the spicule.

Stuck to the seafloor

When researchers first saw this structure, they weren’t sure what to make of it. But the pattern of decreasing thickness caught their attention.

“It was not at all clear to me what this pattern was for, but it looked like a figure from a math book,” says Haneesh Kesari, assistant professor of engineering at Brown University. “It had such mathematical regularity to it that I thought it had to be for something useful and important to the animal.”

The lives of these sponges depend on their ability to stay fixed to the sea floor. They sustain themselves by filtering nutrients out of the water, which they can’t do if they’re being cast about with the flow. So it would make sense that natural selection may have molded the creatures’ spicule anchors into models of strength—and the thickness pattern could be a contributing factor.

“If it can’t anchor, it can’t survive,” Kesari says. “So we thought this internal structure must be contributing to these spicules being a better anchor.”

Math model

To find out, Kesari worked with graduate student Michael Monn to build a mathematical model of the spicules’ structure. Among the model’s assumptions was that the organic layers between the glass cylinders allowed the cylinders to slide against each other.

“We prepared a mechanical model of this system and asked the question: Of all possible ways the thicknesses of the layers can vary, how should they vary so that the spicule’s anchoring ability is maximized?” Kesari says.

The model predicted that the structure’s load capacity would be greatest when the layers decrease in thickness toward the outside, just as was initially observed in actual spicules.

Kesari and Monn then worked with James Weaver and Joanna Aizenberg of Harvard University’s Wyss Institute for Biologically Inspired Engineering, who have worked with this sponge species for years. The team carefully compared the layer thicknesses predicted by the mechanics model to the actual layer thicknesses in more than a hundred spicule samples from sponges.
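To see what such a comparison might look like in practice, here is a minimal sketch in Python. The profile values, noise level, and agreement metrics are illustrative assumptions, not the study’s actual data or analysis.

```python
import numpy as np

def profile_agreement(predicted, measured):
    """Compare two layer-thickness profiles, listed from core to rim.
    Returns the Pearson correlation and the mean absolute error."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    r = np.corrcoef(predicted, measured)[0, 1]
    mae = np.abs(predicted - measured).mean()
    return r, mae

# Hypothetical spicule: 25 layers thinning from 1.2 to 0.3 microns,
# with simulated measurement noise standing in for real data.
pred = np.linspace(1.2, 0.3, 25)
meas = pred + np.random.default_rng(0).normal(0, 0.05, size=25)
print(profile_agreement(pred, meas))
```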

Better anchors

The findings, published in the Proceedings of the National Academy of Sciences, show the predictions made by the model matched very closely with the observed layer thicknesses in the samples. “It appears that the arrangement and thicknesses of these layers do indeed contribute to the spicules’ strength, which helps make them better anchors,” Kesari says.

The scientists say this is the first time to their knowledge that anyone has evaluated the mechanical advantage of this particular arrangement of layers. It could add to the list of useful engineered structures inspired by nature.

“In the engineered world, you see all kinds of instances where the external geometry of a structure is modified to enhance its specific strength—I-beams are one example,” Monn says. “But you don’t see a huge effort focused toward the internal mechanical design of these structures.”

This study, however, suggests that sponge spicules could provide a blueprint for load-bearing beams made stronger from the inside out.

The National Science Foundation and the KIMM–Brown Nano and Micromechanics for Disaster Mitigation and Technological Reliability project supported the work.

Source: Brown University

Tiny camera chip creates hi-res 3D ‘map’

Tue, 04/07/2015 - 06:30

Imagine you need to have an almost exact copy of an object. Now imagine that you can just pull your smartphone out of your pocket, take a snapshot with its integrated 3D imager, send it to your 3D printer, and within minutes you have reproduced a replica accurate to within microns of the original object.

This feat could one day be possible because of a new, tiny high-resolution 3D imager developed at California Institute of Technology (Caltech).

Any time you want to make an exact copy of an object with a 3D printer, the first step is to produce a high-resolution scan of the object with a 3D camera that measures its height, width, and depth.

Such 3D imaging has been around for decades, but the most sensitive systems generally are too large and expensive to be used in consumer applications.

A cheap, compact yet highly accurate new device known as a nanophotonic coherent imager (NCI) promises to change that. Using an inexpensive silicon chip less than a millimeter square in size, the NCI provides the highest depth-measurement accuracy of any such nanophotonic 3D imaging device.

The work, done in the laboratory of Ali Hajimiri, professor of electrical engineering in the Division of Engineering and Applied Science, appears in Optics Express.

Intensity and distance

In a regular camera, each pixel represents the intensity of the light received from a specific point in the image, which could be near or far from the camera—meaning that the pixels provide no information about the relative distance of the object from the camera.

In contrast, each pixel in an image created by the team’s NCI provides both the distance and intensity information. “Each pixel on the chip is an independent interferometer—an instrument that uses the interference of light waves to make precise measurements—which detects the phase and frequency of the signal in addition to the intensity,” says Hajimiri.

The new chip utilizes an established detection and ranging technology called LIDAR, in which a target object is illuminated with scanning laser beams. The light that reflects off of the object is then analyzed based on the wavelength of the laser light used, and the LIDAR can gather information about the object’s size and its distance from the laser to create an image of its surroundings.

“By having an array of tiny LIDARs on our coherent imager, we can simultaneously image different parts of an object or a scene without the need for any mechanical movements within the imager,” Hajimiri says.

Coherent light

Such high-resolution images and information provided by the NCI are made possible because of an optical concept known as coherence. If two light waves are coherent, the waves have the same frequency, and the peaks and troughs of light waves are exactly aligned with one another. In the NCI, the object is illuminated with this coherent light.

The light that is reflected off of the object is then picked up by on-chip detectors, called grating couplers, which serve as “pixels,” as the light detected from each coupler represents one pixel on the 3D image. On the NCI chip, the phase, frequency, and intensity of the reflected light from different points on the object are detected and used to determine the exact distance of each target point.

Because the coherent light has a consistent frequency and wavelength, it is used as a reference with which to measure the differences in the reflected light. In this way, the NCI uses the coherent light as sort of a very precise ruler to measure the size of the object and the distance of each point on the object from the camera. The light is then converted into an electrical signal that contains intensity and distance information for each pixel—all of the information needed to create a 3D image.
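To make the “ruler” analogy concrete, here is a minimal sketch of single-wavelength interferometric ranging. The wavelength is a hypothetical choice, and this is a textbook simplification rather than the NCI’s actual signal chain.

```python
import numpy as np

WAVELENGTH = 1.55e-6  # meters; hypothetical near-infrared laser

def distance_from_phase(phase_rad):
    """One-way distance implied by the phase difference between the
    reference beam and the reflected beam. The round-trip path
    difference is (phase / 2*pi) wavelengths; one-way is half that.
    This is unambiguous only within half a wavelength; practical
    systems resolve the ambiguity, e.g., by sweeping the laser."""
    round_trip = (phase_rad / (2 * np.pi)) * WAVELENGTH
    return round_trip / 2

# A phase shift of pi radians corresponds to a quarter wavelength:
print(distance_from_phase(np.pi))  # ~3.9e-7 m, sub-micron sensitivity
```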

The incorporation of coherent light not only allows 3D imaging with the highest level of depth-measurement accuracy ever achieved in silicon photonics, it also allows the device to be made very small.

16 pixels, for now

“By coupling, confining, and processing the reflected light in small pipes on a silicon chip, we were able to scale each LIDAR element down to just a couple of hundred microns in size—small enough that we can form an array of 16 of these coherent detectors on an active area of 300 microns by 300 microns,” Hajimiri says.

The first proof of concept of the NCI has only 16 coherent pixels, meaning that the 3D images it produces are limited to 16 pixels at a time. However, the researchers also developed a method for imaging larger objects by first imaging a four-pixel-by-four-pixel section, then moving the object in four-pixel increments to image the next section.

With this method, the team used the device to scan and create a 3D image of the “hills and valleys” on the front face of a US penny—with micron-level resolution—from half a meter away.
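The stitching step itself is simple to picture. Below is a minimal sketch of assembling 4-pixel-by-4-pixel captures into a larger depth map; the tile ordering and array shapes are assumptions for illustration.

```python
import numpy as np

TILE = 4  # the prototype images a 4x4-pixel patch at a time

def stitch(tiles, rows, cols):
    """tiles: row-major list of rows*cols arrays, each (TILE, TILE).
    Returns the assembled (rows*TILE, cols*TILE) depth map."""
    mosaic = np.zeros((rows * TILE, cols * TILE))
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, cols)
        mosaic[r * TILE:(r + 1) * TILE, c * TILE:(c + 1) * TILE] = tile
    return mosaic

# e.g., 25 captures assemble into a 20x20-pixel map of a penny's relief
captures = [np.random.rand(TILE, TILE) for _ in range(25)]
print(stitch(captures, rows=5, cols=5).shape)  # (20, 20)
```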

In the future, Hajimiri says, the current array of 16 pixels could easily be scaled up to hundreds of thousands. One day, by creating such vast arrays of these tiny LIDARs, the imager could be applied to a broad range of applications, from very precise 3D scanning and printing, to helping driverless cars avoid collisions, to improving motion sensitivity in superfine human-machine interfaces, where the slightest movements of a patient’s eyes and the most minute changes in a patient’s heartbeat can be detected on the fly.

“The small size and high quality of this new chip-based imager will result in significant cost reductions, which will enable thousands of new uses for such systems by incorporating them into personal devices such as smartphones,” he says.

The Caltech Innovation Initiative partially funded the work.

Source: Caltech

Is selling corn residue worth harming soil?

Tue, 04/07/2015 - 06:03

After the corn harvest, farmers still have plant material, called corn residue, left behind.

Farmers considering selling that corn residue to produce cellulosic ethanol should weigh the decision carefully, according to new research.

Mahdi Al-Kaisi, a professor of agronomy at Iowa State University, is urging farmers to consider variables such as topography, tillage system, nitrogen application, and the amount of organic matter present in their soil to determine how much corn residue they should part with.

“Residue removal has some real environmental impacts on soil health and water quality,” Al-Kaisi says. “It needs to be approached thoughtfully and on a site-specific condition basis.”

Is the ethanol worth it?

His most recent study, published in the Soil Science Society of America Journal, shows how a decrease in crop residue can lead to increases in greenhouse gas emissions from the soil.

Al-Kaisi began the study in 2008. The continued development of cellulosic ethanol, a biofuel made from corn residue, motivates the research.

As the technology continues to mature, cellulosic ethanol production will require more and more feedstock. Al-Kaisi says research like his will help to balance the needs of the cellulosic ethanol industry with information that can help farmers safeguard their soil.

More emissions

The research team conducted experiments at two locations, one in central Iowa and one in southwest Iowa, and monitored the effects of removing portions of corn residue from the test plots on yield, soil organic matter, greenhouse gas emissions, and soil quality. The team also tested how various residue removal levels interact with different nitrogen rates and tillage systems such as no-till and conventional tillage.

The researchers found a general increase in emissions of carbon dioxide and nitrous oxide from all residue-removal plots as nitrogen application increased. They also found that conventional tillage led to an increase in carbon dioxide emissions regardless of the residue removal level.

As residue is removed, black surface soil is exposed to direct sunlight. The dark surface absorbs heat and results in an increase in the oxidation of organic matter and the release of carbon dioxide, Al-Kaisi says.

A previous paper published as a result of the study noted residue removal results in less organic matter in the soil, which Al-Kaisi says could lead to diminished productivity in the long term. Farmers should also exercise caution when considering the removal of residue from fields prone to erosion.

“Residue protects soil from wind and water erosion, so removing it should be based on field-specific conditions and potential soil erosion,” Al-Kaisi says. “Farmers who are going to remove corn residue should make sure they know how these different factors interact when making these decisions.”

Source: Iowa State University

Do drunk driving sanctions go far enough?

Tue, 04/07/2015 - 05:48

Punishments for drivers with blood alcohol levels above the legal limit have reduced the likelihood of repeat offenses. More severe sanctions and even lower BAC thresholds, however, could work even better, says economist Benjamin Hansen.

A new study, based on data from 1999 to 2011 in the state of Washington, where a BAC over .08 constitutes driving under the influence, found a 17 percent reduction in recidivism among sanctioned drivers.

An additional 9 percent reduction occurred in cases involving blood alcohol content above .15, at which point the violation is considered an aggravated offense.

Target dangerous drivers

BAC, the concentration of alcohol within a person’s bloodstream, refers to how much alcohol is present in 100 milliliters of blood.

At a glance, the study appears to support efforts by the National Transportation Safety Board to lower the minimum BAC threshold to .05.

However, the NTSB plan assumes an annual reduction of 500 to 800 fatalities and fails to target the most dangerous drunk drivers, says Hansen, assistant professor of economics at University of Oregon and author of a paper in the American Economic Review.

“If you look at fatalities involving BAC between .05 and .08 there are about 800 a year,” he says. “The NTSB has to be assuming that everyone who is now drunk driving at those levels is going to stop drinking and driving when the limit is lowered to .05.”

2x the legal limit

In reality, he says, most fatal accidents involve drivers with BACs ranging from .13 to .24. In Washington state, the average drunk driver involved in a fatality had a BAC at least two times the legal limit.

“These drivers are the most costly. In terms of fatality risk, a person with a BAC of .15 is about 20-30 times more dangerous than a person who is sober,” Hansen says. “I think we might see even greater benefits if we were to increase the sanctions, or punishments, more steeply along the BAC distribution.”

In addition, lowering the threshold for an aggravated BAC offense to .12 might make drivers “internalize the external costs of drunk driving.”

There were 585,136 deaths linked to drunk driving from 1975 to 2012, according to the National Highway Traffic Safety Administration. That compares to 725,347 murders in the same period.

Source: University of Oregon

Why biofuel algae should ‘eat’ wastewater

Tue, 04/07/2015 - 05:28

Rice University scientists are among the first to investigate using city wastewater as a feedstock for algae-based biofuels. They report that they could easily grow high-value strains of oil-rich algae this way, while removing more than 90 percent of the nitrates and over 50 percent of the phosphorus from the wastewater at the same time.

The findings, based on a five-month study at a wastewater treatment facility in Houston, are available online in the journal ALGAE.

“Biofuels were the hot topic in algaculture five years ago, but interest cooled as the algae industry moved toward producing higher-value, lower-volume products for pharmaceuticals, nutritional supplements, cosmetics, and other products,” says Meenakshi Bhattacharjee, who joined Rice University’s biosciences faculty in June.

“The move to high-value products has allowed the algaculture industry to become firmly established, but producers remain heavily dependent on chemical fertilizers. Moving forward, they must address sustainability if they are to progress toward producing higher-volume products such as ‘green’ petrochemical substitutes and fuels.”

Bhattacharjee says the algae industry’s reliance on chemical fertilizers is a double whammy for algae producers because it both reduces profit margins and puts them in competition with food producers for fertilizers.

Two problems with fertilizer

A 2012 National Research Council report found that “with current technologies, scaling up production of algal biofuels to meet even 5 percent of US transportation fuel needs could create unsustainable demands for energy, water, and nutrient resources.”

The 2012 report also pointed to wastewater-based cultivation as a potential way to make algae production sustainable. An added appeal is that the method could potentially address a looming environmental problem: nutrient pollution in US waterways.

According to the Environmental Protection Agency, nutrient pollution from excess nitrogen and phosphorus—the two primary components of chemical fertilizers—is “one of America’s most widespread, costly, and challenging environmental problems.”

Wastewater treatment facilities currently have no cost-effective way of removing large volumes of nitrates or phosphorus from treated water, so algae production with wastewater has the potential of solving two problems at once, says study coauthor Evan Siemann, professor of biosciences.

“The idea has been on the books for quite a while, but there are questions, including whether it can be done in open tanks and whether it will be adaptable for monoculture—a preferred process where producers grow one algal strain that’s optimized to yield particular products,” he says.

“We were surprised at how little had been done to test these questions. There are a number of laboratory studies, but we found only one previous large-scale study, which was conducted at a wastewater facility in Kansas.”

Fish or no fish?

Siemann says the new study was made possible by the participation of the Houston Department of Public Works and Engineering, which helped Rice’s research team set up a test involving 12 open-topped 600-gallon tanks at one of the city’s satellite wastewater treatment plants in July 2013.

They fed the tanks with filtered wastewater from the plant’s clarifiers, which remove suspended solids from sewage. A different formulation of algae was tested in each tank: some held monocultures of oil-rich algal strains, and others contained mixed cultures, including some with local algal strains from Houston bayous. Some tanks contained fish that preyed upon algae-eating zooplankton.

“Prior research had suggested that diverse assemblages of algal species might perform better in open tanks and that fish might keep algae-eating zooplankton from adversely affecting yields,” Siemann says.

“We recorded prolific algal growth in all 12 tanks,” he says. “Our results are likely to be very encouraging to algae producers because the case they would prefer—monocultures with no fish and no cross-contamination—was the case where we saw optimal performance.”

Warmer weather

Bhattacharjee says more research is needed to determine whether wastewater-based algaculture will be cost-effective and under what circumstances.

For instance, the algae in the new study were four times more effective at removing phosphorus than were the algae in the Kansas study. She says that could be because the Houston test was performed in summer and fall, and the tanks were about 30 degrees warmer on average than the tanks in Kansas.

“Using wastewater would be one of the best solutions to make algaculture sustainable,” she says. “If temperature is key, then cultivation may be more economical in the Southeast and Southwest.” She notes that other factors, like starting levels of nitrogen and phosphorus, might have caused a rate-limiting effect.

“These are the kinds of questions future studies would need to address to optimize this process and make it more attractive for investors,” she says.

Source: Rice University

Sexually assaulted teens at greater risk of suicide

Mon, 04/06/2015 - 12:24

Teenagers who have been victims of sexual assault are at greater risk of attempting suicide.

Girls are not the only ones who need support. A new study shows that one in three teenage boys who have been victims of sexual assault has attempted suicide.

“The stigma is often not addressed; it’s a silent issue in society,” says Laura Anderson, a licensed psychologist and assistant professor in the University at Buffalo School of Nursing.

Not for girls only

“Very rarely does programming address boys. It’s often presumed to be an issue for girls. The results highlight the need to educate the public and develop preventive programming and support for male and female sexual assault survivors.”

Suicide is the third leading cause of death among adolescents. The greatest indicator of whether an attempt will be successful is the number of times someone tries to take his or her life.

The study, which shows that a history of sexual assault and unhealthy weight place girls at higher risk of attempted suicide, stems from an observation in Anderson’s clinical practice over the years with children and teens: She noticed that teens who attempted suicide tended to share the same histories of sexual assault and struggles with weight.

Published in the journal Suicide and Life-Threatening Behavior, the study analyzes data from the Youth Risk Behavior Survey, which sampled more than 31,000 teenagers in 2009 and 2011. The research continues a preliminary study from 2011 that found similar results using a smaller sample of teens.

The poll surveyed students ages 14 to 18 and examined whether sexual assault and struggles with weight influenced suicide attempts within a year of the survey.

Stigma and shame

For boys, highlights of the study include:

  • 3.5 percent of healthy-weight males with no sexual assault history attempted suicide.
  • 33.2 percent of healthy-weight males with sexual assault history attempted suicide. This can be attributed to stigma, shame, possible gender role conflict if the attacker was male, and the lack of an open support system.
  • Weight alone is not a significant factor in suicide attempts for males. Only 3.9 percent of overweight males with no sexual assault history attempted suicide.
  • 33 percent of males who were both overweight and had a history of sexual assault attempted suicide.

Weight influences girls’ risk

For girls, significant findings include:

  • 5.8 percent of healthy-weight females with no sexual assault history attempted suicide.
  • 27.1 percent of healthy-weight girls with a history of sexual assault attempted suicide.
  • Weight influenced the suicide rate among girls: 8.2 percent of overweight girls with no sexual assault history attempted suicide.
  • Combining both factors did not further increase the rate: 26.6 percent of overweight girls with sexual assault histories attempted suicide.

Despite the large sample, the results are culturally loaded, as nearly 20 percent of students of color left questions surrounding suicide unanswered. Underreporting is common, especially among males and African American students, Anderson says.

Future studies will gather more detailed responses on sexual assault and suicide attempts, and examine additional variables, such as body mass index and perceived self-image. The relationship among weight, sexual assault, and suicide—especially in girls—is complicated and needs additional study.

Source: University at Buffalo

Video system captures vital signs from faces

Mon, 04/06/2015 - 11:18

Inspired by premature babies, a new video system to monitor vital signs can handle low lighting, diverse skin tones, and movement.

The technique isn’t new, but engineering researchers in Rice University’s Scalable Health Initiative are making it work under conditions that had stumped earlier systems.

The new version, DistancePPG, can measure a patient’s pulse and breathing just by analyzing the changes in skin color over time. Where other camera-based systems have been challenged by low-light conditions, dark skin tones, and movement, DistancePPG relies on algorithms that correct for those variables.

The team of graduate student Mayank Kumar and professors Ashok Veeraraghavan and Ashutosh Sabharwal designed the system to let doctors diagnose patients from a distance, with special attention to those in low-resource settings.

3 big challenges

Kumar and his colleagues were aware of an emerging technique that used a video camera to detect nearly imperceptible changes in a person’s skin color due to changes in blood volume underneath the skin. Pulse and breathing rates can be determined from these small changes.

That worked just fine for monitoring white people in bright rooms, he says. But there were three challenges. The first was the technique’s difficulty in detecting color change in darker skin tones. Second, the light was not always bright enough. The third and perhaps hardest problem was that patients sometimes move.

The team solved these challenges by adding a method to average skin-color change signals from different areas of the face and an algorithm to track a subject’s nose, eyes, mouth, and whole face.

“Our key finding was that the strength of the skin-color change signal is different in different regions of the face, so we developed a weighted-averaging algorithm,” Kumar says. “It improved the accuracy of derived vital signs, rapidly expanding the scope, viability, reach, and utility of camera-based vital-sign monitoring.”
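In spirit, the idea can be sketched in a few lines of Python. The frame rate, frequency band, and SNR weighting below are assumptions for illustration, not the published DistancePPG algorithm.

```python
import numpy as np

FPS = 30.0  # assumed camera frame rate, frames per second

def pulse_bpm(region_signals):
    """region_signals: array (n_regions, n_frames) holding each facial
    region's mean skin-color trace. Returns pulse rate in beats/minute."""
    X = region_signals - region_signals.mean(axis=1, keepdims=True)
    freqs = np.fft.rfftfreq(X.shape[1], d=1.0 / FPS)
    band = (freqs >= 0.7) & (freqs <= 4.0)  # plausible human pulse range
    power = np.abs(np.fft.rfft(X, axis=1)) ** 2
    # Weight regions by in-band vs. out-of-band power, a crude SNR proxy,
    # so regions with a weak color-change signal count for less.
    snr = power[:, band].sum(axis=1) / (power[:, ~band].sum(axis=1) + 1e-12)
    weights = snr / snr.sum()
    combined = weights @ X  # the weighted-average signal
    spectrum = np.abs(np.fft.rfft(combined)) ** 2
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```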

By incorporating tracking to compensate for movement—even a smile—DistancePPG measured pulse rate to within one beat per minute, even for diverse skin tones under varied lighting conditions.

Premature babies

Kumar, the project’s lead graduate researcher, says DistancePPG will be particularly helpful to monitor premature infants for whom blood pressure cuffs or wired probes can pose a threat. In fact, they were his inspiration.

“This story began in 2013 when we visited Texas Children’s Hospital to talk to doctors and get ideas,” Kumar says. “That was when we saw the newborn babies in the neonatal ICU. We saw multiple wires attached to them and asked, ‘Why?’”

The wires monitored the babies’ pulses, heart rate, “and this and that,” he recalls. “And the wires weren’t a problem. The problem was that the babies would roll, or their mothers needed to take care of them, and the wires would be taken off and put back on.” That, Kumar says, could potentially damage the infants’ delicate skin.

Kumar says he expects the software to find its way to mobile phones, tablets, and computers so people can reliably measure their own vital signs whenever and wherever they choose.

Veeraraghavan is an assistant professor of electrical and computer engineering, and Sabharwal is a professor in the same department. The lab’s research appears in the journal Biomedical Optics Express.

The National Science Foundation, the Texas Instruments Fellowship, the Texas Higher Education Coordinating Board, and a Rice University Graduate Fellowship supported the research.

Source: Rice University

Me, me, me: Not all ‘I-talkers’ are narcissistic

Mon, 04/06/2015 - 10:56

Not all people who use a lot of first-person singular pronouns like “I” and “me” in normal conversation are narcissistic or have an inflated sense of their own importance, a new study suggests.

Narcissists have an unrealistic sense of superiority and self-importance and an overabundance of self-focus, so it might be reasonable to assume that narcissists would be more prone to this kind of language, says study coauthor Matthias Mehl, professor of psychology at University of Arizona.

“There is a widely assumed association between use of first-person singular pronouns—what we call ‘I-talk’—and narcissism, among laypeople and scientists, despite the fact that the empirical support for this relation is surprisingly sparse and generally inconsistent,” says Angela Carey, a doctoral candidate in psychology and lead author of the study in the Journal of Personality and Social Psychology.

Early testing of this hypothesis, conducted at the University of California, Berkeley, in 1988, confirmed the association, but the study included only 48 participants.

Since then, scientific studies have been unable to consistently replicate the finding. Because it appears to be such a pervasive belief in modern society, the researchers felt it was important to give the hypothesis a rigorous scientific vetting.

5 narcissism measures

Researchers recruited more than 4,800 people in Germany and the United States for the study (67 percent were female, mostly undergraduate students). Participants were asked to engage in one of six communications tasks in which they wrote or talked about themselves or an unrelated topic.

Researchers scored the participants for narcissism using five different narcissism measures, including the common 40-item Narcissistic Personality Inventory. Their narcissism score was then compared with their use of first-person singular pronouns in the communication tasks.
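Reduced to its core, the measurement is an “I-talk” rate per participant correlated against a narcissism score. The toy sketch below uses invented texts and scores; the pronoun list and scoring are illustrative, not the study’s coding scheme.

```python
import re
import numpy as np

I_WORDS = re.compile(r"\b(i|me|my|mine|myself)\b", re.IGNORECASE)

def i_talk_rate(text):
    """Fraction of words that are first-person singular pronouns."""
    words = re.findall(r"\b\w+\b", text)
    return len(I_WORDS.findall(text)) / max(len(words), 1)

texts = [
    "I think my work really speaks for itself.",
    "The weather was lovely on the coast.",
    "My plan is better, and I can prove it.",
    "They finished the project ahead of schedule.",
]
npi = np.array([31, 12, 27, 15])  # hypothetical NPI scores (0-40 scale)
rates = np.array([i_talk_rate(t) for t in texts])
print(np.corrcoef(rates, npi)[0, 1])  # sample Pearson correlation
```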

The findings showed no association between pronoun use and narcissism. Men had a slightly higher correlation than women, but neither was statistically significant nor practically meaningful.

“The most interesting finding is that the results did not vary much across two different countries, multiple labs, five different narcissism measures, and 12 different samples,” Mehl says. “We were surprised by how consistent of a near-null finding it was.”

Identifying narcissists is important, because over time their grandiosity, self-focus, and self-importance can become socially toxic and can have negative consequences on relationships, Carey says.

“The next question, of course, is how else, if not through I-talk, narcissism is revealed through language,” she says. “We are working on this question in a follow-up study using the same data.”

Source: University of Arizona

Think you’re so smart? It might just be Google

Mon, 04/06/2015 - 08:52

All the information available online has a strange effect on our brains: We feel a lot smarter than we really are, a new study shows.

In nine different experiments with more than 1,000 participants, Yale University psychologists found that subjects who received information through internet searches rated their knowledge base as much greater than those who obtained the same information through other methods.

“This was a very robust effect, replicated time and time again,” says Matthew Fisher, a PhD student and the lead author of the study. “People who search for information tend to conflate accessible knowledge with their own personal knowledge.”

For instance, in one experiment people searched online for a website that answers the question, “How does a zipper work?” The control group received the same answer that they would have found online, but without searching for it themselves.

When later asked how well they understood completely unrelated domains of knowledge, those who searched online rated their knowledge substantially greater than those who were only provided text. Prior to the experiment, no such difference existed.

The effect was so strong that even when a full answer to a question was not provided to internet searchers, they still had an inflated sense of their own knowledge.

“The cognitive effects of ‘being in search mode’ on the internet may be so powerful that people still feel smarter even when their online searches reveal nothing,” says Frank Keil, professor of psychology and linguistics and senior author of the paper.

Keil recalls being cut off from internet access during a hurricane, and says, “I felt myself becoming stupider by the hour.” For younger people, the effect may be even more pronounced. “The cell phone is almost like the appendage of their brain,” he says. “They don’t even realize it’s not real until they become unplugged.”

The findings appear in the Journal of Experimental Psychology.

The research was made possible by a grant from the Fuller Theological Seminary/Thrive Center for Human Development in concert with the John Templeton Foundation.

Source: Yale University

Older mouse mom exercise cuts baby’s heart risk

Mon, 04/06/2015 - 08:24

In people, the older the mother, the greater the risk of congenital heart defects for her baby. Newborn mice predisposed to heart defects because of genetic mutations show the same age association.

But a new study, published in the journal Nature, demonstrates that, through exercise alone, older mouse mothers can reduce this risk for their offspring to the level seen for younger mouse mothers. The study also suggests that the increased risk of congenital heart defects is tied to the age of the mother and not the age of her eggs.

The risk that an infant human or mouse may develop a congenital heart defect results from a complex interplay of genes inherited from both parents and environmental factors experienced by the embryo. Genetic mutations are known to increase a child’s risk of developing a heart that has abnormally formed valves, vessels, or chambers, or holes between the chambers.

However, many people who have family histories of congenital heart disease or known mutations have normal hearts, and older mothers usually have healthy children.

Eggs gone bad?

“In my lab, we are interested in understanding why certain individuals who are exposed to a known cause of congenital heart disease—whether genetic or environmental—escape the condition, and others don’t,” says senior author Patrick Y. Jay, associate professor of pediatrics at Washington University in St. Louis.

“We study mice with a mutation that increases the risk of heart defects. The mutation first was found in people. But not every mouse with the mutation gets a heart defect, just as in humans. For the past 10 years, we have been trying to figure out the genetic and environmental factors that might influence risk. Understanding them could help us develop a way to prevent heart defects despite exposure to a known cause.”

Mirroring observations in people, past work from Jay’s lab has shown that older mouse mothers tended to bear pups with higher rates of congenital heart defects when compared with younger mothers. Other variables in the laboratory mice, such as age of the father or litter size, are not associated with any difference in risk.

“Conventional wisdom says this increased risk seen for older mothers results from aging eggs,” Jay says. “Since all of a woman’s eggs were produced when she was an embryo, there’s this notion that over decades the eggs just go bad. But the evidence for this is pretty circumstantial. In humans, you can only show associations. You can’t establish causality.”

Ovary swap

To look at the question of aging eggs more carefully, researchers performed a relatively simple experiment, yet one that, to Jay’s knowledge, has not been reported previously.

Working in mice genetically prone to relatively high rates of congenital heart defects, the researchers took ovaries from older mothers and transplanted them into younger mothers.

Likewise, they took the ovaries of younger mothers and transplanted them into older mothers. They examined the offspring to determine if higher rates of heart defects tracked with the age of the mothers or the age of the ovaries.

“We discovered that the rates track exactly with the age of the mother,” Jay says.

In other words, young mice with old ovaries bore offspring with low rates of heart defects, similar to young mice with young ovaries. And older mice, even with young ovaries, bore offspring with higher rates of heart defects, similar to older mice with older ovaries.

“This is exciting from a prevention standpoint,” Jay says. “If there is something about the mother that is contributing to the risk, independent of the ovary, then we have a much better chance of altering that risk than we would if the problem were solely with aging eggs—simply because adults are easier to treat than eggs or embryos.”

Diet’s role?

In an effort to identify possible drivers of age-associated risk of congenital heart disease, researchers looked at diet.

“We knew that obesity and diabetes contribute to congenital heart disease in people and that the risk of these metabolic conditions goes up as you age,” Jay says. “So we put the mice on a high-fat diet.”

Despite becoming obese and diabetic, these mouse mothers did not have a greater risk of bearing offspring with increased heart defects. Still thinking that healthy metabolism was likely important for healthy developing embryos, the researchers then looked at exercise.

“We gave the mice access to running wheels, like you would find at a pet store,” he says. “And we just let the mothers run.”

Benefits of exercise

This time, the researchers found that risk of heart defects in offspring of older mothers dropped from about 20 percent for sedentary mothers to 10 percent for exercising mothers. They didn’t see a significant effect of exercise in the younger mothers, with rates staying at about 10 percent for them regardless of physical activity.

“In the babies of the old mothers who exercised, the incidence of heart defects goes down, but it does not go below the incidence of the young mothers,” Jay says. “There’s still a baseline level that we didn’t get past.”

Even so, cutting rates in half would be significant.

“If you can prevent even one heart defect, that can have a huge emotional and economic impact on a family,” Jay says. “While we’ve gotten very good at treating congenital heart defects, the surgeries don’t cure the patients. Now that so many have reached adulthood, we know they are coming back with heart failure, arrhythmias, and other difficult heart problems.”

A new conversation

While researchers don’t know how such data might translate to people, they showed that exercise did not have to be life-long to produce a measurable benefit. Older mouse mothers who exercised for at least three months prior to giving birth saw an effect similar to that seen in older mothers who had exercised since they were the equivalent of teenagers.

The benefit was observed with high-intensity physical activity by human standards. Mice like to run and, if given the opportunity, will do so for most of their waking hours.

Still, Jay says he and colleagues are pleased to have demonstrated the concept that a treatment or intervention focused on the mother can prevent disease in the offspring that carries the causal mutation.

“I hope this study will change the way investigators think about congenital heart disease,” he says. “Right now, the field is very focused on the embryo—finding genetic mutations and figuring out the biology to see how they affect cardiac development. That research is important and necessary, but this opens up a whole new conversation.”

The National Institutes of Health, the American Heart Association, the Lawrence J. & Florence A. DeGeorge Charitable Trust, the Children’s Discovery Institute of Washington University and St. Louis Children’s Hospital, and the Children’s Heart Foundation funded the work.

Source: Washington University in St. Louis

Puppet drama suggests toddlers ‘get’ suspense

Mon, 04/06/2015 - 07:51

Imagine someone watching a Hitchcock film—they might gasp, sit forward, open their eyes widely, and clench their hands.

A new study—done with puppets, not Hitchcock—detects similar responses in toddlers as young as two to three years old, younger than suggested by previous research.

The study has wider implications as to when children can imagine the state of mind of another person, says lead author Henrike Moll, director of University of Southern California’s Minds in Development Lab.

“We’re fighting this notion of childhood egocentrism.”

Moll’s study, in press at Developmental Science, challenges assumptions that young children are primarily egocentric, failing to understand views of the world that differ from their own.

“We know children identify with other people,” says Moll, assistant professor of psychology. “They’re moved and touched by what happens to others. They’re empathic from a young age. We capitalized on this empathic orientation to uncover children’s insights into the beliefs and expectations of others.”

Puppet drama

Moll and her colleagues used puppet shows to induce suspense in their young subjects. These simple stories involved, for example, a protagonist walking off stage after proudly presenting his belongings, such as a large stack of cookies, to the child. Then, an antagonist appeared and removed nearly all of the cookies, leaving their owner in for a rude surprise.

Suspense is “rooted in the awareness of a clash between one’s own knowledge and another’s false expectations,” Moll says.

When we go to the store and find it to be closed, we might be both surprised and disappointed—false expectations have an affective aftermath.

However, previous research on the subject took a highly rational approach while ignoring that emotional charge. Young children would be quizzed on their responses after watching similar mini-dramas. Moll argues that these studies overlooked the fact that children can feel these emotions before they can actually put them into words.

Her study took an integrative approach, combining interview questions along with videotaping children’s responses as they watched the drama unfold. The children’s facial expressions, along with other signs of tension, showed that they anticipated the character’s impending surprise and disappointment.

“In these expressions, toddlers affectively show their knowledge of what another thinks and feels,” Moll says. Even though they aren’t capable of expressing this knowledge in fully articulated sentences, their responses of suspense demonstrate their understanding of the situation.

“We really want to know when children start being able to understand another’s mind,” Moll says.

“There’s this idea that young kids are egocentric, that they’re locked into a perspective of the world and fail to understand what others are thinking. We’re fighting this notion of childhood egocentrism.”

Source: USC

Plants bloom when temps hit ‘sweet spot’

Mon, 04/06/2015 - 06:42

As climate change brings increasingly earlier warm temperatures, the time that plants bud and bloom arrives earlier, too.

Plants have an ideal temperature for seed set, and they time their flowering so that seed development occurs just as the weather has warmed to that “sweet spot” temperature.

For a new study, researchers used computer models of Arabidopsis thaliana and discovered the plant’s ideal temperature is between 14-15˚C (57-59˚F). Seeds that develop in temperatures lower than 14˚C will almost always remain dormant and fail to germinate.

Clever plants

An ideal temperature allows the mother plant to produce seeds with different growth strategies, increasing the chances that some of her progeny will complete another generation successfully.

But as the climate changes, the sweet spot for seeds comes earlier in the year, so first flowers bloom correspondingly earlier too.

The underlying principle of a very sensitive temperature sweet spot is likely to apply to many flowering plants. This would mean that certain plants have different flowering times due to different but equally narrow temperature sensitivity windows.

“It was amazing to realize that such a small change in temperature can make a big difference to the germination, and even more so that plants were timing their seed set to coincide with it even when the climate was altered,” says Vicki Springthorpe, a biology PhD student at University of York.

“It means that they produce a mixture of seeds, and it’s a clever way of maintaining a stable population in unpredictable growth conditions.

“We found that setting seed at the correct temperature is vital to ensure normal germination,” says Steven Penfield. “It seems that plants aim to flower not at a particular time of year, but when the optimal temperature for seed set is approaching.”

“If the climate warms, plants are clever enough to recognize this and adjust their flowering time accordingly, and it feels like spring comes earlier in the year.”

Source: University of York

New imaging technology peeks into living cells

Mon, 04/06/2015 - 06:41

High-speed spectroscopic imaging makes it possible to observe what’s going on inside living cells and to image large areas of tissue, even an entire organ.

The vibrational spectroscopic imaging technology could allow for the early detection of cancer and other diseases.

“For example, we will be able to image the esophagus or urinary bladder for diagnosis of tumors,” says Ji-Xin Cheng, a professor in Purdue University’s Weldon School of Biomedical Engineering and chemistry department.

“If you were to take one millisecond per pixel, then it would take 10 minutes to obtain an image, and that’s too slow to see what’s happening in cells. Now we can take a complete scan in two seconds.”

The technology represents a new way to use stimulated Raman scattering to perform microsecond-speed vibrational spectroscopic imaging, which can identify and track certain molecules by measuring their vibrational spectrum with a laser, a sort of spectral fingerprint.
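As a generic illustration of the fingerprint idea, not the Purdue group’s processing pipeline, a measured spectrum can be matched against a library of reference spectra by cosine similarity. The peak positions and labels below are synthetic.

```python
import numpy as np

def best_match(measured, library):
    """library: dict mapping a name to a reference spectrum sampled on
    the same wavenumber grid. Returns the best (name, similarity)."""
    m = measured / np.linalg.norm(measured)
    score = lambda ref: float(m @ (ref / np.linalg.norm(ref)))
    return max(((name, score(ref)) for name, ref in library.items()),
               key=lambda pair: pair[1])

grid = np.linspace(800, 3100, 500)  # wavenumber axis, cm^-1
peak = lambda center: np.exp(-((grid - center) / 20.0) ** 2)
library = {"CH2 stretch (lipid-like)": peak(2850), "C=C stretch": peak(1600)}
print(best_match(peak(2845), library))  # -> CH2 stretch, similarity ~0.97
```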

Findings appear in the journal Light: Science & Applications.

The imaging technique is “label-free,” meaning it does not require samples to be marked with dyes, making it appealing for diagnostic applications. Another advantage of the new system is that it can be combined with another technique called flow cytometry to look at a million cells per second.

“You can look at large numbers of cells from a patient’s blood sample to detect tumors, for example, and you can also look directly at organs with an endoscope,” says Cheng, scientific director of the Label-free Imaging lab in the Birck Nanotechnology Center.

“These capabilities will change how people use Raman spectroscopy for medicine. There are many organelles in each cell, and spectroscopy can tell us what’s in the organelles, which is information not available by other techniques.”

As a proof of concept, the researchers demonstrated the new system by observing how human cancer cells metabolize vitamin A and how medications are distributed in the skin.

The technology, which is about 1,000 times faster than a state-of-the-art commercial Raman microscope, is made possible with an electronic device developed at the university’s Jonathan Amy Facility for Chemical Instrumentation called a 32-channel tuned amplifier array, or TAMP array.

Two patents have been issued for the new technology.

Cheng says he found the idea for this imaging technology by teaching undergraduates how the human ear amplifies sound. Circuits in the TAMP device do the same thing for optical signals, he says.

Source: Purdue University

Custom melanoma vaccines provoke T cells

Mon, 04/06/2015 - 06:13

Personalized melanoma vaccines can spark a powerful immune response against unique mutations in patients’ tumors, according to early data from a first-in-humans clinical trial.

The tailor-made vaccines, given to three patients with advanced melanoma, appeared to increase the number and diversity of cancer-fighting T cells responding to the tumors. The finding is a boost to cancer immunotherapy, a treatment strategy that unleashes the immune system to seek out and destroy cancer.

“This is about as personalized as vaccines can get”

In a new approach, the cancer vaccines were developed by first sequencing the genomes of patients’ tumors and samples of the patients’ healthy tissues to identify mutated proteins called neoantigens unique to the tumor cells.

Then, using computer algorithms and laboratory tests, the researchers were able to predict and test which of those neoantigens would be most likely to provoke a potent immune response and would be useful to include in a vaccine.

The vaccines were given to melanoma patients who had had surgery to remove their tumors but whose cancer cells had spread to the lymph nodes, an indicator the deadly skin cancer is likely to recur. These clinical findings set the stage for a phase I vaccine trial, approved by the Food and Drug Administration as part of an investigational new drug application. The trial will enroll six patients.

T-cell response

Data on the immune response seen in the first three patients are reported in the paper in Science Express. If additional testing in more patients indicates the vaccines are effective, they may one day be given to patients after surgery to stimulate the immune system to attack lingering cancer cells and prevent a recurrence.

“This proof-of-principle study shows that these custom-designed vaccines can elicit a very strong immune response,” says senior author Gerald Linette, a medical oncologist at Washington University School of Medicine in St. Louis leading the clinical trial at Siteman Cancer Center and Barnes-Jewish Hospital.

“The tumor antigens we inserted into the vaccines provoked a broad response among the immune system’s killer T cells responsible for destroying tumors. Our results are preliminary, but we think the vaccines have therapeutic potential based on the breadth and remarkable diversity of the T-cell response.”

It’s too early to say whether the vaccines will be effective in the long term, the researchers caution. The study was designed to evaluate safety and immune response; none of the patients has experienced adverse side effects.

Personalized approach

Earlier attempts at vaccines have focused on targeting normal proteins commonly expressed at high levels in particular cancers. Those same proteins also are found in healthy cells, making it difficult to stimulate a potent immune response.

The new approach merges cancer genomics with cancer immunotherapy.

“This is about as personalized as vaccines can get,” says coauthor Elaine Mardis, co-director of the McDonnell Genome Institute at Washington University, where the cancer genome sequencing, analysis, and neoantigen prediction took place.

“The approach we describe is fundamentally different from conventional mutation discovery, which focuses on identifying mutated genes that drive cancer development. Instead, we’re looking for a unique set of mutated proteins in a patient’s tumor that would be most likely to be recognized by the immune system as foreign.”

Melanomas are notorious for having high numbers of genetic mutations caused by exposure to ultraviolet light. Biopsy samples of melanomas typically carry 500 or more mutated genes. Using prediction algorithms, the researchers narrowed their search for vaccine candidates by identifying neoantigens that not only were expressed in a patient’s tumor but also were likely to be seen by that patient’s immune system as “non-self.”
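Conceptually, that narrowing is a filter-and-rank over candidate mutant peptides. The sketch below is purely illustrative: the peptides, expression values, binding predictions, and cutoffs are invented, and real pipelines rely on dedicated MHC-binding predictors and RNA-seq expression data.

```python
# Each tuple: (peptide, tumor expression in TPM, predicted IC50 in nM);
# all values are hypothetical.
candidates = [
    ("SYLDSGIHF", 42.0, 28.0),
    ("KTWGQYWQV", 0.3, 15.0),    # barely expressed -> filtered out
    ("AMFWSVPTV", 18.5, 310.0),  # weaker predicted binder -> ranked lower
]

MIN_EXPRESSION = 1.0  # assumed cutoff: the mutated gene must be expressed
MAX_IC50 = 500.0      # assumed cutoff: lower IC50 means tighter binding

shortlist = sorted(
    (c for c in candidates if c[1] >= MIN_EXPRESSION and c[2] <= MAX_IC50),
    key=lambda c: c[2],  # rank tightest predicted binders first
)
for peptide, tpm, ic50 in shortlist:
    print(f"{peptide}: {tpm} TPM, predicted IC50 {ic50} nM")
```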

Neoantigen ‘flags’

Biochemical validation of neoantigen peptide expression on the cancer cells’ surfaces was performed in collaboration with William Hildebrand’s group at the University of Oklahoma Health Sciences Center and provided critical assurance that the vaccine would elicit the most effective T cells to combat the melanoma.

“You can think of a neoantigen as a flag on each cancer cell,” says first author Beatriz Carreno, associate professor of medicine. “Each patient’s melanoma can have hundreds of different flags. As part of validating candidate vaccine neoantigens, we were able to identify the flags on the patients’ cancer cells. Then we created customized vaccines to a select group of flags on each patient’s tumor.”

Carreno and her colleagues selected a set of seven unique neoantigens for each vaccine and used specialized immune cells called dendritic cells, derived from the patients, to carry those neoantigens to the immune system. Dendritic cells play an important role in waking up the immune system, reminding T cells to attack the cancer.

After the vaccine infusions, the patients’ blood was drawn every week for about four months. By analyzing the blood samples, the researchers could see that each patient mounted an immune response to specific neoantigens in their vaccines. The vaccines also stimulated diverse clones of battle-ready T cells against neoantigens, suggesting this approach also could be used to activate a range of T cells and target them to mutations in other cancers with high mutation rates, such as lung cancer, bladder cancer, and certain colorectal cancers.

“Our team has developed a new strategy for personalized cancer immunotherapy,” Linette says. “Many researchers have hypothesized that it would be possible to use neoantigens to broadly activate the human immune system, but we didn’t know that for sure until now. We still have much more work to do, but this is an important first step and opens the door to personalized immune-based cancer treatments.”

The Barnes-Jewish Hospital Foundation, Siteman Cancer Frontier Fund, Our Mark on Melanoma Foundation, Come Out Swinging Foundation, Blackout Melanoma Foundation, the National Cancer Institute, and the National Human Genome Research Institute at the National Institutes of Health (NIH) supported the work.

Source: Washington University in St. Louis

The post Custom melanoma vaccines provoke T cells appeared first on Futurity.

These edits could make taxpayers more honest

Fri, 04/03/2015 - 13:04

More direct language on tax forms, both in paper and online, could cut tax evasion in the United States, say researchers.

For example, a suggested question on a tax form could read: “Did you receive cash or other compensation from providing services directly to customers and/or as an independent contractor? You must answer ‘yes’ or ‘no.'”

Most current tax forms do not include such direct wording, writes Stanford University law professor Joseph Bankman in a new working paper. Even a few changes based on social psychology insights could generate more tax revenue and taxpayer compliance, he writes.

“The explosion of research in social psychology over the past few decades, along with industry experience with data-driven interactive systems, suggests a different approach to the problem: Redesign the tax forms and online filing process to elicit more truthful responses from taxpayers,” says Bankman. His coauthors include the late Clifford Nass, a professor of communication at Stanford, and Joel Slemrod, a professor of economics at the University of Michigan.

Tax evasion costs federal, state, and local governments more than $400 billion a year, or about 17 percent of all taxes owed, according to the researchers. Typical tools to discourage evasion include audits, penalties, and third-party reporting. But these efforts, the authors suggest, have proven too expensive or politically unpopular.

To encourage greater compliance, Bankman and the others propose changes to tax forms.

Clear and direct

Tax forms—paper, electronic, and preparer-completed formats—could change to increase the “psychological cost of lying and the perceived risk of detection,” writes Bankman. Asking more direct questions would put the burden on taxpayers to give explicit answers. Today, the tax forms contain too many general questions that allow for deceptive answers, he says.

“The difference between these two alternatives can be thought of roughly as the difference between lying through commission and omission,” the researchers write.

Social scientists, notes Bankman, have found that lying is cognitively more difficult than truth-telling. It requires activation of additional parts of the brain, as well as the sympathetic nervous system.

Research also shows that people are generally “cognitive misers” who prefer to avoid effortful mental activity such as lying. Lying also produces “cognitive dissonance,” a negative state of mind caused by inconsistency between one’s behavior and beliefs.

Appeal to morality

The researchers suggest that taxpayers should swear under penalty of perjury to answer the questions honestly before filling out their forms, and they should be asked more detailed questions about income sources.

Bankman says that research by Stanford psychologist Benoît Monin and others in social psychology suggests a different approach to moral suasion, one that asks people to change their behavior to fit within a particular identity connected to a key decision to be made.

In other words, research shows that when people hear “please do not be a cheater” rather than “please do not cheat,” they’re less likely to cheat.

Bankman suggests including a similar phrase in the tax form’s perjury section or at the top of each page.

Online questioning

Another proposal is “adaptive questioning,” a technique Bankman writes is often used in e-commerce because of its efficiency. Similar to a chat session on many websites, the approach would draw on information the IRS already holds about the filer to shape which questions are asked.

“In the tax context, it would allow the IRS to ask more focused questions, which should reduce evasion and audit costs. It could also benefit taxpayers by reducing filing time and eliminating the risk of subsequent audit,” he says.
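As a rough illustration of what such an exchange might look like, here is a minimal Python sketch of adaptive questioning. The record fields and the trigger logic are hypothetical; the direct question itself borrows the wording proposed earlier in this story.

```python
# Minimal sketch of "adaptive questioning": the next question depends on data
# the tax authority already holds about the filer. Field names are hypothetical.

def next_question(filer_record, answers):
    # If third-party data suggests contractor income the filer hasn't yet
    # addressed, ask a direct yes/no question instead of a generic one.
    if filer_record.get("third_party_1099_reported") and "contractor_income" not in answers:
        return ("Did you receive cash or other compensation from providing services "
                "directly to customers and/or as an independent contractor? "
                "You must answer 'yes' or 'no'.")
    if answers.get("contractor_income") == "yes" and "contractor_amount" not in answers:
        return "What was the total amount of that compensation?"
    return None  # no further focused questions for this filer

# Example: a filer with a third-party 1099 on record gets the direct question.
print(next_question({"third_party_1099_reported": True}, {}))
```

The design point is the one the researchers make: a question the filer must answer ‘yes’ or ‘no’ converts evasion by omission into an explicit lie, which the social psychology research described above suggests is psychologically costlier.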

Bankman explains that an interaction that feels like a conversation is intrinsically more pleasant than a standardized tax form with less personal language.

“For example, if the interaction is framed as a set of questions, the taxpayer should have the ability to ask a question back, for example, ‘Why are you asking this?'” he says.

Bankman says he isn’t calling for more government intrusion or heavy regulation of tax filing, but more direct communication and the use of information available through a data-driven approach.

“The proposals described would make the government a smarter user of information and require that the taxpayer verify some forms of information. Potential benefits include reduced evasion, reduced compliance costs and (more speculatively) a better filing experience,” Bankman and the others write.

Tax evasion is harmful, they contend, because taxes that go uncollected must be made up from other sources, such as higher tax rates on compliant taxpayers and on sectors of the economy less associated with tax dodging.

According to the authors, the highest compliance rates are associated with people who report job wages and investments, while the lowest are found among those who run individual businesses, such as moonlighting professionals, the self-employed, and independent contractors.

Source: Stanford University

The post These edits could make taxpayers more honest appeared first on Futurity.

Let’s tolerate a little inflation, says economist

Fri, 04/03/2015 - 11:44

The Federal Reserve should extend recent dramatic job growth by keeping US interest rates near zero and tolerating a little inflation, argues economist Laurence Ball.

By the end of 2015, forecasters say, the United States could hit the Fed’s definition of “maximum employment”—with an unemployment rate of roughly 5.2 to 5.5 percent. With this jobs goal in sight, the Fed is expected to begin raising interest rates to try to avoid overheating the economy and triggering inflation.

By holding off on interest rate hikes, however, Ball suggests, the central bank could drive the unemployment rate “well below 5 percent”—and bring even the long-term jobless back into the workforce.

‘High-pressure’ economy

“If policymakers would accept a modest overshoot of their inflation target, they could do more to reverse damage from the Great Recession,” writes Ball in a paper published by the Center for Budget and Policy Priorities. “If a recession leaves workers discouraged and detached from the labor force, a high-pressure economy with plentiful job opportunities could draw them back in.”

Such a “high-pressure” economy would likely cause only “modest and temporary” violation of the Fed’s inflation target and “any adverse effects would be slight compared to the gains in employment and output,” Ball says.

Recent US job growth has been good. The unemployment rate dipped in February and March to 5.5 percent, the lowest since before the 2008 financial crisis and well below the 10 percent recorded in October 2009, just after the recession ended and recovery began.

Also in February, total US nonfarm employment rose by 295,000, following an average monthly gain of 266,000 over the prior 12 months, according to the Bureau of Labor Statistics. The BLS reported a weaker gain of 126,000 jobs in March.

Missing jobs

Ball, professor of economics at Johns Hopkins University and a former visiting scholar at the Federal Reserve, contends that accepting an unemployment rate of 5.2 percent and an inflation rate of 2 percent, while safe and conventional, is not optimal policy for the United States right now.

To begin with, he says, the improving unemployment rate masks considerable lingering damage from the recession. Unexpectedly large numbers of workers stopped looking for jobs and have not returned to the labor force, and still more are working part-time involuntarily. As a result, “millions of jobs lost during the Great Recession are not coming back in the return to normalcy envisaged by the Fed,” says Ball, who is also a research associate at the National Bureau of Economic Research.

“Today we are left with short-term unemployment near its natural rate,” Ball writes, “but with a legacy of long-term unemployment and non-participation that will persist if policy is not sufficiently expansionary.”

The Fed doesn’t want inflation to rise to 2.5 percent or 3 percent, even temporarily. But Ball says there is little evidence that inflation even as high as 4 percent would significantly harm the economy, and that it would be a small price to pay for the resulting sustained job gains.

“The Fed should do everything it can to promote a high-pressure economy,” he writes, “not increase interest rates and choke off growth as soon as inflation threatens to rise.”

Source: Johns Hopkins University

The post Let’s tolerate a little inflation, says economist appeared first on Futurity.

Cancer drug restores memory in mice

Fri, 04/03/2015 - 11:30

An experimental cancer drug restored memory and brain connections in mice with a model of Alzheimer’s disease.

The drug, which had disappointing results in treating solid tumors, appears to block damage triggered during the formation of amyloid-beta plaques, a hallmark of Alzheimer’s disease, researchers say.

The new study, funded by a National Institutes of Health program that tests failed drugs on different diseases, has led to the launch of human trials to test the efficacy of the drug, AZD05030, in Alzheimer’s patients.

Speedy path to human trials

“With this treatment, cells under bombardment by beta amyloid plaques show restored synaptic connections and reduced inflammation, and the animal’s memory, which was lost during the course of the disease, comes back,” says Stephen M. Strittmatter, professor of neurology at Yale University and senior author of the study in Annals of Neurology.

In the last five years, scientists have developed a more complete understanding of the complex chain of events that leads to Alzheimer’s disease. The new drug blocks one of those molecular steps, activation of the enzyme FYN, which leads to the loss of synaptic connections between brain cells.

Several other steps in the disease process also have the potential to be targets for new drugs, Strittmatter says.

“The speed with which this compound moved to human trials validates our New Therapeutic Uses program model and serves our mission to deliver more treatments to more patients more quickly,” says Christopher P. Austin, director of NIH’s National Center for Advancing Translational Sciences, which funded the work.

Christopher H. van Dyck, a coauthor of the paper, and Strittmatter have initiated a multi-site clinical trial to determine whether the drug can also benefit Alzheimer’s patients.

More information on the trial is available at the Alzheimer’s Disease Cooperative Study website or at ClinicalTrials.gov.

The study was funded by NCATS and the NIH Common Fund, through the Office of Strategic Coordination/Office of the NIH Director.

Source: Yale University

The post Cancer drug restores memory in mice appeared first on Futurity.

Nanocrystals can toughen up concrete

Fri, 04/03/2015 - 11:11

New research demonstrates that cellulose nanocrystals can increase the tensile strength of concrete by 30 percent.

The cellulose nanocrystals (CNCs) could be refined from industrial byproducts generated in the paper, bioenergy, agriculture, and pulp industries. They are extracted from structures called cellulose microfibrils, which help to give plants and trees their high strength, light weight, and resilience.

“This is an abundant, renewable material that can be harvested from low-quality cellulose feedstocks already being produced in various industrial processes,” says Pablo Zavattieri, an associate professor in the Lyles School of Civil Engineering at Purdue University.

The cellulose nanocrystals might be used to create a new class of biomaterials with wide-ranging applications, such as strengthening construction materials and automotive components.

Getting hydrated

One factor limiting the performance of today’s concrete is that not all of the cement particles hydrate after mixing, leaving pores and defects that hamper strength and durability.

“So, in essence, we are not using 100 percent of the cement,” Zavattieri says.

However, the researchers discovered that the cellulose nanocrystals increase the hydration of the concrete mixture, allowing more of the cement to cure, potentially altering the structure of the concrete and strengthening it. As a result, less concrete needs to be used.

The cellulose nanocrystals are about 3 to 20 nanometers wide and 50 to 500 nanometers long. A grain of sand is roughly half a millimeter, or 500,000 nanometers, across, so even the longest crystals are about 1/1,000th its width, making them too small to study with light microscopes and difficult to measure with laboratory instruments.

The concrete was studied using several analytical and imaging techniques. Because the chemical reactions that harden concrete are exothermic, some of the tests measured the amount of heat released; greater heat release indicates greater hydration.

The researchers also worked out where the nanocrystals are likely to sit in the cement matrix and how they interact with cement particles in both fresh and hardened concrete. The nanocrystals were shown to form little inlets that let water penetrate the concrete more thoroughly.

The findings appear in the journal Cement and Concrete Composites. Researchers from Purdue and the US Forest Service’s Forest Products Laboratory collaborated on the work.

The National Science Foundation funded the research.

Source: Purdue University

The post Nanocrystals can toughen up concrete appeared first on Futurity.

These 6 symptoms predict Ebola risk

Fri, 04/03/2015 - 09:47

When deciding whether a sick patient belongs in an Ebola treatment unit (ETU), doctors want to be right because any misdiagnosis is terribly dangerous.

Returning an Ebola case to the community leaves a patient untreated and prolongs the epidemic, but admitting someone with a different illness exposes them to Ebola while in the ETU.

Adam Levine, assistant professor of emergency medicine at Brown University, spent last fall fighting Ebola in Bong County, Liberia. Using data from there, he and colleagues calculated a simple, sensitive, and specific Ebola Prediction Score for triaging a patient’s Ebola risk.

The Ebola Prediction Score appears in the journal Annals of Internal Medicine.

Levine discussed the score, which simplifies other systems that require parsing through algorithms based on 14 symptoms, with university writer David Orenstein.

How is a prediction score different from a more basic list of symptoms?

The problem with a very long list of symptoms is it takes a very long time to assess. In an epidemic setting where you have a lot of patients, having to go through a very long list of symptoms can be difficult, especially when you have to figure out how to translate each of those different symptoms into not only the local language but also into terminology people will understand. For instance, having people separate out pain in the joints from pain in the muscles can be difficult.

In addition, when you hire staff to work in the ETU, you have to train them to do triage properly. The more complicated the triage algorithm, the more difficult it is to train them and the more errors will be made.

So the idea of a prediction score is to use statistics to pull out the small number of symptoms and signs that actually have the most predictive power—ideally the same amount of predictive power as the full list you started with.
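For readers curious what that winnowing looks like in practice, the sketch below uses an L1-penalized logistic regression, which drives the coefficients of uninformative symptoms to zero. The data here are random placeholders and the regularization strength is an arbitrary choice; the study fit its model to real patient records, and its exact methods are not reproduced here.

```python
# Sketch of reducing a long symptom checklist to the few most predictive signs.
# Placeholder data only; illustrates the idea, not the study's actual analysis.

import numpy as np
from sklearn.linear_model import LogisticRegression

symptoms = ["fever", "sick_contact", "diarrhea", "loss_of_appetite",
            "muscle_pain", "difficulty_swallowing", "abdominal_pain"]  # ...of 14
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, len(symptoms)))  # one 0/1 column per symptom
y = rng.integers(0, 2, size=200)                   # lab-confirmed Ebola (placeholder)

# The L1 penalty zeroes out symptoms that add little predictive power.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
kept = [(s, round(w, 2)) for s, w in zip(symptoms, model.coef_[0]) if w != 0.0]
print(kept)  # nonzero weights survive; a negative weight is a negative predictor
```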

What symptoms are components of the score?

In our logistic regression analysis, one of the symptoms that came out was “sick contact”—basically direct or indirect contact with somebody who had Ebola—which is not surprising. We would expect that to be a very strong predictor because Ebola is transmitted through body fluids.

The other symptoms were diarrhea, loss of appetite, muscle pains, difficulty swallowing, and a negative predictor: abdominal pain.

I can only conjecture that these were pulled out because these were the ones that were more strongly associated with Ebola and best separated patients with Ebola from those with other diseases. One common sign that everyone thinks about is fever. But fever is not very good at differentiating Ebola from other diseases like malaria or typhoid or even influenza that are really common in West Africa.

When you have patients presenting to your ETU, it’s because they are sick. You want to separate out the ones who have Ebola to keep in your ETU and then send the ones who have another disease someplace else for treatment.

As it turns out, these six symptoms have almost the same predictive power as the full 14, so nothing is lost by going from 14 symptoms to six.

How could having a score have helped at the ETU in Liberia?

Managing an ETU requires balancing the epidemiologic imperative of trying to find every last case in order to stop the epidemic against the ethical imperative to do no harm. There is perhaps no greater harm I’ve done in my career than admitting patients without Ebola to our ETU, which I did many, many times, because I put those patients at risk.

I especially worry about the pregnant women and the children that we admitted to our ETU who didn’t have Ebola and the risk that we put them at for contracting the disease.

In that setting it’s going to be one to three days minimum before you have confirmatory testing. In the meantime, admitted patients are kept in the suspect area. If you have a scoring system that helps you figure out who is high risk, who is medium risk, and who is low risk, then you can separate them into different wards so they aren’t within contamination distance of catching Ebola from a neighbor. You could make sure they have different latrines, for instance.

That’s the goal. It’s not necessarily more sensitive than the other algorithms out there but what it does is it allows us to risk-stratify patients.

How does the EPS compare to other Ebola diagnostic tools?

The two main algorithms are the ones developed by Doctors Without Borders and the WHO. Both those algorithms were developed based on expert opinion. They were never derived or validated empirically.

We showed in our study that the WHO algorithm is actually pretty sensitive for picking up Ebola. It’s not very specific, but it’s pretty sensitive, which is its main goal. The problem with these algorithms is that they are very difficult to apply. They have a lot of ‘this plus this, or this plus this.’ If you are trying to make sure that somebody applies an algorithm correctly every single time, it’s very complicated to do.

A simple scoring system is a little bit more intuitive. You just have a list of symptoms on a page with the points assigned; people add up the points and know whether the person is likely to have Ebola, and how likely.
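A points-based score of the kind Levine describes could look something like the following sketch. The point values are invented for illustration; the published Ebola Prediction Score assigns its own weights, which this article does not reproduce.

```python
# Hypothetical points for the six symptoms named above; the real score's
# weights come from the study's regression and are not shown in this article.
HYPOTHETICAL_POINTS = {
    "sick_contact": 3,         # contact with a known or suspected case
    "diarrhea": 1,
    "loss_of_appetite": 1,
    "muscle_pain": 1,
    "difficulty_swallowing": 1,
    "abdominal_pain": -1,      # a negative predictor in the study
}

def ebola_score(findings):
    """Add up points for each finding present; a higher total means higher risk."""
    return sum(pts for sign, pts in HYPOTHETICAL_POINTS.items() if findings.get(sign))

# Example triage: score ranges could map to low / medium / high risk wards.
patient = {"sick_contact": True, "diarrhea": True, "abdominal_pain": True}
print(ebola_score(patient))  # 3 + 1 - 1 = 3
```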

How optimistic are you about the development of a rapid blood test?

There was one that was just approved by WHO. It’s not perfectly sensitive or specific. It doesn’t eliminate the need to admit a patient to a suspect ward and wait for confirmatory testing. It would be ideal to have a highly sensitive assay that could be done at point of care, just a drop of blood on a small piece of plastic. Then within a minute you have an answer, sort of like a pregnancy test. This type of rapid diagnostic test has been in use now for about 10 years with malaria and it has really revolutionized care.

But even if all you are doing is getting a drop of someone’s blood, that still requires dressing up in full personal protective equipment. It’s going to take a lot of human resources, a lot of material resources, and a lot of training just to draw that drop of blood.

Having some sort of prediction score can help identify which patients don’t even need the rapid test, because they are so unlikely to have Ebola; which should get the rapid test, because they may have Ebola; and perhaps which are so likely to have Ebola that you should just admit them and do the confirmatory testing, because the rapid test is not going to be sensitive enough to rule it out.

What are the next steps?

Our data was collected from a single ETU. There are several studies starting to come out now from individual ETUs around West Africa. These are important and we are learning a lot from them. What would be even more impactful is if we could pool data together from multiple ETUs across West Africa. That would give us more statistical power to not only find what are the variables that are most predictive of Ebola, but also the variables that are most predictive of mortality or survival and what types of treatments tend to work better than others.

That requires a lot of different organizations coming together to collaborate and to share data, which is not something that happens frequently after humanitarian emergencies. But I’m happy to say that I’m involved in a process right now whereby a number of these different organizations that ran ETUs—including my own organization, International Medical Corps—are starting to talk about how we can pool our data.

Source: Brown University

The post These 6 symptoms predict Ebola risk appeared first on Futurity.

