Futurity.org

Research news from top universities.

Can your spouse’s personality get you a raise?

Mon, 09/22/2014 - 07:28

A new study shows that your spouse’s personality traits—particularly conscientiousness—have an impact on your career success.

“Our study shows that it is not only your own personality that influences the experiences that lead to greater occupational success, but that your spouse’s personality matters too,” says lead author Joshua Jackson, assistant professor of psychology at Washington University in St. Louis.

“The experiences responsible for this association are not likely isolated events where the spouse convinces you to ask for a raise or promotion,” Jackson says.

“Instead, a spouse’s personality influences many daily factors that sum up and accumulate across time to afford one the many actions necessary to receive a promotion or a raise.”

Forthcoming in the journal Psychological Science, the findings are based on a five-year study of nearly 5,000 married people ranging in age from 19 to 89, with both spouses working in about 75 percent of the sample.

Jackson and coauthor Brittany Solomon, a graduate student in psychology, analyzed data on study participants who took a series of psychological tests to assess their scores on five broad measures of personality—openness, extraversion, agreeableness, neuroticism, and conscientiousness.

In an effort to gauge whether these spousal personality traits might be seeping into the workplace, they tracked the on-the-job performance of working spouses using annual surveys designed to measure occupational success—self-reported opinions on job satisfaction, salary increases, and the likelihood of being promoted.

Workers who scored highest on measures of occupational success tended to have a spouse with a personality that scored high for conscientiousness, and this was true whether or not both spouses worked and regardless of whether the working spouse was male or female, the study found.

Three perks of a conscientious spouse

Jackson and Solomon also tested several theories for how a spouse’s personality traits, especially conscientiousness, might influence their partner’s performance in the workplace. Their findings suggest that having a conscientious spouse contributes to workplace success in three ways.

First, through a process known as outsourcing, the working spouse may come to rely on his or her partner to handle more of the day-to-day household chores, such as paying bills, buying groceries, and raising children.

Workers also may be likely to emulate some of the good habits of their conscientious spouses, bringing traits such as diligence and reliability to bear on their own workplace challenges.

Finally, having a spouse that keeps your personal life running smoothly may simply reduce stress and make it easier to maintain a productive work-life balance.

Previous research with romantic partners has shown that a bad experience in one social context can bleed over into another (a bad day at work can lead to a grumpy spouse and a tense night at home, for example). This study goes further, suggesting that this sort of pattern plays out day in and day out, exerting a subtle but important influence on our performance in environments far removed from our home lives and our spouses.

The findings, they suggest, also have interesting implications for how we go about choosing romantic partners.

While previous research suggests that people seeking potential mates tend to look for partners who score high on agreeableness and low on narcissism, this study suggests that people with ambitious career goals may be better served by seeking supportive partners with highly conscientious personalities.

“This is another example where personality traits are found to predict broad outcomes like health status or occupational success, as in this study,” Jackson says.

“What is unique to this study is that your spouse’s personality has an influence on such important life experiences.”

Source: Washington University in St. Louis

The post Can your spouse’s personality get you a raise? appeared first on Futurity.

Some meditation sparks better brain performance

Mon, 09/22/2014 - 06:47

Different types of meditation have qualitatively different effects on the mind and body, report researchers. Whereas the Vajrayana style of Buddhist meditation produces an arousal response, the Theravada style produces a relaxation response.

In particular, the research team found that Vajrayana meditation, which is associated with Tibetan Buddhism, can lead to enhancements in cognitive performance.

Previous studies had defined meditation as a relaxation response and had attempted to categorize it as involving either focused or distributed attentional systems.

Neither of these hypotheses received strong empirical support, and most of the studies focused on Theravada meditative practices.

Four kinds of meditation

Associate Professor Maria Kozhevnikov and Dr Ido Amihai of the National University of Singapore’s psychology department examined four different types of meditative practices: two Vajrayana (Tibetan Buddhism) practices (visualization of self-generation-as-Deity and Rig-pa) and two Theravada practices (Shamatha and Vipassana).

They collected electrocardiographic (EKG) and electroencephalographic (EEG) responses and also measured behavioral performance on cognitive tasks using a pool of experienced Theravada practitioners from Thailand and Nepal, as well as Vajrayana practitioners from Nepal.

They observed that physiological responses during Theravada meditation differed significantly from those during Vajrayana meditation.

Theravada meditation produced enhanced parasympathetic activation (relaxation). In contrast, Vajrayana meditation did not show any evidence of parasympathetic activity but showed an activation of the sympathetic system (arousal).

The researchers also observed an immediate, dramatic increase in performance on cognitive tasks following only the Vajrayana styles of meditation. They note that such a dramatic boost in attentional capacity is impossible during a state of relaxation.

Their results show that Vajrayana and Theravada styles of meditation are based on different neurophysiological mechanisms, which give rise to either an arousal or relaxation response.

A competition strategy?

The findings from the study, published in PLOS ONE, show that Vajrayana meditation can lead to dramatic enhancement in cognitive performance, suggesting that Vajrayana meditation could be especially useful in situations where it is important to perform at one’s best, such as during competition or states of urgency.

On the other hand, Theravada styles of meditation are an excellent way to decrease stress, release tension, and promote deep relaxation.

After seeing that even a single session of Vajrayana meditation can lead to radical enhancements in brain performance, Kozhevnikov and Amihai will be investigating whether permanent changes could occur after long-term practice.

The researchers are also looking at how non-practitioners can benefit from such meditative practices.

“Vajrayana meditation typically requires years of practice, so we are also looking into whether it is also possible to acquire the beneficial effects of brain performance by practicing certain essential elements of the meditation,” says Kozhevnikov.

“This would provide an effective and practical method for non-practitioners to quickly increase brain performance in times of need.”

Source: National University of Singapore

New polymer makes solar cells more efficient

Mon, 09/22/2014 - 04:58

Solar cells made from polymers have the potential to be cheap and lightweight, but scientists are struggling to make them generate electricity efficiently.

A polymer is a type of large molecule that forms plastics and other familiar materials.

“The field is rather immature—it’s in the infancy stage,” says Luping Yu, a professor in chemistry at the University of Chicago.

Now a team of researchers led by Yu has identified a new polymer that allows electrical charges to move more easily through the cell, boosting electricity production.

“Polymer solar cells have great potential to provide low-cost, lightweight, and flexible electronic devices to harvest solar energy,” says Luyao Lu, a graduate student in chemistry and lead author of a paper in the journal Nature Photonics that describes the result.

The active regions of such solar cells are composed of a mixture of polymers that give and receive electrons to generate electrical current when exposed to light. The new polymer developed by Yu’s group, called PID2, improves the efficiency of electrical power generation by 15 percent when added to a standard polymer-fullerene mixture.

“Fullerene, a small carbon molecule, is one of the standard materials used in polymer solar cells,” Lu says. “Basically, in polymer solar cells we have a polymer as electron donor and fullerene as electron acceptor to allow charge separation.”

In their work, the researchers added another polymer into the device, resulting in solar cells with two polymers and one fullerene.

8.2 percent efficiency

The group achieved an efficiency of 8.2 percent when an optimal amount of PID2 was added—the highest ever for solar cells made up of two types of polymers with fullerene—and the result implies that even higher efficiencies could be possible with further work.

The group, which includes researchers at the Argonne National Laboratory, is now working to push efficiencies toward 10 percent, a benchmark necessary for polymer solar cells to be viable for commercial application.

The result was remarkable not only because of the advance in technical capabilities, Yu notes, but also because PID2 enhanced the efficiency via a new method. The standard mechanism for improving efficiency with a third polymer is by increasing the absorption of light in the device.

How it works

But in addition to that effect, the team found that when PID2 was added, charges were transported more easily between polymers and throughout the cell.

In order for a current to be generated by the solar cell, electrons must be transferred from polymer to fullerene within the device. But the difference between electron energy levels for the standard polymer-fullerene is large enough that electron transfer between them is difficult. PID2 has energy levels in between the other two, and acts as an intermediary in the process.

“It’s like a step,” Yu says. “When it’s too high, it’s hard to climb up, but if you put in the middle another step then you can easily walk up.”
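
The step analogy can be put in simple arithmetic. The energy values below are made up purely for illustration (the article does not give the measured levels); only the idea that an intermediate level splits one large energy gap into two smaller steps comes from the researchers’ description:

```python
# Toy illustration of an intermediate energy level acting as a "step".
# All values are hypothetical, chosen only to show the arithmetic.
donor = -3.0         # eV, donor polymer level (made-up value)
acceptor = -4.0      # eV, fullerene acceptor level (made-up value)
intermediate = -3.5  # eV, a PID2-like level sitting between the two

direct_gap = abs(acceptor - donor)          # one large jump
step1 = abs(intermediate - donor)           # donor -> intermediate
step2 = abs(acceptor - intermediate)        # intermediate -> acceptor

print(f"single jump: {direct_gap:.1f} eV")
print(f"two steps:   {step1:.1f} eV then {step2:.1f} eV")
```

Each individual step is smaller than the direct gap, which is the sense in which the intermediate level makes the climb easier.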

The addition of PID2 caused the polymer blend to form fibers, which improve the mobility of electrons throughout the material. The fibers serve as a pathway to allow electrons to travel to the electrodes on the sides of the solar cell.

“It’s like you’re generating a street and somebody that’s traveling along the street can find a way to go from this end to another,” Yu explains.

To reveal this structure, Wei Chen of the Materials Science Division at Argonne National Laboratory and the Institute for Molecular Engineering performed X-ray scattering studies using the Advanced Photon Source at Argonne and the Advanced Light Source at Lawrence Berkeley.

“Without that it’s hard to get insight about the structure,” Yu says. “That benefits us tremendously.”

“This knowledge will serve as a foundation from which to develop high-efficiency organic photovoltaic devices to meet the nation’s future energy needs,” Chen adds.

The National Science Foundation, Air Force Office of Scientific Research, and US Department of Energy funded the research.

Source: University of Chicago

Puerto Ricans who inject drugs at high risk for HIV

Mon, 09/22/2014 - 03:05

Puerto Rican people who inject drugs and live in the Northeast United States and in Puerto Rico are among Latinos at the highest risk of contracting HIV.

A new study suggests that the development of a coordinated, multi-region campaign should be integrated into any effort to end the HIV/AIDS epidemic.

Higher HIV risk behaviors and prevalence have been reported among Puerto Rican people who inject drugs since early in the HIV epidemic.

The new research, published online in the American Journal of Public Health, describes the epidemic and the availability of HIV prevention and treatment programs in areas with a high concentration of Puerto Ricans, in order to provide recommendations to reduce HIV in the population.

“We reviewed HIV-related data for PRPWID living in Puerto Rico and Northeastern US, which contains the highest concentration of Puerto Ricans out of any US region,” says Sherry Deren, senior research scientist at New York University College of Nursing, and director of the Center for Drug Use and HIV Research.

“Injection drug use as a risk for HIV continues to be over-represented among Puerto Ricans. Lower availability of HIV prevention tools (syringe exchange and drug treatment) and ART treatment challenges, for PWID in PR, contribute to higher HIV risk and incidence for PRPWID in both locations.”

Most new infections in US Northeast

In 2010, the Northeast had the highest reported rates of new AIDS diagnoses, with Hispanics accounting for 27 percent of those diagnosed in the region.

Furthermore, 48.7 percent of Hispanics in the US with a diagnosis of HIV were located in the Northeast. The Northeast also had more new infections attributed to injection drug use (15.8 percent) than other regions of the US (8.8 percent). Even so, the 2010 rate of HIV diagnoses attributed to injection drug use in Puerto Rico (20.4 percent) was more than twice that in the rest of the US.

Although Puerto Rican people make up only 9 percent of the US Hispanic population, nearly 23 percent of HIV cases among Hispanics occur among people born in Puerto Rico. Researchers also note that heterosexual HIV transmission has now surpassed injection-related HIV transmission in Puerto Rico (40.7 percent versus 20.4 percent).

“Controlling heterosexual transmission of HIV will require controlling HIV infection among people who inject drugs, as those who inject drugs and are sexually active will serve as a continuing reservoir for future heterosexual transmission if injecting-related HIV transmission is not brought under control,” Deren says.

Needle exchange

Syringe exchange and drug treatment programs are the two primary methods responsible for reducing the rate of infection among people who inject drugs. The efficacy of such programs in reducing HIV transmission is well established, but evidence indicates that these services are much less available in Puerto Rico.

In a 2011 survey of syringe exchange programs in the Northeast and Puerto Rico, the researchers found that the annual budgets of programs in the Northeast averaged more than $400,000, more than five times the $80,000 average in Puerto Rico.

“The differences in the annual budgets have very important implications for reducing HIV transmission and other health problems among people who inject drugs,” Deren says. “Larger budgets for such programs allow a greater number of syringes to be exchanged, and allow programs to offer other services in addition to the exchange, such as HIV screenings.”

The continuing ban on the use of US federal funds for needle exchanges makes it harder to add such public health programs in Puerto Rico. Additionally, while there are still gaps in drug treatment program availability across the US, Puerto Rico has a narrower range of such services.

“In light of the lack of available resources in Puerto Rico, many individuals migrate to the Northeast seeking drug treatment,” Deren says. “Many of those coming to the Northeast, however, do not become engaged in evidence-based drug treatment.”

To deal with the problem, the researchers call for the development of a federally supported Northeast/Puerto Rico collaborative initiative and emphasize the need for the development and implementation of culturally appropriate HIV prevention interventions.

The National Institute on Drug Abuse supported the study.

Source: New York University

Exercise added to chemo shrinks tumors faster

Mon, 09/22/2014 - 02:47

Adding exercise to a regimen of chemotherapy shrinks tumors more than chemotherapy alone, according to a study with mice.

Exercise has long been recommended to cancer patients for its physical and psychological benefits. For a new study, published in the American Journal of Physiology, researchers were particularly interested in testing whether exercise could protect against the negative cardiac-related side effects of the common cancer drug doxorubicin.

Though effective at treating a variety of types of cancer, doxorubicin is known to damage heart cells, which could lead to heart failure in the long term.

“The immediate concern for these patients is, of course, the cancer, and they’ll do whatever it takes to get rid of it,” says senior author Joseph Libonati, associate professor in the School of Nursing at the University of Pennsylvania.

“But then when you get over that hump you have to deal with the long-term elevated risk of cardiovascular disease.”

Previous studies had shown that an exercise regime prior to receiving chemotherapy could protect heart cells from the toxic effects of doxorubicin, but few had looked to see whether an exercise regimen during chemotherapy could be beneficial.

To do so, Libonati’s team set up an experiment with four groups of mice. All were given an injection of melanoma cells in the scruff of the neck. During the next two weeks, two of the groups received doxorubicin in two doses while the other two groups received placebo injections. Mice in one of the treated groups and one of the placebo groups were put on exercise regimens, walking 45 minutes, five days a week, on mouse-sized treadmills, while the rest of the mice remained sedentary.
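
The four-group setup described above is a classic 2 x 2 factorial design: crossing the drug factor with the activity factor lets the researchers separate the drug effect, the exercise effect, and any interaction between the two. A minimal sketch (the group labels here are mine, not the authors’):

```python
from itertools import product

# Two factors, two levels each, as in the study design.
drugs = ["doxorubicin", "placebo"]
activity = ["exercise", "sedentary"]

# Crossing the factors yields the four experimental groups.
groups = [f"{d} + {a}" for d, a in product(drugs, activity)]
print(groups)
```

Comparing "doxorubicin + exercise" against "doxorubicin + sedentary" isolates what exercise adds on top of chemotherapy, which is exactly the comparison behind the tumor result reported below.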

‘Amazing’ results

After the two-week trial, the researchers examined the animals’ hearts using echocardiograms and tissue analysis. As expected, doxorubicin reduced the heart’s function and size and increased fibrosis—a damaging thickening of tissue. Mice that exercised were not protected from this damage.

“We looked, and the exercise didn’t do anything to the heart—it didn’t worsen it, it didn’t help it,” Libonati says. “But the tumor data—I find them actually amazing.”

The “amazing” result was that, after two weeks, the mice that both received chemotherapy and exercised had significantly smaller tumors than mice that only received doxorubicin.

Further studies will investigate exactly how exercise enhances the effect of doxorubicin, but the researchers believe it could be in part because exercise increases blood flow to the tumor, bringing with it more of the drug in the bloodstream.

“If exercise helps in this way, you could potentially use a smaller dose of the drug and get fewer side effects,” Libonati says. Gaining a clearer understanding of the many ways that exercise affects various systems of the body could also pave the way for developing drugs that mimic the effects of exercise.

“People don’t take a drug and then sit down all day,” he says. “Something as simple as moving affects how drugs are metabolized. We’re only just beginning to understand the complexities.”

The National Cancer Institute, National Heart Lung and Blood Institute, Biobehavioral Research Center at Penn, National Center for Research Resources, and National Center for Advancing Translational Sciences supported the study.

Source: University of Pennsylvania

Brains grow and shrink like ‘rainbows’ as we age

Mon, 09/22/2014 - 02:33

Researchers have used a new magnetic resonance imaging technique to show, for the first time, how human brain tissue changes throughout life. They say a normal curve is shaped like a rainbow.

Nerve bundles in the brain increase in volume until we turn 40 and then—like the rest of our body—slowly start to deteriorate. By the end of our lives, the tissue in our brain is about the volume seen in a 7-year-old child.

Knowing what’s normal at different ages, doctors can image a patient’s brain, compare it to the standard curve, and tell whether the person falls outside the normal range, much as a growth chart can help identify kids who have fallen below their growth curve.
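
The growth-chart idea can be sketched in code: fit a normative curve to a cohort, then express a new patient’s measurement as a distance from that curve in standard-deviation units. Everything below uses synthetic data and a made-up curve shape; only the approach (a normative age curve plus a deviation score) reflects the method described in the article:

```python
import numpy as np

# Synthetic cohort: 102 subjects aged 7-85, as in the study,
# with a rainbow-shaped (inverted-U) toy white-matter curve.
rng = np.random.default_rng(0)
ages = rng.uniform(7, 85, 102)
true_curve = lambda a: 1.0 - 0.0004 * (a - 45) ** 2
volumes = true_curve(ages) + rng.normal(0, 0.05, ages.size)

# The quadratic fit plays the role of the "standard curve".
coeffs = np.polyfit(ages, volumes, deg=2)
residual_sd = np.std(volumes - np.polyval(coeffs, ages))

def z_score(age, volume):
    """Distance of a patient's measurement from the age-expected norm, in SDs."""
    return (volume - np.polyval(coeffs, age)) / residual_sd

# A patient well below the curve gets a strongly negative score.
print(round(z_score(60, 0.5), 1))
```

In this framing, a patient several standard deviations off the curve would be flagged for follow-up, just as a child far below a growth-chart percentile would be.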

Scientists looked at 24 brain regions to see how the composition changed from age 7 to 83. The regions in red changed the most; regions in blue changed the least. (Credit: Wandell Lab)

The technique has already been used to identify previously overlooked changes in the brain of people with multiple sclerosis.

“This allows us to look at people who have come into the clinic, compare them to the norm and potentially diagnose or monitor abnormalities due to different diseases or changes due to medications,” says Jason Yeatman, a graduate student in psychology at Stanford University and first author on a paper published in Nature Communications.

Problem with MRI

For decades scientists have been able to image the brain using magnetic resonance imaging (MRI) and detect tumors, brain activity, or abnormalities in people with some diseases, but those measurements were all subjective. A scientist measuring some aspect of the brain in one lab couldn’t directly compare findings with someone in another lab. And because no two scans could be compared, there was no way to look at a patient’s image and know whether it fell outside the normal range.

“A big problem in MRI is variation between instruments,” says research associate Aviv Mezer, senior author of the paper. Last year Mezer and Brian Wandell, professor of psychology, helped develop a technique to compare MRI scans quantitatively between labs. That work is described in the journal Nature Medicine.

“Now with that method we found a way to measure the underlying tissue and not the instrumental bias. So that means that we can measure 100 subjects here and Jason can measure another 100 in Seattle (where he is now a postdoctoral fellow) and we can put them all in a database for the community.”

White matter matters

The technique the team developed measures the amount of white matter tissue in the brain. White matter volume comes primarily from myelin, an insulating covering that allows nerves to fire most efficiently and is a hallmark of brain maturation, though white matter can also contain other types of brain cells.

White matter plays a critical role in brain development and decline, and several diseases including schizophrenia and autism are associated with white matter abnormalities. Despite its importance in normal development and disease, no metric existed for determining whether any person’s white matter fell within a normal range, particularly if the people were imaged on different machines.

The researchers decided to use the newly developed quantitative technique to develop a normal curve for white matter levels throughout life. They imaged 24 regions within the brains of 102 people ages 7 to 85, and from that established a set of curves showing the increase and then eventual decrease in white matter in each of the 24 regions throughout life.

What they found is that the normal curve for brain composition is rainbow-shaped. It starts and ends with roughly the same amount of white matter and peaks between ages 30 and 50. But each of the 24 regions changes by a different amount. Some parts of the brain, like those that control movement, follow long, flat arcs, staying relatively stable throughout life.

Others, like the areas involved in thinking and learning, follow steep arches, maturing dramatically and then falling off quickly. (The group points out that their sample started at age 7, by which point a lot of brain development has already occurred.)

Outside the normal curve

“Regions of the brain supporting high-level cognitive functions develop longer and have more degradation,” Yeatman says. “Understanding how that relates to cognition will be really important and interesting.”

Yeatman is now a postdoctoral scholar at the University of Washington, and Mezer is now an assistant professor at the Hebrew University of Jerusalem. They plan to continue collaborating with each other and with other members of the Wandell lab, looking at how brain composition correlates with learning and how it could be used to diagnose diseases, learning disabilities or mental health issues.

The group has already shown that they can identify people with multiple sclerosis (MS) as falling outside the normal curve. People with MS develop what are known as lesions—regions in the brain or spinal cord where myelin is missing. In the new paper, the team showed that they could identify people with MS as being off the normal curve throughout regions of the brain, including places where there are no visible lesions. This could provide an alternate method of monitoring and diagnosing MS, they say.

Wandell has had a particular interest in studying the changes that happen in the brain as a child learns to read. Until now, if a family brought a child into the clinic with learning disabilities, Wandell and other scientists had no way to diagnose whether the child’s brain was developing normally, or to determine the relationship between learning delays and white matter abnormalities.

“Now that we know what the normal distribution is, when a single person comes in you can ask how their child compares to the normal distribution. That’s where this is headed,” Wandell says.

Source: Stanford University

Does stigma keep same-sex couples from talking about abuse?

Fri, 09/19/2014 - 08:53

Domestic violence occurs at least as frequently between same-sex couples as between opposite-sex couples, and likely more often, but researchers suspect it may frequently go unreported because of the stigma attached to sexual orientation.

When analyzed together, previous studies indicate that domestic violence affects 25 percent to 75 percent of lesbian, gay, and bisexual people. However, the lack of representative data and widespread under-reporting paint an incomplete picture; the true rates may be even higher.

An estimated one in four heterosexual women experience domestic abuse, with rates significantly lower for heterosexual men.

“Evidence suggests that the minority stress model may explain these high prevalence rates,” says senior author Richard Carroll, associate professor in psychiatry and behavioral sciences at Northwestern University Feinberg School of Medicine and a psychologist at Northwestern Memorial Hospital.

“Domestic violence is exacerbated because same-sex couples are dealing with the additional stress of being a sexual minority. This leads to reluctance to address domestic violence issues.”

Domestic violence—sometimes called intimate partner violence—is physical, sexual, or psychological harm occurring between current or former intimate partners. Research concerning the issue began in the 1970s in response to the women’s movement, but traditionally studies focused on women abused by men in opposite-sex relationships.

Fear of discrimination

“There has been a lot of research on domestic violence but it hasn’t looked as carefully at the subgroup of same-sex couples,” Carroll says. “Another obstacle is getting the appropriate samples because of the stigma that has been attached to sexual orientation. In the past, individuals were reluctant to talk about it.”

Of the research that has examined same-sex domestic violence, most has concentrated on lesbians rather than gay men and bisexuals. “Men may not want to see themselves as the victim, to present themselves as un-masculine and unable to defend themselves,” Carroll says.

Homosexual men and women may not report domestic violence for fear of discrimination and being blamed for abuse from a partner, he says. They also may worry about their sexual orientation being revealed before they’re comfortable with it.

Mental health services for people in abusive same-sex relationships are becoming more common, but this population still faces obstacles in accessing help, the paper reports.

“We need to educate health care providers about the presence of this problem and remind them to assess for it in homosexual relationships, just as they would for heterosexual patients,” Carroll says.

“The hope is that with increasingly deeper acceptance, the stress and stigma will disappear for these individuals so they can get the help they need.”

The review is published in the Journal of Sex & Marital Therapy. Colleen Stiles-Shields, a student in the clinical psychology PhD program, is the study’s first author.

Source: Northwestern University

Berry extract added to chemo kills pancreatic cancer

Fri, 09/19/2014 - 08:32

A chemotherapy drug was more effective at killing pancreatic cancer cells when an extract from chokeberries was added to the mix.

Chokeberry is a wild berry that grows on the eastern side of North America in wetlands and swamp areas.

The berry is high in vitamins and antioxidants, including various polyphenols—compounds that are believed to mop up the harmful by-products of normal cell activity.

The researchers chose to study the impact of the extract on pancreatic cancer because of its persistently dismal prognosis: less than 5 percent of patients are alive five years after their diagnosis.

No effect on healthy cells

The study used a well-known line of pancreatic cancer cells (AsPC-1) in the laboratory and assessed how well the cells grew when treated with the chemotherapy drug gemcitabine alone, with different levels of commercially available chokeberry extract alone, and with a combination of gemcitabine and chokeberry extract.

The analysis indicated that 48 hours of chokeberry extract treatment induced cell death in the pancreatic cancer cells at a dose of 1 µg/ml. The researchers reported their results in the Journal of Clinical Pathology.

The researchers also tested the toxicity of chokeberry extract on normal cells that line blood vessels and found no effects up to the highest dose used (50 µg/ml). This suggests the cell-death effect works through some route other than preventing new blood vessel formation (anti-angiogenesis), a process important in cancer cell growth.

“These are very exciting results. The low doses of the extract greatly boosted the effectiveness of gemcitabine, when the two were combined,” says Bashir A. Lwaleed, a lecturer at the University of Southampton.

Brain cancer, too?

“In addition, we found that lower doses of the conventional drug were needed, suggesting either that the compounds work together synergistically, or that the extract exerts a ‘supra-additive’ effect. This could change the way we deal with hard-to-treat cancers in the future.”

More clinical trials are needed to explore the potential of naturally occurring micronutrients in plants, such as those found in chokeberry, Lwaleed says.

Similar experimental studies indicate that chokeberry extract seems to induce cell death and curb invasiveness in brain cancer, and other research highlights the potential therapeutic effects of particular polyphenols found in green tea, soya beans, grapes, mulberries, peanuts, and turmeric, Lwaleed adds.

“The promising results seen are encouraging and suggest that these polyphenols have great therapeutic potential not only for brain tumors but pancreatic cancer as well,” says Harcharan Rooprai of King’s College Hospital.

The Ministry of Higher Education, Malaysia, and Have a Chance Inc. funded the study.

Source: University of Southampton

The post Berry extract added to chemo kills pancreatic cancer appeared first on Futurity.

Just a little bit of dairy may cut your risk of stroke

Fri, 09/19/2014 - 08:01

One serving of milk or cheese every day may be enough to ward off heart disease or stroke, even in communities where dairy is not a traditional part of the diet.

A study of nearly 4,000 Taiwanese looked at the role an increased consumption of dairy foods had played in the country’s gains in health and longevity.

Related Articles On Futurity

“In a dominantly Chinese food culture, unaccustomed to dairy foods, consuming them up to seven times a week does not increase mortality and may have favorable effects on stroke,” says Mark Wahlqvist, professor of epidemiology and preventive medicine at Monash University.

Cancer and cardiovascular disease are the leading causes of death among Taiwanese.

When Wahlqvist began his study in 1993, there was little apparent concern about dairy foods, in contrast to a current belief that they may be harmful to health and in particular raise the risk of cancer.

Published in the Journal of the American College of Nutrition, the study shows such fears to be unfounded.

5 servings a week

“We observed that increased dairy consumption meant lower risks of mortality from cardiovascular disease, especially stroke, but found no significant association with the risk of cancer,” Wahlqvist says.

Milk and other dairy foods are recognized as providing a broad spectrum of nutrients essential for human health. According to the study findings, people only need to eat small amounts to gain the benefits. “A little is beneficial and a lot is unnecessary,” Wahlqvist says.

“Those who ate no dairy had higher blood pressure, higher body mass index, and greater body fatness generally than other groups. But Taiwanese who included dairy food in their diet only three to seven times a week were more likely to survive than those who ate none.”

For optimal results, the key is daily consumption of dairy foods—but at the rate of about five servings over a week. One serving is the equivalent of eight grams of protein: a cup of milk, or 45 grams of cheese. Such quantities rarely cause trouble even for people considered to be lactose intolerant.

Researchers from the National Health Research Institutes and National Defence Medical Centre in Taiwan contributed to the study.

Source: Monash University

The post Just a little bit of dairy may cut your risk of stroke appeared first on Futurity.

Will ‘VIP’ protect us from the flu and HIV and malaria?

Fri, 09/19/2014 - 07:46

Efforts to develop broadly effective vaccines for HIV and malaria have had limited success. Now scientists are testing a totally different approach—one that so far seems very promising.

Unlike vaccines, which introduce substances such as antigens into the body in the hope of eliciting an appropriate immune response, the new method provides the body with step-by-step instructions for producing specific antibodies shown to neutralize a particular disease.

The method is called vectored immunoprophylaxis, or VIP. The technique was so successful in triggering an immune response to HIV in mice that it has since been applied to a number of other infectious diseases.

“It is enormously gratifying to us that this technique can have potentially widespread use for the most difficult diseases that are faced particularly by the less developed world,” says Nobel Laureate David Baltimore, president emeritus and a biology professor at the California Institute of Technology (Caltech).

How VIP treatment works

VIP relies on the prior identification of one or more antibodies that are able to prevent infection in laboratory tests by a wide range of isolated samples of a particular pathogen.

Related Articles On Futurity

Once that has been done, researchers can incorporate the genes that encode those antibodies into an adeno-associated virus (AAV), a small, harmless virus that has been useful in gene-therapy trials. When the AAV is injected into muscle tissue, the genes instruct the muscle tissue to generate the specified antibodies, which can then enter the circulation and protect against infection.

In 2011, the Baltimore group reported in Nature that they had used the technique to deliver antibodies that effectively protected mice from HIV infection. Alejandro Balazs was lead author on that paper and was a postdoctoral scholar in the Baltimore lab at the time.

“We expected that at some dose, the antibodies would fail to protect the mice, but they never did—even when we gave mice 100 times more HIV than would be needed to infect seven out of eight mice,” says Balazs, now at the Ragon Institute of MGH, MIT, and Harvard. “All of the exposures in this work were significantly larger than a human being would be likely to encounter.”

At the time, the researchers noted that the leap from mice to humans is large but said they were encouraged by the high levels of antibodies the mice were able to produce after a single injection and how effectively the mice were protected from HIV infection for months on end.

Baltimore’s team is now working with a manufacturer to produce the materials needed for human clinical trials that will be conducted by the Vaccine Research Center at the National Institutes of Health.

The flu

Moving on from HIV, the Baltimore lab’s next goal was protection against influenza A. Although reasonably effective influenza vaccines exist, seasonal flu epidemics cause, on average, more than 20,000 deaths each year in the United States.

We are encouraged to get flu shots every fall because the influenza virus is something of a moving target—it evolves to avoid resistance. There are also many different strains of influenza A (e.g. H1N1 and H3N2), each incorporating a different combination of the various forms of the proteins hemagglutinin (H) and neuraminidase (N).

To chase this target, the vaccine is reformulated each year, but sometimes it fails to prevent the spread of the strains that are prevalent that year.

But about five years ago, researchers began identifying a new class of anti-influenza antibodies that are able to prevent infection by many, many strains of the virus. Instead of binding to the head of the influenza virus, as most flu-fighting antibodies do, these new antibodies target the stalk that holds up the head.

And while the head is highly adaptable—meaning that even when mutations occur there, the virus can often remain functional—the stalk must basically remain the same in order for the virus to survive. So these stalk antibodies are very hard for the virus to mutate against.

Protects year after year

In 2013, the Baltimore group stitched the genes for two of these new antibodies into an AAV and showed that mice injected with the vector were protected against multiple flu strains, including all H1, H2, and H5 influenza strains tested.

This was even true of older mice and those without a properly functioning immune system—a particularly important finding considering that most deaths from the flu occur in the elderly and immunocompromised populations. The group reported its results in the journal Nature Biotechnology.

“We have shown that we can protect mice completely against flu using a kind of antibody that doesn’t need to be changed every year,” says Baltimore. “It is important to note that this has not been tested in humans, so we do not yet know what concentration of antibody can be produced by VIP in humans. However, if it works as well as it does in mice, VIP may provide a plausible approach to protect even the most vulnerable patients against epidemic and pandemic influenza.”

Malaria, hepatitis C, tuberculosis

Now that the Baltimore lab has shown VIP to be so effective, other groups from around the country have adopted the Caltech-developed technique to try to ward off malaria, hepatitis C, and tuberculosis.

In August, a team led by the Johns Hopkins Bloomberg School of Public Health reported in the Proceedings of the National Academy of Sciences (PNAS) that as many as 70 percent of the mice they had injected using the VIP procedure were protected from infection by Plasmodium falciparum, the parasite that causes the most lethal of the four types of malaria.

A subset of mice in the study produced particularly high levels of the disease-fighting antibodies. In those mice, the immunization was 100 percent effective.

“This is also just a first-generation antibody,” says Baltimore, who was a coauthor on the PNAS study. “Knowing now that you can get this kind of protection, it’s worth trying to get much better antibodies, and I trust that people in the malaria field will do that.”

Most recently, a group led by researchers from Rockefeller University showed that three hepatitis-C-fighting antibodies delivered using VIP were able to protect mice efficiently from the virus. The results were published in the journal Science Translational Medicine.

The researchers also found that the treatment was able to temporarily clear the virus from mice that had already been infected. Additional work is needed to determine how to prevent the disease from relapsing.

Interestingly, though, the work suggests that the antibodies that are effective against hepatitis C, once it has taken root in the liver, may work by protecting uninfected liver cells from infection while allowing already infected cells to be cleared from the body.

An additional project is currently evaluating the use of VIP for the prevention of tuberculosis—a particular challenge given the lack of proven tuberculosis-neutralizing antibodies.

“When we started this work, we imagined that it might be possible to use VIP to fight other diseases, so it has been very exciting to see other groups adopting the technique for that purpose,” Baltimore says. “If we can get positive clinical results in humans with HIV, we think that would really encourage people to think about using VIP for these other diseases.”

The National Institute of Allergy and Infectious Disease, the Bill and Melinda Gates Foundation, the Caltech-UCLA Joint Center for Translational Medicine, and a Caltech Translational Innovation Partnership Award supported Baltimore’s research.

Source: Caltech

The post Will ‘VIP’ protect us from the flu and HIV and malaria? appeared first on Futurity.

Scientists use light to make mice asocial

Fri, 09/19/2014 - 06:18

Scientists have discovered antagonistic neuron populations in the mouse amygdala that control whether the animal engages in social behaviors or asocial repetitive self-grooming.

This discovery may have implications for understanding neural circuit dysfunctions that underlie autism in humans.

Humans with autism often show a reduced frequency of social interactions and an increased tendency to engage in repetitive solitary behaviors.

Autism has also been linked to dysfunction of the amygdala, a brain structure involved in processing emotions.

Social or asocial?

The discovery of this “seesaw circuit” was made by postdoctoral scholar Weizhe Hong in the laboratory of David J. Anderson, biology professor at Caltech and an investigator with the Howard Hughes Medical Institute. The work appears online in the journal Cell.

Related Articles On Futurity

“We know that there is some hierarchy of behaviors, and they interact with each other because the animal can’t exhibit both social and asocial behaviors at the same time. In this study, we wanted to figure out how the brain does that,” Anderson says.

Anderson and his colleagues discovered two intermingled but distinct populations of neurons in the amygdala, a part of the brain that is involved in innate social behaviors. One population promotes social behaviors, such as mating, fighting, or social grooming, while the other population controls repetitive self-grooming—an asocial behavior.

Interestingly, these two populations are distinguished according to the most fundamental subdivision of neuron subtypes in the brain: the “social neurons” are inhibitory neurons (which release the neurotransmitter GABA, or gamma-aminobutyric acid), while the “self-grooming neurons” are excitatory neurons (which release the neurotransmitter glutamate, an amino acid).

Light-sensitive neurons

To study the relationship between these two cell types and their associated behaviors, the researchers used a technique called optogenetics. In optogenetics, neurons are genetically altered so that they express light-sensitive proteins from microbial organisms. Then, by shining a light on these modified neurons via a tiny fiber optic cable inserted into the brain, researchers can control the activity of the cells as well as their associated behaviors.

Using this optogenetic approach, Anderson’s team was able to selectively switch on the neurons associated with social behaviors and those linked with asocial behaviors.

With the social neurons, the behavior that was elicited depended upon the intensity of the light signal. That is, when high-intensity light was used, the mice became aggressive in the presence of an intruder mouse.

When lower-intensity light was used, the mice no longer attacked, although they were still socially engaged with the intruder—either initiating mating behavior or attempting to engage in social grooming.

When the neurons associated with asocial behavior were turned on, the mouse began self-grooming behaviors such as paw licking and face grooming while completely ignoring all intruders. The self-grooming behavior was repetitive and lasted for minutes even after the light was turned off.

The researchers could also use the light-activated neurons to stop the mice from engaging in particular behaviors. For example, if a lone mouse began spontaneously self-grooming, the researchers could halt this behavior through the optogenetic activation of the social neurons. Once the light was turned off and the activation stopped, the mouse would return to its self-grooming behavior.

Surprisingly, these two groups of neurons appear to interfere with each other’s function: the activation of social neurons inhibits self-grooming behavior, while the activation of self-grooming neurons inhibits social behavior. Thus these two groups of neurons seem to function like a seesaw, one that controls whether mice interact with others or instead focus on themselves.

It was completely unexpected that the two groups of neurons could be distinguished by whether they were excitatory or inhibitory. “If there was ever an experiment that ‘carves nature at its joints,'” says Anderson, “this is it.”

Understanding the circuitry

This seesaw circuit, Anderson and his colleagues say, may have some relevance to human behavioral disorders such as autism.

“In autism,” Anderson says, “there is a decrease in social interactions, and there is often an increase in repetitive, sometimes asocial or self-oriented, behaviors”—a phenomenon known as perseveration. “Here, by stimulating a particular set of neurons, we are both inhibiting social interactions and promoting these perseverative, persistent behaviors.”

Studies from other laboratories have shown that disruptions in genes implicated in autism show a similar decrease in social interaction and increase in repetitive self-grooming behavior in mice, Anderson says.

However, the current study helps to provide a needed link between gene activity, brain activity, and social behaviors, “and if you don’t understand the circuitry, you are never going to understand how the gene mutation affects the behavior.”

Going forward, he says, such a complete understanding will be necessary for the development of future therapies.

But could this concept ever actually be used to modify a human behavior?

“All of this is very far away, but if you found the right population of neurons, it might be possible to override the genetic component of a behavioral disorder like autism, by just changing the activity of the circuits—tipping the balance of the see-saw in the other direction,” he says.

The Simons Foundation, the National Institutes of Health, and the Howard Hughes Medical Institute supported the work.

Source: Caltech

The post Scientists use light to make mice asocial appeared first on Futurity.

Working alone ‘together’ can be good motivation

Thu, 09/18/2014 - 09:13

The sense that you’re not the only one tackling a challenge—even if you’re physically alone—can increase motivation, say researchers.

As the new study notes, people undertake many activities in life on their own but with others in mind—a researcher writes a paper on a new medical treatment and knows that others are working on the same problem. A student writes an essay for class and understands that other students are writing their own essays.

When people feel they and others are working together on a difficult problem, does this increase motivation?

“Working with others affords enormous social and personal benefits,” writes Gregory Walton, an assistant professor of psychology at Stanford, in an article in the Journal of Experimental Psychology with coauthor Priyanka Carr, then a Stanford graduate student.

“Our research found that social cues conveying simply that other people treat you as though you are working together on a task—rather than that you are merely working on the same task separately—can have striking effects on motivation,” says Walton.

Work because you want to

In five experiments, Carr and Walton found that these “cues of working together” increased “intrinsic motivation” as people worked on their own. Intrinsic motivation refers to behaviors people want to do—what they enjoy and find intrinsically rewarding—not what they force themselves to do.

Related Articles On Futurity

First, participants met in small groups. Then, they went to separate rooms to work on their own on a challenging puzzle. People in the “psychologically together” category were told they would work on the puzzle “together” and that they would either write or receive a tip on the puzzle from another participant in the study. Later they received a tip ostensibly authored by another participant.

People in the “psychologically separate” category were simply told that each person would work on the puzzle—there was no mention of working “together.” And the tip they would write or receive would come from the researchers—who, of course, were not solving puzzles. They received the same tip content as those in the “psychologically together” category—but it did not come from people engaged in the task.

While all the participants worked on their own on the puzzle, the key difference was that one group was treated by peers as though they were working “together.” The rest thought they were working on the same thing as others but separately, or simply in parallel to them.

Motivation for challenges

As Walton says, “In our studies, people never actually worked together—they always worked on their own on a challenging puzzle. What we were interested in was simply the effects of the perceived social context.”

Their findings showed that when people were treated as though they were working together they:

  • Persisted 48 to 64 percent longer on a challenging task
  • Reported more interest in the task
  • Became less tired by having to persist on the task—presumably because they enjoyed it
  • Became more engrossed in the task and performed better on it
  • Finally, when people were encouraged to reflect on how their interest in the puzzle was relevant to their personal values and identity, people chose to do 53 percent more related tasks in a separate setting one to two weeks later.

“The results showed that simply feeling like you’re part of a team of people working on a task makes people more motivated as they take on challenges,” says Walton. Moreover, the results reflect an increase in motivation—not a sense of obligation, competition, or pressure to join others in an activity.

When group work isn’t great

Walton points out that the research does not suggest that group work is always or necessarily better as a means to motivate children in school or employees at work.

“Sometimes group work can have negative effects,” Walton says.

For example, if people feel obligated to work with others, if they feel their contributions will go unnoticed, or if they don’t have ownership over their work and contribution, then group work might not be productive.

“Our research shows that it is possible to create a spirit of teamwork as people take on challenging individual tasks—a feeling that we’re all in this together, working on problems and tasks—and that this sense of working together can inspire motivation,” he says.

Carr notes, “It is also striking that it does not take enormous effort and change to create this feeling of togetherness. Subtle cues that signal people are part of a team or larger effort ignited motivation and effort. Careful attention to the social context as people work and learn can help us unleash motivation.”

Source: Stanford University

The post Working alone ‘together’ can be good motivation appeared first on Futurity.

What makes one chimp kill another?

Thu, 09/18/2014 - 09:10

It’s a mystery why male chimps kill other adults, but a new study suggests the behavior may have little to do with human activities.

Few animals, other than humans, show deadly aggression, and the field of primatology has been divided as to what causes this behavior among primates: adaptive strategies or man-made changes.

The new study is unique, says Jill Pruetz, professor of anthropology at Iowa State University, because it includes data from every research site dedicated to observing chimps and bonobos, including her site in Fongoli, Senegal. The gold rush in parts of Senegal shaped her thinking about human impact.

“You have people coming in disturbing parts of the habitat that are important for chimpanzees. In one village near another research site, the population went from 100 people to around 10,000.

“When you have a human influx like that the chimps don’t have much choice but to move. If they move into another chimp community’s home range, something is going to happen and not all the chimps are going to survive.”

Reproductive success

For the study, published in Nature, researchers analyzed data from 18 chimpanzee communities observed for a combined total of more than 400 years, which included 152 killings by chimps in 15 communities.

Related Articles On Futurity

The data shows that the most important predictors of violence are related to the species, the age and sex of the attackers and victims, and community membership and demography. The killing of adult chimps can also be an adaptive strategy to indirectly increase reproductive success, Pruetz says.

“For example, Ngogo is a huge chimp community in Uganda and they have a huge number of males. What they’ve seen there is more lethal events than any other site. The males of one community are able to increase their home range, and it’s thought that it translates into reproductive success for these males because they have better access to food and more females.”

Researchers looked at human activities that affected food supply, caused crowding through deforestation, or brought disruption from disease or hunting, but did not find a significant impact on aggression. For example, the Ngogo site had abundant food and high forest productivity, but the highest rate of deadly aggression.

Data from the bonobo research sites included just one suspected killing, which further supported the adaptive strategies hypothesis.

“If chimpanzee violence results from human impacts, then presumably human impacts should induce violent behavior in bonobos as well,” the authors write. Understanding what factors contribute to aggressive behavior will help scientists working to protect the endangered species.

Deadly violence still rare

Chimpanzee killings far outnumbered those by bonobos. Even so, deadly violence remains a rare occurrence.

The chimps at Pruetz’s research site are not overly aggressive, and she has only recently started to see deadly violence; the data collected for the study translate to about one lethal event every five years. The collaborative effort provided a significant sample size for researchers to conduct a broader analysis that they could not accomplish alone.

Pruetz wants to continue the collaboration to better understand the differences in aggression between West African and East African chimps. While human impact doesn’t account for deadly violence among chimps, significant human disturbance could cause some disruption that leads to lethal aggression, Pruetz says.

“To play the devil’s advocate, from the human impact perspective, you might say that the densities we see today reflect human impact,” Pruetz says.

“It’s really hard to account for the way that we have shaped chimpanzee communities in the last 100 years, but we know that chimps have drastically reduced in numbers.”

Source: Iowa State University

The post What makes one chimp kill another? appeared first on Futurity.

Tiniest galaxy is home to a monster black hole

Thu, 09/18/2014 - 08:08

A huge black hole has been discovered at the center of an ultra-compact galaxy—the smallest galaxy known to contain one.

The findings, published in Nature, suggest that other ultra-compact galaxies also may contain massive black holes—and that those galaxies may be the stripped remnants of larger galaxies that were torn apart during collisions with other galaxies.

A Hubble Space Telescope image compares the size of the ultra-compact galaxy M60-UCD1 to the gigantic NGC 4647 galaxy. (Courtesy: Michigan State)

Related Articles On Futurity

There has been much debate over whether ultra-compact dwarf galaxies are born as jam-packed star clusters or whether they are galaxies that shrink because stars are ripped away from them.

The discovery of this black hole, combined with the high galaxy mass and sun-like levels of elements found in the stars, favor the latter idea, says Jay Strader, assistant professor of physics and astronomy at Michigan State University.

The supermassive black hole found at the center of the galaxy known as M60-UCD1 is estimated to have a mass of 21 million suns. By comparison, the mass of the black hole found at the center of our Milky Way galaxy is only about 4 million suns.

The other interesting aspect of this finding is that it suggests supermassive black holes are more common in less-massive galaxies than previously thought, Strader says.

“This means that the ‘seeds’ of supermassive black holes are more likely to be something that occurred commonly in the early universe.”

It continues to be a matter of debate as to whether these black holes could instead have come about as a result of unusual “seeds,” such as “super” stars that collapse directly to massive black holes, or runaway stellar collisions that occurred in the core of a dense star cluster.

The National Science Foundation, the German Research Foundation, and the Gemini Telescope partnership, which includes the NSF and scientific agencies in Canada, Chile, Australia, Brazil, and Argentina, funded the research.

Source: Michigan State University

The post Tiniest galaxy is home to a monster black hole appeared first on Futurity.

Brain linkups lag in kids with ADHD

Thu, 09/18/2014 - 07:50

Kids and teens with ADHD lag behind others of the same age in how quickly their brains form connections within, and between, key brain networks.

The lag in connection development may help explain why people with ADHD get easily distracted or struggle to stay focused.

Scientists say this key difference in brain architecture could allow doctors to use brain scans to diagnose ADHD—and track how well someone responds to treatment.

Related Articles On Futurity

This kind of neuroimaging “biomarker” doesn’t yet exist for ADHD, or any psychiatric condition for that matter.

Researchers made the discovery after examining the brain scans of 275 kids and teens with ADHD, and 481 others without it, using “connectomic” methods that can map interconnectivity between networks in the brain.

The scans, made using functional magnetic resonance imaging (fMRI) scanners, show brain activity during a resting state. This allowed researchers to see how a number of different brain networks, each specialized for certain types of functions, were “talking” within and among themselves.

The researchers found lags in the development of connections within the internally focused network, called the default mode network or DMN, and in the development of connections between the DMN and two networks that process externally focused tasks, often called task-positive networks, or TPNs.

They could even see that the lags in connection development with the two task-related networks—the frontoparietal and ventral attention networks—were located primarily in two specific areas of the brain.

The results are published in the Proceedings of the National Academy of Sciences.

Why some people ‘grow out’ of ADHD

The new findings mesh well with what other researchers have found by examining the physical structure of the brains of people with and without ADHD in other ways.

Such research has already shown alterations in regions within DMN and TPNs. So, the new findings build on that understanding and add to it.

The findings are also relevant to thinking about the longitudinal course of ADHD from childhood to adulthood. For instance, some children and teens “grow out” of the disorder, while for others the disorder persists throughout adulthood.

Future studies of brain network maturation in ADHD could shed light into the neural basis for this difference.

Lead researcher Chandra Sripada, an assistant professor and psychiatrist at the University of Michigan, explains that in the last decade, functional medical imaging has revealed that the human brain is functionally organized into large-scale connectivity networks.

These networks, and the connections between them, mature throughout early childhood all the way to young adulthood.

“It is particularly noteworthy that the networks we found to have lagging maturation in ADHD are linked to the very behaviors that are the symptoms of ADHD,” Sripada says.

Autism, too?

Studying the vast array of connections in the brain, a field called connectomics, requires scientists to parse through not just the one-to-one communications between two specific brain regions, but the patterns of communication among thousands of nodes within the brain. This requires major computing power and access to massive amounts of data—which makes the open sharing of fMRI images so important.

“The results of this study set the stage for the next phase of this research, which is to examine individual components of the networks that have the maturational lag,” he says. “This study provides a coarse-grained understanding, and now we want to examine this phenomenon in a more fine-grained way that might lead us to a true biological marker, or neuromarker, for ADHD.”

Sripada also notes that connectomics could be used to examine other disorders with roots in brain connectivity—including autism, which some evidence has suggested stems from over-maturation of some brain networks, and schizophrenia, which may arise from abnormal connections. Pooling more fMRI data from people with these conditions, and depression, anxiety, bipolar disorder and more could boost connectomics studies in those fields.

Research volunteers needed

To develop such a neuromarker, Sripada has embarked on follow-up research. One study is enrolling children between the ages of 7 and 17 who have ADHD and a comparison group of those without it.

Another study is enrolling adults between the ages of 18 and 35 who have ADHD and a comparison group of those without it. Of note, fMRI scans do not expose a person to radiation. Anyone interested in these studies can email Psych-study@med.umich.edu or call (734) 232-0353; for the study of children, parents should make the contact and consent to research on behalf of their children.

A National Institutes of Health grant, a UMCCMB pilot grant, and the John Templeton Foundation funded the project.

Source: University of Michigan

The post Brain linkups lag in kids with ADHD appeared first on Futurity.

How to build global cities without so many cars

Thu, 09/18/2014 - 06:34

Expanding public transportation, walking, and biking in cities could save more than $100 trillion in public and private spending between now and 2050.

A new report shows that this shift also would result in reductions in carbon dioxide emissions reaching 1,700 megatons a year in 2050.

If governments require the strongest vehicle pollution controls and ultralow-sulfur fuels, 1.4 million early deaths associated with exposure to vehicle tailpipe emissions could be avoided each year, according to a related analysis by the International Council on Clean Transportation included in the report.

Doubling motor vehicle fuel economy could reduce CO2 emissions by an additional 700 megatons in 2050.


“The study shows that getting away from car-centric development, especially in rapidly developing economies, will cut urban CO2 dramatically and also reduce costs,” says report coauthor Lew Fulton, co-director of NextSTEPS Program at the University of California, Davis Institute of Transportation Studies.

“It is also critical to reduce the energy use and carbon emissions of all vehicles.”

Cleaner options

The report is the first study to examine how major changes in transportation investments worldwide would affect urban passenger transport emissions as well as the mobility of different income groups. The findings should help support wider agreement on climate policy, where cleanup costs and equity between rich and poor countries are key issues.

The report is available in advance of the September 23 United Nations Secretary-General’s Climate Summit, where many nations and corporations will announce voluntary commitments to reduce greenhouse gas emissions, including new efforts focused on sustainable transportation.

“Transportation, driven by rapid growth in car use, has been the fastest growing source of CO2 in the world,” says Michael Replogle, coauthor of the study and managing director for policy at ITDP, a global New York-based nonprofit.

“An affordable but largely overlooked way to cut that pollution is to give people clean options to use public transportation, walking, and cycling. This expands mobility options, especially for the poor, and curbs air pollution from traffic.”

The authors calculated CO2 emissions and costs from 2015 to 2050 under a business-as-usual scenario and a “High Shift” scenario where governments significantly increase investments in rail and clean bus transportation, and provide infrastructure to ensure safe walking, bicycling, and other active forms of transportation.

The High Shift scenario also includes moving investments away from road construction, parking garages, and other projects that encourage car ownership, freeing up resources for the needed investments.

Cities around the world

Transportation in urban areas accounted for about 2,300 megatons of CO2 in 2010, almost one quarter of carbon emissions from the transportation sector. Rapid urbanization—especially in fast developing countries like China and India—will cause these emissions to double by 2050 in the business-as-usual scenario. Among the countries examined in the study, three stand out:

United States

Currently the world leader in urban passenger transportation CO2 emissions, the US is projected to lower these emissions from 670 megatons annually to 560 megatons by 2050 because of slowing travel growth combined with sharp improvements in fuel efficiencies.

But a high shift to more sustainable transportation options, along with fewer and shorter car trips related to communication technologies substituting for transportation, could further drop those emissions to about 280 megatons.

China

CO2 emissions from transportation are expected to mushroom from 190 megatons annually to more than 1,100 megatons, due in large part to the explosive growth of China’s urban areas, the growing wealth of Chinese consumers, and their dependence on automobiles.

But this increase can be slashed to 650 megatons under the High Shift scenario, in which cities develop extensive clean bus and metro systems. The latest data show China is already sharply increasing investments in public transport.

India

CO2 emissions are projected to leap from about 70 megatons today to 540 megatons by 2050, also because of growing wealth and urban populations. But this increase can be moderated to only 350 megatons under the High Shift scenario by addressing crucial deficiencies in India’s public transport.

Under the High Shift scenario, mass transit access worldwide is projected to more than triple for the lowest income groups and more than double for the second lowest groups. This would provide the poor with better access to employment and services that can improve their livelihoods.
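As a quick arithmetic sketch, the per-country savings implied by the figures quoted above can be tabulated directly (all values are the megaton-per-year numbers for 2050 reported in the article; the percentage column is simply derived from them):

```python
# 2050 urban passenger transport CO2 projections from the report,
# in megatons per year: (business-as-usual, High Shift scenario).
figures = {
    "United States": (560, 280),
    "China": (1100, 650),
    "India": (540, 350),
}

for country, (bau, high_shift) in figures.items():
    saved = bau - high_shift          # megatons avoided per year
    pct = 100 * saved / bau           # reduction relative to business-as-usual
    print(f"{country}: {saved} megatons avoided ({pct:.0f}% below business-as-usual)")
```

Together the three countries account for roughly 920 megatons of the annual reductions projected under the High Shift scenario.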

The Ford Foundation, ClimateWorks Foundation, and Hewlett Foundation supported the work.

Source: UC Davis

The post How to build global cities without so many cars appeared first on Futurity.

Can yoga play a role in treating bipolar?

Thu, 09/18/2014 - 06:05

A recent survey suggests that yoga can be a substantial help for people with bipolar disorder, though the practice isn’t without risks.

“There is no scientific literature on hatha yoga for bipolar disorder,” says lead author Lisa Uebelacker, associate professor (research) of psychiatry and human behavior in the Alpert Medical School of Brown University and a staff psychologist at Butler Hospital.

Hatha yoga is the practice, familiar in the West, in which people move between various poses. It often includes breathing practices and meditation.

“There is reason to think that there are ways in which it might be wonderful and ways in which it might not be safe. We are interested in studying hatha yoga for bipolar as an adjunctive treatment to pharmacotherapy.”

The preponderance of responses from the more than 70 people who answered the study’s online survey was that yoga has benefits for people with bipolar disorder.

When asked, “What impact do you think yoga has on your life?” the vast majority of responses were positive and about one in five respondents characterized yoga as “life changing.” One even said, “Yoga has saved my life. … I might not be alive today were it not for yoga.”

Twenty-nine other respondents said yoga decreased anxiety and promoted calm or provided other emotional benefits. Calm also emerged as a specific benefit for 23 survey respondents when asked how yoga affects mania symptoms.

Other benefits that were mentioned repeatedly included distraction from depressive thoughts and increased clarity of thought.

“There is clearly evidence that yoga seems to be a powerful practice for some individuals with BD,” the researchers write in the paper. “It was striking that some of our respondents clearly believed that yoga had a major positive impact on their lives.”

Heat and breathing risks

Throughout the survey there was also evidence that yoga could be problematic for some people with BD, although fewer people cited problems.

In response to survey questions about whether yoga has had a negative impact, for example, five respondents cited cases in which rapid or energetic breathing made them feel agitated. Another respondent became too relaxed after a slow, meditative practice: “I fell into a relaxed state … near catatonic as my mind was depressed already. I was in bed for three days afterward.”

And, like some people who practice yoga generally, 11 respondents warned that there is the potential for physical injury or pain. Another four said they sometimes became self-critical or frustrated with their performance during yoga.

“It’s possible that you want to avoid any extreme practice, such as extended periods of rapid breathing,” Uebelacker says.

The survey results also raise some concerns about heated yoga, which is consistent with evidence that certain medications for bipolar disorder, including lithium and antipsychotic medications, are associated with possible heat intolerance and resulting symptoms of physical illness.

The results appear in the Journal of Psychiatric Practice.

Pilot clinical trial coming up

The online survey is the first stage in a research program that Uebelacker, who has spent several years studying yoga for unipolar depression, and colleague Lauren Weinstock, an expert in bipolar disorder, are developing to examine yoga for bipolar disorder.

They now have a grant from the Depressive and Bipolar Disorder Alternative Treatment Foundation to run a pilot clinical trial in which they will compare outcomes from yoga to outcomes from using a well-regarded workbook for bipolar disorder.

Those results could set the stage for a larger trial with enough statistical power to rigorously identify benefits and risks, Uebelacker says.

For many bipolar patients, symptoms persist for decades despite multiple medications. The current studies of yoga, Uebelacker says, are part of a broader program at Butler and Brown to determine what else can help people who are already undergoing conventional therapies.

“We’re looking at alternative ways to cope with suffering that is part of people’s everyday lives so that there are other options in addition to ongoing medication and psychotherapy,” Uebelacker says.

As their research continues, they will learn what role hatha yoga might play.

Source: Brown University

The post Can yoga play a role in treating bipolar? appeared first on Futurity.

Do gut bacteria make flu shots work better?

Wed, 09/17/2014 - 12:28

People treated with an antibiotic before or while receiving a flu shot may have a weakened response to the vaccine, according to a new study with mice.

The research shows that mice treated with antibiotics to remove most of their intestinal bacteria or raised under sterile conditions have weakened antibody responses to the seasonal influenza vaccination.

The findings, published in the journal Immunity, demonstrate a dependency on gut bacteria for strong immune responses to the seasonal flu and inactivated polio vaccines, and may also help explain why immunity induced by some vaccines varies in different parts of the world.

Antibody responses to vaccines containing immune stimulating substances called adjuvants were not affected by a lack of gut bacteria. For example, bacteria were not critical for responses to the Tdap (Tetanus-Diphtheria-Pertussis) vaccine.

“Our results suggest that the gut microbiome may be exerting a powerful effect on immunity to vaccination in humans, even immunity induced by a vaccine that is given at a distant site,” says Bali Pulendran, professor of pathology and laboratory medicine at Emory University School of Medicine and Yerkes National Primate Research Center.

The impetus for the research was a previous study involving an analysis of the immune response to influenza vaccination in humans, using the “systems vaccinology” approach, Pulendran says.

Critical ingredient: Flagella

Researchers had observed that in humans given the flu vaccine, the expression of the gene encoding TLR5 a few days after vaccination was correlated with strong antibody responses weeks later. TLR5 encodes a protein that enables immune cells to sense flagellin, the main structural protein for the whips (flagella) many bacteria use to propel themselves.

The ability of immune cells to sense flagellin appears to be the critical component affecting vaccine responses, the study shows. Mice lacking TLR5—but still colonized with bacteria—have diminished responses to flu vaccines, similar to antibiotic-treated or germ-free mice.

Oral reconstitution of antibiotic-treated mice with bacteria containing flagellin, but not with mutant bacteria lacking flagellin, could restore the diminished antibody response.

“These results demonstrate an important role for gut bacteria in shaping immunity to vaccination, and raise the possibility that the microbiome could be harnessed to modulate vaccine efficacy,” says Pulendran. “The key question is the extent to which this impacts protective immunity in humans.”

Pulendran says that his team is planning a study in humans to address this issue.

The first author of the paper is postdoctoral fellow Jason Oh. Researchers at Georgia State University and the University of North Carolina contributed to the paper.

The National Institute of Allergy and Infectious Diseases and the National Institute of Diabetes and Digestive and Kidney Diseases supported the study.

Source: Emory University

The post Do gut bacteria make flu shots work better? appeared first on Futurity.

Fruit fly eyes offer clues to human cancer

Wed, 09/17/2014 - 10:50

Scientists are using the eyes of fruit flies to better understand the human retinoblastoma gene, mutations in which are a leading cause of eye cancer.

In the current issue of the Journal of Biological Chemistry, researchers provide the first detailed examination of a set of mutations similar to those present in the human cancer gene, says study coauthor Irina Pushel, an undergraduate at Michigan State University.

“By systematically evaluating mutations of increasing severity, we now have a model to better predict how we think the protein will react with each mutation,” says Pushel.

“We’re trying to understand the protein, not even in the specific context of cancer, but rather studying how it interacts within the cell, how it interacts with DNA.”

The retinoblastoma protein appears to play a key role in virtually everything a cell does. When it’s healthy, it helps control cell growth and development. Without it, the organism would die. In its abnormal state, cells can overgrow, as seen in cancer, or undergo premature death, as in other human diseases.

“If we find one of these mutations in a human, then we can predict what will happen with the protein, such as folding incorrectly,” says Pushel.

“This isn’t going to immediately lead to a new drug to treat cancer. However, we have to know how the protein works before we can develop a drug to fix it. Future medicines will be built upon models such as this, though that is years away.”

Previous work has shown that a specific part of this protein plays a role in regulating other genes. In this study, the team modified some of the known important parts of this region of retinoblastoma.

Boosting levels of even the standard, or wild-type, protein altered fruit flies’ eyes and wings. However, when levels of the mutated protein began to climb, deformations were consistent and dramatic.

While a cancer treatment based on this finding may be years away, the insight into cell development and gene regulation is immediate, Pushel says.

“That’s the cool thing about basic research; it may not lead directly to the creation of a new drug, but it helps decipher the genetic code, which for each person controls the unique pattern of how they grow and how they develop—that’s amazing,” she says.

“It will have many impacts, from understanding development to personalized medicine.”

Additional coauthors of the paper are Liang Zhang, lead author and graduate student, and Bill Henry and David Arnosti, molecular biologists.

Source: Michigan State University

The post Fruit fly eyes offer clues to human cancer appeared first on Futurity.

Brain ‘node’ causes deep sleep without sedative

Wed, 09/17/2014 - 07:49

Scientists have identified a second “sleep node” in the mammalian brain whose activity appears to be both necessary and sufficient to produce deep sleep.

This sleep-promoting circuit, located deep in the primitive brainstem, helps reveal how we fall into deep sleep.

Published online in Nature Neuroscience, the study demonstrates that fully half of all of the brain’s sleep-promoting activity originates from the parafacial zone (PZ) in the brainstem.

The brainstem is a primordial part of the brain that regulates basic functions necessary for survival, such as breathing, blood pressure, heart rate, and body temperature.

“The close association of a sleep center with other regions that are critical for life highlights the evolutionary importance of sleep in the brain,” says study coauthor Caroline E. Bass, assistant professor of pharmacology and toxicology in the University at Buffalo School of Medicine and Biomedical Sciences.

Better brain control

The researchers found that a specific type of neuron in the PZ that makes the neurotransmitter gamma-aminobutyric acid (GABA) is responsible for deep sleep. They used a set of innovative tools to precisely control these neurons remotely, in essence giving them the ability to turn the neurons on and off at will.

“These new molecular approaches allow unprecedented control over brain function at the cellular level,” says Christelle Ancelet, postdoctoral fellow at Harvard Medical School.

“Before these tools were developed, we often used ‘electrical stimulation’ to activate a region, but the problem is that doing so stimulates everything the electrode touches and even surrounding areas it didn’t. It was a sledgehammer approach, when what we needed was a scalpel.”

“To get the precision required for these experiments, we introduced a virus into the PZ that expressed a ‘designer’ receptor on GABA neurons only but didn’t otherwise alter brain function,” explains Patrick Fuller, assistant professor at Harvard and senior author of the paper.

“When we turned on the GABA neurons in the PZ, the animals quickly fell into a deep sleep without the use of sedatives or sleep aids.”

Sleep disorder treatment

How these neurons interact with other sleep- and wake-promoting brain regions still needs to be studied, the researchers say, but eventually these findings may translate into new medications for treating sleep disorders, including insomnia, and the development of better and safer anesthetics.

“We are at a truly transformative point in neuroscience,” says Bass, “where the use of designer genes gives us unprecedented ability to control the brain.

“We can now answer fundamental questions of brain function, which have traditionally been beyond our reach, including the ‘why’ of sleep, one of the more enduring mysteries in the neurosciences.”

The National Institutes of Health funded the work.

Source: University at Buffalo

The post Brain ‘node’ causes deep sleep without sedative appeared first on Futurity.

