A simple blood test may be a reliable way to screen people for suicide risk. The test looks for changes in a gene that helps the brain manage stress and control impulsive behavior.
“Suicide is a major preventable public health problem, but we have been stymied in our prevention efforts because we have no consistent way to predict those who are at increased risk of killing themselves,” says Zachary Kaminsky, assistant professor of psychiatry and behavioral sciences at Johns Hopkins University School of Medicine.
“With a test like ours, we may be able to stem suicide rates by identifying those people and intervening early enough to head off a catastrophe.”

Mutated gene
In a series of experiments described online in the American Journal of Psychiatry, researchers focused on a mutation in a gene known as SKA2. Looking at brain samples from mentally ill and healthy people, they found that SKA2’s production of proteins was significantly reduced in people who had died by suicide.
In some subjects, they then found an epigenetic modification within this common mutation that altered the way the SKA2 gene functioned without changing the gene’s underlying DNA sequence.
The modification added chemicals called methyl groups to the gene. Higher levels of methylation were then found in the study subjects who had killed themselves and in suicide victims in two other collections of brain tissue.
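The study’s statistical model is not described in this summary, but the underlying idea, that higher SKA2 methylation signals higher risk, can be sketched as a simple threshold rule. All values below are invented for illustration; this is not the study’s classifier.

```python
def threshold_classify(methylation_levels, cutoff):
    """Flag a sample as at-risk when its methylation exceeds the cutoff."""
    return [m > cutoff for m in methylation_levels]

def accuracy(predicted, actual):
    """Fraction of predictions matching the true labels."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical methylation fractions, paired with whether the subject
# reported suicidal thoughts (True) or not (False).
samples = [(0.82, True), (0.75, True), (0.71, True), (0.64, False),
           (0.58, False), (0.63, True), (0.55, False), (0.61, False)]
values = [m for m, _ in samples]
labels = [at_risk for _, at_risk in samples]

preds = threshold_classify(values, cutoff=0.65)
print(f"accuracy: {accuracy(preds, labels):.2f}")  # 7 of 8 correct here
```

A real analysis would involve many more samples, cross-validation, and additional covariates; this only illustrates the thresholding idea behind a methylation-based screen.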
The researchers also tested three different sets of blood samples, the largest one involving 325 participants in the Johns Hopkins Center for Prevention Research Study. They found similar SKA2 methylation increases in individuals with suicidal thoughts or attempts. They then designed a model analysis of blood samples that predicted with 80 to 96 percent accuracy which participants were experiencing suicidal thoughts or had attempted suicide.

Cortisol-suicide link
The SKA2 gene makes proteins in the prefrontal cortex of the brain, which is involved in inhibiting negative thoughts and controlling impulsive behavior. SKA2 is specifically responsible for chaperoning stress hormone receptors into cells’ nuclei so they can do their job.
If there isn’t enough SKA2 protein, or it is altered in some way, the stress hormone receptor is unable to suppress the release of cortisol throughout the brain. Previous research has shown that cortisol release is abnormally high in people who attempt or die by suicide.
A test based on these findings might be used to predict future suicide attempts in those who are ill, to restrict lethal means or methods among those at risk, or to make decisions regarding the intensity of intervention approaches, Kaminsky says.
“We have found a gene that we think could be really important for consistently identifying a range of behaviors from suicidal thoughts to attempts to completions. We need to study this in a larger sample, but we believe that we might be able to monitor the blood to identify those at risk of suicide.”

Soldiers at risk
It might make sense for the military to test whether soldiers carry the gene mutation that makes them more vulnerable, Kaminsky says. Those at risk could be monitored more closely when they return home after deployment. A test could also be useful in a psychiatric emergency room, he says, as part of a standard suicide risk assessment.
The test could be used in all sorts of safety assessment decisions, such as the need for hospitalization and how closely to monitor a patient. Another possible use that needs more study could be to inform treatment decisions, such as whether or not to give certain medications that have been linked with suicidal thoughts.
The National Institute of Mental Health, the Center for Mental Health Initiatives, the James Wah Award for Mood Disorders, and the Solomon R. and Rebecca D. Baker Foundation supported the study.
Source: Johns Hopkins University
By analyzing the brainwaves of 16 people as they watched mainstream television, researchers were able to accurately predict the preferences of large TV audiences, with accuracy reaching 90 percent in the case of Super Bowl commercials.
“Alternative methods such as self-reports are fraught with problems as people conform their responses to their own values and expectations,” says Jacek Dmochowski, lead author of the paper and a postdoctoral fellow at City College of New York (CCNY) when the study was under way.
However, brain signals measured using electroencephalography (EEG) can, in principle, alleviate this shortcoming by providing immediate physiological responses immune to such self-biasing.
“Our findings show that these immediate responses are in fact closely tied to the subsequent behavior of the general population,” he adds. The findings appear in Nature Communications.
Lucas Parra, professor of biomedical engineering at CCNY and the paper’s senior author, explains: “When two people watch a movie, their brains respond similarly—but only if the video is engaging. Popular shows and commercials draw our attention and make our brainwaves very reliable; the audience is always ‘in-sync.’”

Brainwaves and tweets
In the study, participants watched scenes from “The Walking Dead” TV show and several commercials from the 2012 and 2013 Super Bowls. EEG electrodes on their heads captured brain activity.
The reliability of the recorded neural activity was then compared to audience reactions in the general population using publicly available social media data provided by the Harmony Institute and ratings from USA Today’s Super Bowl Ad Meter.
“Brain activity among our participants watching ‘The Walking Dead’ predicted 40 percent of the associated Twitter traffic,” says Parra. “When brainwaves were in agreement, the number of tweets tended to increase.” Brainwaves also predicted 60 percent of the Nielsen ratings that measure the size of a TV audience.
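The “in-sync” measure the researchers describe can be sketched as the mean pairwise correlation between viewers’ response time courses: ads that reliably drive similar responses score higher. The signals below are invented toy data, not real EEG recordings.

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def neural_reliability(subject_signals):
    """Mean pairwise correlation across subjects' response time courses."""
    pairs = list(combinations(subject_signals, 2))
    return sum(pearson(a, b) for a, b in pairs) / len(pairs)

# Invented toy "responses" from three viewers: similar for an engaging
# ad, dissimilar for a dull one.
engaging = [[1, 3, 2, 5, 4], [2, 3, 2, 6, 4], [1, 4, 2, 5, 5]]
dull = [[1, 3, 2, 5, 4], [5, 1, 4, 2, 3], [2, 2, 5, 1, 4]]

print(neural_reliability(engaging) > neural_reliability(dull))  # True
```

In the study, a per-stimulus reliability score of this kind is what gets compared against audience-level outcomes such as tweet volume and ratings.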
The study was even more accurate (90 percent) when comparing preferences for Super Bowl ads. For instance, researchers saw very similar brainwaves from their participants as they watched a 2012 Budweiser commercial that featured a beer-fetching dog.
The general public voted the ad their second favorite that year. In contrast, the study found little agreement in the brain activity among participants watching a GoDaddy commercial featuring a kissing couple, which was among the worst-rated ads of 2012.

How the brain responds
The CCNY researchers collaborated with Matthew Bezdek and Eric Schumacher from Georgia Tech to identify which brain regions are involved and explain the underlying mechanisms.
Using functional magnetic resonance imaging (fMRI), they found evidence that brainwaves for engaging ads could be driven by activity in visual, auditory, and attention brain areas.
“Interesting ads may draw our attention and cause deeper sensory processing of the content,” says Bezdek, a postdoctoral researcher at Georgia Tech’s School of Psychology.
Apart from applications to marketing and film, Parra is investigating whether this measure of attentional draw can be used to diagnose neurological disorders such as attention deficit disorder or mild cognitive decline.
Another potential application is to predict the effectiveness of online educational videos by measuring how engaging they are.
Source: Georgia Tech
Children diagnosed with depression in preschool are 2.5 times more likely to have the condition in elementary and middle school, report researchers.
“It’s the same old bad news about depression; it is a chronic and recurrent disorder,” says child psychiatrist Joan L. Luby, who directs Washington University’s Early Emotional Development Program.
“But the good news is that if we can identify depression early, perhaps we have a window of opportunity to treat it more effectively and potentially change the trajectory of the illness so that it is less likely to be chronic and recurring.”
The investigators followed 246 children, now ages 9 to 12, who were enrolled in the study as preschoolers when they were 3 to 5 years old. The children and their primary caregivers participated in up to six annual and four semiannual assessments. They were screened using a tool called the Preschool Feelings Checklist, developed by Luby and her colleagues, and evaluated using an age-appropriate diagnostic interview.
As part of the evaluation, caregivers were interviewed about their children’s expressions of sadness, irritability, guilt, sleep, appetite, and decreased pleasure in activity and play.
In addition, researchers used two-way mirrors to evaluate child-caregiver interactions because the team’s earlier research had shown that a lack of parental nurturing is an important risk factor for recurrence of depression.
The study was designed to follow children as they grew and to evaluate them for depression and other psychiatric conditions. However, if children were found to be seriously depressed or in danger of self harm, or if their caregivers requested treatment, they were referred to mental health providers.
Currently, there are no proven treatments for depression that arises in the preschool years. Even in depressed adults, available treatments and medications are effective only about half the time.

Depressed moms and conduct disorders
At the start of the study, 74 of the children were diagnosed with depression. When the researchers evaluated the same group six years later, they found that 79 children met the full criteria for clinical depression based on the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). This manual contains the American Psychiatric Association’s most up-to-date official guidelines for diagnosing and treating psychiatric illnesses.
More than 51 percent of the 74 children who originally were diagnosed as preschoolers also were depressed as school-age kids. Only 24 percent of the 172 children who were not depressed as preschoolers went on to develop depression during their elementary and middle school years.
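As a rough sanity check on these rounded percentages: the unadjusted ratio works out to about 2.1, while the 2.5-fold figure reported earlier presumably reflects the study’s own adjusted estimate rather than this raw division.

```python
# Rounded rates quoted above; illustrative arithmetic only, not the
# study's statistical model.
rate_preschool_depressed = 0.51  # later depression, preschool-depressed group
rate_not_depressed = 0.24        # later depression, comparison group

risk_ratio = rate_preschool_depressed / rate_not_depressed
print(f"unadjusted risk ratio: {risk_ratio:.1f}")  # about 2.1
```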
Luby’s group also found that school-age children had a high risk of depression if their mothers were depressed. And they noted that children diagnosed with a conduct disorder as preschoolers had an elevated risk of depression by school age and early adolescence, but this risk declined if the children were found to have significant maternal support.
But neither maternal depression nor a preschool conduct disorder increased the risk for later depression as much as a diagnosis of depression during the preschool years itself.
“Preschool depression predicted school-age depression over and above any of the other well-established risk factors,” Luby explains. “Those children appear to be on a trajectory for depression that’s independent of other psychosocial variables.”

What can doctors do?
Luby says her findings continue to contradict doctors and scientists who have maintained that children as young as three or four could not be clinically depressed. She advocates including depression screenings in regular medical checkups for preschoolers, but she says such monitoring is unlikely to begin anytime soon.
“The reason it hasn’t yet become a huge call to action is because we don’t yet have any proven, effective treatments for depressed preschoolers,” she explains. “Pediatricians don’t usually want to screen for a condition if they can’t then refer patients to someone who can help.”
Luby now is testing potential parent-child psychotherapies that appear promising for preschoolers with depression, but it’s too early to determine whether they work.
Her team also will continue following this group of children through puberty to determine whether depression during preschool remains a risk factor for depression during young adulthood.
The National Institute of Mental Health of the National Institutes of Health, the CHADS Coalition, and the Sidney Baer Foundation supported the work. The study appears in The American Journal of Psychiatry.
Many people who undergo preventative computerized tomography (CT) lung screenings receive positive results on the screening test, only to find out that they’re actually cancer-free.
The US Preventive Services Task Force recently recommended this CT lung screening for people at high risk for cancer. Many policymakers have expressed concern that this high false-positive rate will cause patients to become needlessly upset.
A new study of National Lung Screening Trial participant responses to false positive diagnoses, however, finds that those who received false positive screening results did not report increased anxiety or lower quality of life compared with participants who received negative screen results.
“Most people anticipated that participants who were told that they had a positive screen result would experience increased anxiety and reduced quality of life. However, we did not find this to be the case,” says Ilana Gareen, assistant professor (research) of epidemiology in the Brown University School of Public Health and lead author of the study in the journal Cancer.
The NLST’s central finding, announced in 2010, was that screening with helical CT scans reduced lung cancer deaths by 20 percent compared to screening with chest X-rays. The huge trial spanned more than a decade, enrolling more than 53,000 smokers at 33 sites.

Warn about false positive results
In the new study, Gareen and coauthors followed up with a subset of participants at 16 sites to assess the psychological effects of the CT and X-ray screenings compared in the trial.
“In the context of our study, with the consent process that we used, we found no increased anxiety or decreased quality of life at one or six months after screening for participants having a false positive,” Gareen says.
“What we expected was that there would be increased anxiety and decreased quality of life at one month and that these symptoms would subside by six months, which is why we measured at both time points, but we didn’t find any changes at either time point.”
The unexpected similarity between the participants with a negative and a false positive screen result is not because getting a false positive diagnosis is at all pleasant, Gareen says, but presumably because study participants understood that there was a high likelihood of a false positive screen result.
“We think that the staff at each of the NLST sites did a very good job of providing informed consent to our participants,” she says. “In advance of any screening, participants were advised that 20 to 50 percent of those screened would receive false positive results, and that the participants might require additional work-up to confirm that they were cancer-free.”

After the screenings
To make its assessments, Gareen’s team surveyed 2,812 NLST participants. Response rates were high, with 2,317 returning the survey at one month after screening and 1,990 at six months.
The survey included two standardized questionnaires: the 36-question Short Form SF-36, which elicits self-reports of general physical and mental health quality, and the 20-question Spielberger State Trait Anxiety Inventory.
Maryann Duggan and her staff from the Outcomes and Economics Assessment Unit at Brown administered the questionnaires by mail with telephone follow-up as required.
In the study analysis, the researchers divided people into groups based on their ultimate accurate diagnoses: 1,024 participants were “false positive,” 63 were “true positive,” 1,381 were “true negative,” and 344 had a “significant incidental finding,” meaning they didn’t have cancer but instead had another possible problem of medical importance.
The results were clear after statistical adjustment for factors that could have had a confounding influence. Whether participants received X-rays or the helical CT scans, the questionnaire scores of those with false positive diagnoses remained similar to those who were given true negative diagnoses.
Meanwhile, the scores of the true positive participants who were diagnosed with lung cancer markedly worsened over time as their battle with the disease took a physical and psychological toll.

How to advise patients
Because participants received the questionnaires at one and six months, it is possible that study participants receiving a false positive screen result experienced anxiety and reduced quality of life for a short time after receiving their screen result, Gareen says. But by one month after their screening, there was no evidence of a difference between the screen result groups.
Gareen says the results should encourage physicians to recommend appropriate screenings, despite their high false positive rates, so long as patients are properly informed of the likelihood of a positive screen result and its implications. The data provide evidence that the NLST consent process provided a good model for advising those undergoing screening, she says.
In addition to those at Brown, other authors are from Harvard Medical School and the University of Wisconsin.
The National Cancer Institute funded the NLST, including this study, which was conducted by the NCI-funded American College of Radiology Imaging Network (ACRIN), now part of the ECOG-ACRIN Cancer Research Group.
Source: Brown University
Anxious mother rats give off an odor that teaches their newborn babies to be afraid.
Researchers studied mother rats who had learned to fear the smell of peppermint and saw them teach this fear to their babies in their first days of life by using an alarm odor that is released during distress.
The scientists pinpointed the specific area of the brain where this fear transmission takes root in the earliest days of life. Their findings in animals may help explain a phenomenon that has puzzled mental health experts for generations: how a mother’s traumatic experience can affect her children in profound ways, even when an event happened long before the children were born.
The researchers say they hope their work will lead to a better understanding of why some children of traumatized mothers, or of mothers with major phobias, other anxiety disorders, or major depression, are affected while others are not.

Mothers’ memories
“During the early days of an infant rat’s life, they are immune to learning information about environmental dangers. But if their mother is the source of threat information, we have shown they can learn from her and produce lasting memories,” says Jacek Debiec, assistant professor of child and adolescent psychiatry.
“Our research demonstrates that infants can learn from maternal expression of fear, very early in life. Before they can even make their own experiences, they basically acquire their mothers’ experiences. Most importantly, these maternally-transmitted memories are long-lived, whereas other types of infant learning, if not repeated, rapidly perish.”
Research with rats allows scientists to see what’s going on inside the brain during fear transmission, in ways they could never do in humans, says Debiec, who began his research during a fellowship at New York University with Regina Marie Sullivan, senior author of the new paper published in the Proceedings of the National Academy of Sciences.

Peppermint fears
Researchers taught female rats to fear the smell of peppermint by exposing them to mild, unpleasant electric shocks while they smelled the scent, before they were pregnant. Then after they gave birth, the team exposed the mothers to just the minty smell, without the shocks, to provoke the fear response. They also used a comparison group of female rats that didn’t fear peppermint.
They exposed the pups of both groups of mothers to the peppermint smell, under many different conditions with and without their mothers present.
Using special brain imaging, and studies of genetic activity in individual brain cells and cortisol in the blood, they zeroed in on a brain structure called the lateral amygdala as the key location for learning fears. During later life, this area is key to detecting and planning response to threats—so it makes sense that it would also be the hub for learning new fears.
But the fact that these fears could be learned in a way that lasted, during a time when the baby rat’s ability to learn any fears directly was naturally suppressed, is what makes the new findings so interesting, says Debiec.

Mother’s scent
The newborns could learn their mothers’ fears even when the mothers weren’t present. Just the piped-in scent of their mother reacting to the peppermint odor she feared was enough to make them fear the same thing.
And when the researchers gave the baby rats a substance that blocked activity in the amygdala, they failed to learn the fear of peppermint smell from their mothers. This suggests that there may be ways to intervene to prevent children from learning irrational or harmful fear responses from their mothers, or reduce their impact.
The new research builds on what scientists have learned over time about the fear circuitry in the brain, and what can go wrong with it. That work has helped psychiatrists develop new treatments for human patients with phobias and other anxiety disorders—for instance, exposure therapy that helps them overcome fears by gradually confronting the thing or experience that causes their fear.
In much the same way, Debiec hopes that exploring the roots of fear in infancy, and how maternal trauma can affect subsequent generations, could help human patients. While it’s too soon to know if the same odor-based effect happens between human mothers and babies, the role of a mother’s scent in calming human babies has been shown.
Debiec, who hails from Poland, recalls working with the grown children of Holocaust survivors, who experienced nightmares, avoidance instincts, and even flashbacks related to traumatic experiences they never had themselves. While they would have learned about the Holocaust from their parents, this deeply ingrained fear suggests something more at work, he says.
Going forward, he hopes to observe human infants and their mothers, and also work with military families.
The National Institutes of Health and the Brain and Behavior Research Foundation supported the research.
Source: University of Michigan
A new study challenges the notion that minority students are less likely to complete their undergraduate degree if they attend minority-serving colleges and universities.
Looking strictly at graduation statistics, historically black colleges and universities (HBCUs) lag about 7 percent below traditional institutions, and Hispanic-serving institutions (HSIs) trail by about 11 percent.
But graduation figures don’t tell the whole story, says Stella Flores, associate professor of public policy and higher education at Vanderbilt University.
For a new study published online in Research in Higher Education, researchers culled data from the state of Texas, where there is a large concentration of minority-serving institutions (MSIs), and found that HBCUs and HSIs often have a student body that is less academically prepared than traditional schools—and tend to receive less financial aid.

Unfair criticism
These and other differences in student population skew the statistics and unfairly put MSIs in a bad light, Flores says. At the same time, MSIs often function with limited institutional resources.
“Minority-serving institutions are doing more with less,” says coauthor Toby J. Park, assistant professor and senior research associate in the Center for Postsecondary Success at Florida State University. “And that needed to be factored into the analysis.”
To determine a true apples-to-apples comparison on the likelihood of degree completion for black and Hispanic students, the researchers compared students who were similar in preparation and background at MSIs and traditional schools.
“When all the variables were factored in, we found there was no difference in a student’s likelihood of graduating based on if they were enrolled in a minority-serving institution or a traditional school,” Flores says.
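The kind of adjustment the researchers describe can be illustrated with a toy stratified comparison (all records below are invented): a raw graduation gap can disappear entirely once students are compared within the same preparation level.

```python
from collections import defaultdict

def rate(rows):
    """Fraction of records whose final field (graduated) is True."""
    return sum(1 for *_, grad in rows if grad) / len(rows)

# Invented records: (school_type, preparation, graduated). The MSI here
# enrolls more low-preparation students, but within each preparation
# level its graduation rate matches the traditional school's.
students = (
    [("MSI", "low", g) for g in (True, True, False, False, False, False)]
    + [("MSI", "high", g) for g in (True, True, False)]
    + [("Trad", "low", g) for g in (True, False, False)]
    + [("Trad", "high", g) for g in (True, True, True, True, False, False)]
)

by_school = defaultdict(list)
by_stratum = defaultdict(list)
for record in students:
    school, prep, _ = record
    by_school[school].append(record)
    by_stratum[(school, prep)].append(record)

# Raw rates differ (the MSI looks worse)...
print(f"raw: MSI {rate(by_school['MSI']):.2f}, Trad {rate(by_school['Trad']):.2f}")
# ...but within each preparation stratum the rates are identical.
for prep in ("low", "high"):
    print(f"{prep}: MSI {rate(by_stratum[('MSI', prep)]):.2f}, "
          f"Trad {rate(by_stratum[('Trad', prep)]):.2f}")
```

The study’s actual analysis uses regression-style controls on real Texas data rather than this toy stratification, but the logic of comparing like with like is the same.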
Many MSIs have unfairly been on the receiving end of criticism, she says.
“MSIs are viable and crucial contenders for increasing the rate of degree completion in America. Given the growing demographic student diversity in Texas and the nation, attention should be given to how well these schools are performing in the face of significant challenges.
“Not doing so may lead to enormous consequences to the health of the state and national workforce.”
Source: Vanderbilt University
A tamoxifen gel applied to the breast may work as well as a pill form of the drug to slow the growth of cancer cells.
Because the drug is absorbed through the skin directly into breast tissue, less of it enters the blood, potentially minimizing dangerous side effects such as blood clots and uterine cancer.
The gel was tested on women diagnosed with ductal carcinoma in situ (DCIS), a non-invasive cancer in which abnormal cells multiply and form a growth in a milk duct.
Because of potential side effects, many women with DCIS are reluctant to take oral tamoxifen after being treated with breast-saving surgery and radiation despite the drug’s effectiveness to prevent DCIS recurrence and to lower the risk of future breast cancer.
“Delivering the drug through a gel, if proven effective in larger trials, could potentially replace oral tamoxifen for breast cancer prevention and DCIS and encourage many more women to take it,” says lead author Seema Khan, professor of surgery and professor of cancer research at Northwestern University Feinberg School of Medicine.

Collateral damage
“For breast cancer prevention and DCIS therapy, effective drug concentrations are required in the breast. For these women, high circulating drug levels only cause collateral damage. The gel minimized exposure to the rest of the body and concentrated the drug in the breast where it is needed.
“There was very little drug in the bloodstream, which should avoid potential blood clots as well as an elevated risk for uterine cancer.”
Women who have completed surgery and radiation are given oral tamoxifen for five years to reduce the risk of the DCIS recurring at the same place and of new breast cancer appearing elsewhere in the same breast or the other breast. Tamoxifen is an anti-estrogen therapy for a type of breast cancer that requires estrogen to grow.
For a new study published in Clinical Cancer Research, researchers conducted a phase II clinical trial to compare the effects of the gel, 4-OHT, with oral tamoxifen. After six to 10 weeks of gel application, the reduction in Ki-67, a marker of cancer cell growth, in breast tissue was similar to that seen with oral tamoxifen.

More effective for some women
They also found equal amounts of 4-OHT present in the breast tissue of patients who used the gel or took the oral drug, but the blood levels of 4-OHT were more than five times lower in those who used the gel.
The reduction in the levels of 4-OHT in the blood also was correlated with a reduction in proteins that cause blood clots.
The study involved 26 women, ages 45 to 86, who had been diagnosed with DCIS that was sensitive to estrogen (estrogen-receptor-positive DCIS). Half the women received the gel, which they applied daily, and half the oral drug, which they took daily.
The gel application may also be more effective for some women. Oral tamoxifen doesn’t help all women who take it because it needs to be activated in the liver by specific enzymes and about a third of women lack these enzymes, Khan says. These women may not receive full benefits from the pill.
The National Cancer Institute of the National Institutes of Health and BHR Pharma, LLC supported the research.
Source: Northwestern University
Some sensitive rainforest-restricted species may survive climate change, but only if the change isn’t too fast or dramatic, according to a new study with flies.
Previous research offered a bleak prospect for tropical species’ adaptation to climate change. One of the lead researchers of the new study, Belinda Van Heerwaarden, says the impact of climate change on the world’s biodiversity is largely unknown.
“Whilst many believe some species have the evolutionary potential to adapt, no one really knows for sure, and there are fears that some could become extinct.”

Updating the experiment
Van Heerwaarden and Carla M. Sgrò expanded on an experiment from the 2000s in which tropical flies native to Australian rain forests, called Drosophila birchii, were taken out of the damp rainforest and exposed to very dry conditions to mimic the effects of potential climate change.
In the original experiment, the flies died within hours. Even after the longest-surviving flies were rescued and allowed to breed for more than 50 generations, they were no more resistant, suggesting they lacked the evolutionary capacity to survive.
In Van Heerwaarden and Sgrò’s version, they raised the humidity from 10 percent to 35 percent.
“The first experiment tested whether the flies could survive in 10 percent relative humidity. That’s an extreme level that’s well beyond the changes projected for the wet tropics under climate change scenarios over the next 30 years.”
“In our test we decreased the humidity to 35 percent, which is much more relevant to predictions of how dry the environment will become in the next 30 to 50 years. We discovered that when you change the environment, you get a totally different answer,” says Van Heerwaarden.

Diverse genes
While on average most of the flies died after just 12 hours, some survived a little longer than others. By comparing different families of flies, the researchers discovered that the flies’ genes influenced the differences in their resistance.
To test this theory, the longest-living flies were rescued and allowed to breed. After just five generations, one species evolved to survive 23 percent longer in 35 percent humidity.
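The selective-breeding design can be caricatured as a toy simulation: each generation, only the longest-surviving flies breed, and offspring inherit a parent's survival time plus random variation. All parameters below are invented; this is not the authors' analysis.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

def select_and_breed(population, survival_fraction=0.2, noise_sd=0.5):
    """Truncation selection: only the longest-surviving flies breed;
    offspring inherit a parent's survival time plus random variation."""
    survivors = sorted(population, reverse=True)
    parents = survivors[: max(2, int(len(survivors) * survival_fraction))]
    return [random.choice(parents) + random.gauss(0, noise_sd)
            for _ in range(len(population))]

# Invented starting population: survival times (hours) in dry conditions.
flies = [random.gauss(12.0, 1.5) for _ in range(200)]
start_mean = sum(flies) / len(flies)

for _ in range(5):  # five generations, as in the experiment
    flies = select_and_breed(flies)

end_mean = sum(flies) / len(flies)
print(f"mean survival rose by {100 * (end_mean - start_mean) / start_mean:.0f}%")
```

The point of the sketch is that selection only works when there is heritable variation to select on; if every fly had the same survival time, breeding the "best" would change nothing, which is why the genetic variation described next matters.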
As well as looking at the potential impact of climate change, the research also highlights the importance of genetic diversity within species.
Sgrò says this finding suggests there is genetic variation present in these flies, which means they can evolve in response to climate change.

Persist ‘a little longer’
“Tropical species make up the vast majority of the world’s biodiversity and climatic models predict these will be most vulnerable to climate change. However, these models do not consider the extent to which evolutionary response may buffer the negative impacts of climate change,” says Sgrò.
“Our research indicates that the genes that help flies temporarily survive extreme dryness are not the same as those that help them resist more moderate conditions. The second set of genes are the ones that enable these flies to adapt,” she says.
“We have much work to do but this experiment gives us hope that some tropical species have the capacity to survive climate change,” says Sgrò.
The results mean that other species thought to be at serious risk might have some hope of persisting a little longer under climate change than previously thought. The findings appear in the Proceedings of the Royal Society B.
In the next phase of the research, Van Heerwaarden and Sgrò will investigate whether the tolerance of climatic stress seen in the tropical flies extends to other species.
Source: Monash University
Patients with dementia are more likely to have pacemakers implanted for irregular heart rhythm, such as atrial fibrillation, than are people without cognitive difficulties.
In a research letter published online today in JAMA Internal Medicine, the researchers note the finding runs counter to expectations that less aggressive interventions are the norm for patients with the incurable and disabling illness.
To look at the relationships between cognitive status and implantation of a pacemaker, lead investigator Nicole Fowler, a health services researcher formerly at the University of Pittsburgh School of Medicine, and her team examined data from 33 Alzheimer Disease Centers (ADCs) entered between September 2005 and December 2011 into the National Alzheimer’s Coordinating Center (NACC) Uniform Data Set.
Data from more than 16,000 people who had a baseline and at least one follow-up visit at an ADC were reviewed. At baseline, 48.5 percent of participants had no cognitive impairment, 21.3 percent had a mild cognitive impairment (MCI), and 32.9 percent had dementia.
The researchers found that participants with cognitive impairment were significantly older and more likely to be male, to have ischemic heart disease, and to have a history of stroke. Rates of atrial fibrillation and congestive heart failure were similar among the groups.
The likelihood of getting a pacemaker, a device that regulates the heartbeat, was lowest for those who had no cognitive difficulties and highest for dementia patients.
“Participants who had dementia before assessment for a new pacemaker were 1.6 times more likely to receive a pacemaker compared to participants without cognitive impairment, even after clinical factors were taken into account,” says Fowler, now at Indiana University.
“This was a bit surprising because aggressive interventions might not be appropriate for this population, whose lives are limited by a severely disabling disease. Future research should explore how doctors, patients, and families come to make the decision to get a pacemaker.”
There was no difference among the groups in the rates of implantation of cardioverter defibrillators, which deliver a small shock to get the heart to start beating again if it suddenly stops.
Coauthors of the paper are from the University of Pittsburgh and Duke University Medical Center.
The Agency for Healthcare Research and Quality and National Institutes of Health, National Institute on Aging supported the research.
Source: University of Pittsburgh
Scientists say it’s possible to predict first impressions based on different facial features, such as eye height or eyebrow width.
The researchers developed a model based on 65 different physical features. They used the model to predict how people would make quick judgments about another person’s character, for example whether the person was friendly, trustworthy, or competent.
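The paper’s actual model is not reproduced here, but the general approach it describes, mapping measured facial features to trait judgments, can be sketched as a linear model. The features, weights, and ratings below are synthetic stand-ins, not the study’s data:

```python
import numpy as np

# Minimal sketch of a feature-to-judgment model like the one described
# above: 65 measured facial features (e.g. eye height, eyebrow width)
# mapped linearly to a trait rating. All data here are synthetic.

rng = np.random.default_rng(0)
n_faces, n_features = 100, 65

features = rng.normal(size=(n_faces, n_features))   # synthetic face measurements
true_weights = rng.normal(size=n_features)          # hidden "ground truth" mapping
ratings = features @ true_weights + rng.normal(scale=0.1, size=n_faces)

# Fit weights by least squares, then predict ratings from features alone.
w, *_ = np.linalg.lstsq(features, ratings, rcond=None)
predicted = features @ w
r = np.corrcoef(ratings, predicted)[0, 1]
print(f"correlation between actual and predicted ratings: {r:.2f}")
```

With clean synthetic data the fit is nearly perfect; the point is only to show the shape of the approach, in which a handful of measurable features carries most of the predictive signal for a snap judgment.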
The study, published in the Proceedings of the National Academy of Sciences, shows how important faces and specific images of faces can be in creating a favorable or unfavorable first impression.
“Showing that even supposedly arbitrary features in a face can influence people’s perceptions suggests that careful choice of a photo could make (or break) others’ first impressions of you,” says Richard Vernon, a PhD student who was part of the research team from the University of York.
The team also applied the model in reverse and created cartoon-like faces that produced predictable first impressions. These images also illustrate the features that are associated with particular social judgements.
“In everyday life I am not conscious of the way faces and pictures of faces are influencing the way I interact with people. Whether in ‘real life’ or online, it feels as if a person’s character is something I can just sense,” says Tom Hartley, who co-led the research with Professor Andy Young.
“These results show how heavily these impressions are influenced by visual features of the face. It’s quite an eye-opener!” adds Hartley.
The impressions we create through images of our faces (“avatars” or “selfies”) are becoming more and more important in a world where we increasingly get to know one another online rather than in the flesh.
“We make first impressions of others so intuitively that it seems effortless. I think it’s fascinating that we can pin this down with scientific models,” says Clare Sutherland, a PhD student at York. “I’m now looking at how these first impressions might change depending on different cultural or gender groups of perceivers or faces.”
Source: University of York
Even very brief running—just 5 to 10 minutes a day—can help people live longer, according to new research.
“Running is one of the most convenient and popular exercises,” says Duck-chul “D.C.” Lee, an assistant professor in kinesiology at Iowa State University.
“Running is good for your health—but more may not be better. You don’t have to think it’s a big challenge. We found that even 10 minutes per day is good enough. You don’t need to do a lot to get the benefits from running.”
The study, published in the Journal of the American College of Cardiology, finds that leisure-time runners are expected to live three years longer than non-runners.
The research shows that running can reduce a person’s all-cause mortality risk by 30 percent and cardiovascular mortality risk by 45 percent. In other words, the risk of dying from any cause, such as cancer, stroke, or heart attack, drops by nearly a third, and the risk of dying from cardiovascular disease is cut nearly in half.
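These figures are relative risk reductions, not absolute ones. A quick sketch of the distinction, using a hypothetical baseline mortality risk of 10 percent (the article does not report absolute risks):

```python
# Relative vs. absolute risk: a 30% relative reduction applied to a
# hypothetical 10% baseline risk. The baseline value is invented for
# illustration only.

def adjusted_risk(baseline_risk, relative_reduction):
    """Apply a relative risk reduction to a baseline risk."""
    return baseline_risk * (1 - relative_reduction)

baseline = 0.10                             # hypothetical risk for non-runners
all_cause = adjusted_risk(baseline, 0.30)   # 30% relative reduction
cardio = adjusted_risk(baseline, 0.45)      # 45% relative reduction

print(f"all-cause:      {baseline:.0%} -> {all_cause:.1%}")
print(f"cardiovascular: {baseline:.0%} -> {cardio:.1%}")
```

So under this invented baseline, a 30 percent relative reduction moves the risk from 10 percent to 7 percent, an absolute drop of 3 percentage points.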
People who ran less than an hour each week showed the same mortality benefits as those who ran more than three hours a week, Lee says.
Does exercise have its limits?
But extensive running can sometimes cause more harm than good. There is also a chance people who do go above and beyond with exercise are opening themselves up for greater risk of joint damage, bone damage, and heart attacks.
“Most people know that exercise is good for their health,” Lee says. “With too much vigorous-intensity aerobic exercise, there might be a side effect. Is there any limit that we shouldn’t go over? It is possible that people who do too much might be harming their health. However, we need more studies on this important issue.”
Lee looked at data that monitored more than 50,000 individuals’ workout habits for 15 years. He drew conclusions by identifying the cause of death in each individual and relating it to the amount of exercise the person completed weekly.
The research is perhaps the largest study about running to date, with long-term follow-up to assess both all-cause and cardiovascular mortality, says study coauthor Carl “Chip” Lavie, a cardiologist and professor at the University of Queensland School of Medicine in New Orleans.
“It is one of the few [studies] that has information about running doses and changes in running patterns over time,” Lavie says. “It shows that even small running doses, such as six miles per week, one to two times a week, and slower than 10-minute miles, appear to be associated with maximal mortality benefits.”
Not much time, not much money
The work emphasizes the importance of running for a few minutes a day.
“The big problem for many is not having enough time to exercise,” says Lavie. “Our study demonstrates that one can gain substantial reduction in mortality risks even with low doses of running.”
The availability and affordability of running is what attracted Lee to look at the sport’s impact.
“Many studies need some kind of equipment,” Lee says. “Running is so convenient and popular. Most people can run. It’s easy and there are a good number of people who are interested in running as an exercise.”
Researchers from University of South Carolina and Louisiana State University also contributed to the study.
Source: Iowa State University
Trade-offs, which are evolutionary compromises, drive the diversity of life, according to new research.
“Biologists have long known that when species compete for limited resources such as food, they are pressured to diversify,” says Chris Adami, professor of microbiology and molecular genetics at Michigan State University.
“But what we found through computer simulation is that trade-offs are the main driver of diversification when resources are scarce. The stronger the trade-offs, the more diversification will occur.”
However, there is a price to pay for diversifying.
“Trade-offs in biology can take many forms, but in general, they imply that organisms cannot optimize all traits at the same time,” says Bjorn Ostman, a postdoctoral research associate in the Adami lab.
In other words, they can’t have it all.
Lemur food
This reality is well known in the animal world. For example, in Madagascar, lemurs have evolved to eat different things, which ensures that there is enough food for all. Some lemur species eat fruit, but due to trade-offs, cannot eat leaves because their digestive tracts have become too short and cannot process the fiber.
Other lemurs have long digestive tracts and can eat leaves, but they get sick from eating fruit because it ferments from staying in their guts too long. So to preserve the supply and demand of food, a compromise evolved between fruit- and leaf-eating lemurs—they are biologically prevented from eating each other’s food.
“We did not know until now just how essential such trade-offs were in driving diversification among species,” Ostman says. “The computer simulations allowed us to remove other possible factors that influence speciation, such as geographical barriers.”
Elizabeth Ostrowski, assistant professor at the University of Houston (who was not involved in the study), says, “One of the things that is satisfying about this paper is that the authors show not only that distinct ecotypes—populations that have adapted to their specific environment—emerge and are stably maintained, but that the number of distinct ecotypes increases with the severity of trade-offs. At the most extreme, there is a specialist ecotype for every different resource.”
Preserving diversity
Evolution’s story is told only by the winners, Adami says.
“Even using the fossil record, we only ever see the end products, not how the process unfolds,” he says. “By using computer simulations of evolution, we now have a better understanding of how the evolutionary story unfolds, and we can use that knowledge to understand how diversity forms, and also how to preserve it.
“This kind of knowledge will ultimately be important for preserving our current ecosystems.”
Randall Lin, doctoral student at the California Institute of Technology (Caltech), is a coauthor of the study, which appears online in the journal BMC Evolutionary Biology.
Source: Michigan State University
Applying just the right amount of tension to a chain of carbon atoms can turn it from a metallic conductor to an insulator, report researchers.
Stretching the material known as carbyne—a hard-to-make, one-dimensional chain of carbon atoms—by just 3 percent can begin to change its properties in ways that engineers might find useful for mechanically activated nanoscale electronics and optics.
Until recently, carbyne has existed mostly in theory, though experimentalists have made some headway in creating small samples of the finicky material. The carbon chain would theoretically be the strongest material ever, if only someone could make it reliably.
The first-principle calculations by Rice University theoretical physicist Boris Yakobson and his coauthors, postdoctoral researcher Vasilii Artyukhov and graduate student Mingjie Liu, show that stretching carbon chains activates the transition from conductor to insulator by widening the material’s band gap.
Band gaps, which free electrons must overcome to complete a circuit, give materials the semiconducting properties that make modern electronics possible.
‘The uncertainty principle in action’
In their previous work on carbyne, the researchers believed they saw hints of the transition, but they had to dig deeper to find that stretching would effectively turn the material into a switch.
Each carbon atom has four electrons available to form covalent bonds. In their relaxed state, the atoms in a carbyne chain would be more or less evenly spaced, with two bonds between them. But the atoms are never static, due to natural quantum uncertainty, which Yakobson says keeps them from slipping into a less-stable Peierls distortion.
“Peierls said one-dimensional metals are unstable and must become semiconductors or insulators,” Yakobson says. “But it’s not that simple, because there are two driving factors.”
One, the Peierls distortion, “wants to open the gap that makes it a semiconductor.” The other, called zero-point vibration (ZPV), “wants to maintain uniformity and the metal state.”
Yakobson explains that ZPV is a manifestation of quantum uncertainty, which says atoms are always in motion.
“It’s more a blur than a vibration,” he says. “We can say carbyne represents the uncertainty principle in action, because when it’s relaxed, the bonds are constantly confused between 2-2 and 1-3, to the point where they average out and the chain remains metallic.”
But stretching the chain shifts the balance toward alternating long and short (1-3) bonds. That progressively opens a band gap beginning at about 3 percent tension, according to the computations. The team created a phase diagram to illustrate the relationship of the band gap to strain and temperature.
The carbyne conundrum
How carbyne is attached to electrodes also matters, Artyukhov says. “Different bond connectivity patterns can affect the metallic/dielectric state balance and shift the transition point, potentially to where it may not be accessible anymore,” he says. “So one has to be extremely careful about making the contacts.”
“Carbyne’s structure is a conundrum,” he says. “Until this paper, everybody was convinced it was single-triple, with a long bond then a short bond, caused by Peierls instability.” He says the realization that quantum vibrations may quench Peierls, together with the team’s earlier finding that tension can increase the band gap and make carbyne more insulating, prompted the new study.
“Other researchers considered the role of ZPV in Peierls-active systems, even carbyne itself, before we did,” Artyukhov says. “However, in all previous studies only two possible answers were being considered: either ‘carbyne is semiconducting’ or ‘carbyne is metallic,’ and the conclusion, whichever one, was viewed as sort of a timeless mathematical truth, a static ‘ultimate verdict.’
“What we realized here is that you can use tension to dynamically go from one regime to the other, which makes it useful on a completely different level.”
Yakobson notes the findings should encourage more research into the formation of stable carbyne chains and may apply equally to other one-dimensional chains subject to Peierls distortions, including conducting polymers and charge/spin density-wave materials.
The Robert Welch Foundation, the US Air Force Office of Scientific Research, and the Office of Naval Research Multidisciplinary University Research Initiative supported the research, which appears in the journal Nano Letters.
The researchers used the Data Analysis and Visualization Cyberinfrastructure (DAVinCI) supercomputer supported by the NSF and administered by Rice’s Ken Kennedy Institute for Information Technology.
Source: Rice University
Vouchers to buy fresh fruits and vegetables at farmers markets could mean healthier meals for families on food assistance, new research suggests.
“In terms of healthy food options, farmers market incentives may be able to bring a low-income person onto the same playing field as those with greater means,” says Carolyn Dimitri, associate professor of food studies at New York University and lead author of the study in the journal Food Policy.
Economically disadvantaged families tend to consume diets low in fruits and vegetables, partially due to poor access to healthy food and their inability to pay for it.
Farmers markets may help fill in gaps in communities commonly referred to as “food deserts,” which lack access to fresh, healthy food.
One in four farmers markets in the US accepts Supplemental Nutrition Assistance Program (SNAP) benefits, formerly known as food stamps.
In recent years, several local governments and nonprofit organizations have augmented federal food assistance by offering vouchers to use at farmers markets. The vouchers increase the value of food assistance when used to buy fruits and vegetables at markets.
While most food assistance programs fail to address nutritional quality—for instance, SNAP benefits can be used to buy ice cream and soda—farmers market incentives can only be used on fresh produce, increasing their potential to improve consumers’ diets.
Food shopping habits
To assess the effect of farmers market incentives on those receiving food assistance, researchers enrolled 281 economically disadvantaged women in their study, recruiting participants at five farmers markets in New York, San Diego, and Boston.
The women were all caring for young children and received federal food assistance through SNAP or Women, Infants, and Children (WIC).
Researchers collected demographic information and surveyed the participants throughout the 12-16 week study to learn about their food shopping habits and fresh vegetable consumption. Each time participants shopped at the farmers market, they received up to $10 in vouchers to be used toward purchasing fruits and vegetables. The women matched the amount of the farmers market vouchers with cash or federal food benefits.
Despite incentives, retaining participants was a challenge, suggesting that factors other than incentives influence farmers market shopping habits. A total of 138 participants completed the study, which is consistent with retention rates for similar studies. Women who were older, visited food banks, and lived in “food deserts” were the most likely to drop out of the study.
Seasonal solution
For those who completed the study, more than half reported consuming vegetables more frequently at the end of the study. Participants with low levels of education and those who consumed little fresh produce at the beginning of the study were the most likely to increase the amount of produce in their diets.
“Our food choices are very complex, and issues with food security won’t be solved with a single program,” Dimitri says. “Even though not all participants increased their consumption of produce, our study suggests that nutrition incentives are a promising option that can help economically disadvantaged families eat healthier diets.”
Additional research is needed to understand why produce consumption did not increase among nearly half of the participants, despite their increased purchasing power, and determine what measures can be taken to engage the vulnerable group that dropped out of the study.
While farmers markets are good sources of healthy food, the researchers note that relying on them exclusively for food security is problematic, as markets are usually open on limited days and closed in the winter.
Researchers from Penn State, the University of California, San Diego School of Medicine, and the Wholesome Wave Foundation, a nonprofit organization working to improve affordability of and access to fresh, locally grown food, contributed to the study.
Peer-led interventions that target parents’ well-being can significantly reduce stress, depression, and anxiety among mothers of children with disabilities, new research suggests.
For a new study, researchers examined two treatment programs in a large number of primary caregivers of a child with a disability. Participants in both groups experienced improvements in mental health, sleep, and overall life satisfaction and showed less dysfunctional parent-child interactions.
“The well-being of this population is critically important because, compared to parents of typically developing children, parents of children with developmental disabilities experience substantially higher levels of stress, anxiety, and depression, and as they age, physical and medical problems,” says lead author Elisabeth Dykens, professor of psychology and human development, pediatrics, and psychiatry at Vanderbilt University.
“Add to this the high prevalence of developmental disabilities—about one in five children—and the fact that most adult children with intellectual disabilities remain at home with aging parents, and we have a looming public health problem on our hands.”
In the study, published in Pediatrics, nearly 250 mothers of children with autism or other disabilities were randomized into one of two programs: Mindfulness-Based Stress Reduction (MBSR) and Positive Adult Development (PAD). The MBSR approach is more physical, emphasizing breathing exercises, deep belly breathing, meditation, and gentle movement. The PAD approach is more cognitive and uses exercises such as practicing gratitude.
Less anxiety, better sleep
Supervised peer mentors, all mothers of children with disabilities, received four months of training on the intervention curriculum, the role of a mentor, and research ethics. The peer mentors led six weeks of group treatments in 1.5-hour weekly sessions with the research participants.
At baseline, 85 percent of participants had significantly elevated stress, 48 percent were clinically depressed, and 41 percent had anxiety disorders.
Both the MBSR and PAD treatments led to significant reductions in stress, depression, and anxiety and improved sleep and life satisfaction among participants, and mothers in both treatments also showed fewer dysfunctional parent-child interactions.
While mothers in the MBSR treatment saw the greatest improvements, participants in both treatments continued to improve during follow-up, and improvements in other areas were sustained up to six months after treatment.
Shorter telomeres
“Our research and findings from other labs indicate that many mothers of children with disabilities have a blunted cortisol response, indicative of chronic stress,” Dykens says. “Compared to mothers in control groups, this population mounts a poorer antibody response to influenza vaccinations, suggesting a reduced ability to fight both bacterial and viral infections.
“They also have shorter telomeres, associated with an advanced cellular aging process, and have poorer sleep quality, which can have deleterious health effects. All of this results in parents who are less available to manage their child’s special needs or challenging behaviors.”
Dykens and colleagues will next examine how fathers fared in the interventions, as well as the health status and medical conditions of the mothers. They will also study differences between civilian and military parents of children with developmental disabilities.
The National Institutes of Health’s National Center for Complementary and Alternative Medicine, the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Center for Advancing Translational Sciences, and the National Institute of Mental Health funded the study.
Source: Vanderbilt University
The decline of wildlife can cause hunger and unemployment and, consequently, fuel increased crime and political instability, report researchers.
Some scholars say that in the nineteenth century, the near-extinction of the American bison led to the near-collapse of midwestern Native American cultures. That other civilizations have been affected in similar ways demonstrates the deep interconnectedness of the health of a society and the health of its wildlife.
“These links are poorly recognized by many environmental leaders,” says Douglas McCauley, an assistant professor in the department of ecology, evolution, and marine biology at University of California, Santa Barbara.
He offers other examples of this correlation: “The population crash of cod caused the disintegration of centuries-old coastal communities in Canada and cost billions of dollars in relief aid. The collapse of fisheries in Somalia contributed to explosions in local and international maritime violence.”
According to lead author Justin Brashares, associate professor of ecology and conservation in UC Berkeley’s department of environmental science, policy, and management, the effects of global wildlife declines drive violent conflicts, organized crime, and even child labor, necessitating a far greater collaboration with disciplines beyond conservation biology.
“Impoverished families rely upon wildlife resources for their livelihoods,” he says. “We can’t apply economic models that prescribe increases in prices or reduced demand as supplies become scarce.
“Instead, more labor is required to capture scarce wild animals and fish, and children are a major source of cheap labor. Hundreds of thousands of impoverished families are selling their kids to work in harsh conditions.”
Wildlife trafficking
The paper, published in Science, connects the dots between the rise of piracy and maritime violence in Somalia to battles over fishing rights. What began as an effort to repel foreign vessels illegally trawling in Somali waters escalated into hijacking fishing and then nonfishing vessels for ransom.
The authors compare wildlife poaching to the drug trade, noting that huge profits from the trafficking of luxury wildlife goods such as elephant tusks and rhino horns have attracted guerilla groups and crime syndicates worldwide.
They point to the Lord’s Resistance Army, al-Shabab, and Boko Haram as groups known to use wildlife poaching to fund terrorist attacks.
McCauley and colleagues note that solving the problem of wildlife trafficking is every bit as complex as slowing drug trafficking and will require a multi-pronged approach.
“What we don’t want to do is simply start a war on poachers that copies methods being used without great success in our war on drugs,” says McCauley, who began this work as a postdoctoral researcher in Brashares’ lab.
Children, crime, communities
The report stresses that there is reason for hope in addressing the problems of wildlife loss.
“Fixing social problems that stem from a scarcity of wildlife is different—and fundamentally more hopeful—than fixing social problems that arise from other types of natural resource scarcity,” McCauley says. “With money and good politics we can breed more rhino, but we can’t make more diamonds or oil.”
As potential models for an integrated approach, the researchers point to organizations and initiatives in the field of climate change, such as the Intergovernmental Panel on Climate Change and the United for Wildlife Collaboration. But, they note, multidisciplinary programs that address wildlife declines at local and regional levels must accompany those global efforts.
As examples, they cite local governments in Fiji and Namibia that head off social tension in their respective countries by granting exclusive rights to hunting and fishing grounds and by using management zones to reduce poaching and improve the livelihoods of local populations.
“This prescribed revisioning of why we should conserve wildlife helps make clearer what the stakes are in this game,” says McCauley. “Losses of wildlife essentially pull the rug out from underneath societies that depend on these resources.
“We are not just losing species; we are losing children, breaking apart communities, and fostering crime. This makes wildlife conservation a more important job than it ever has been.”
Source: UC Santa Barbara
A new test increases the odds by 30 percent that people with thyroid cancer will undergo the correct initial surgery.
“Before this test, about one in five potential thyroid cancer cases couldn’t be diagnosed without an operation to remove a portion of the thyroid,” says Linwah Yip, assistant professor of surgery in the University of Pittsburgh School of Medicine.
Yip says without the test a second surgery to remove the thyroid was often required if the portion removed during the first surgery came back positive for cancer.
“The molecular testing panel now bypasses that initial surgery, allowing us to go right to fully removing the cancer with one initial surgery. This reduces risk and stress to the patient, as well as recovery time and costs,” adds Yip, lead author of the study published in the Annals of Surgery.
Cancer in the thyroid, which is located in the “Adam’s apple” area of the neck, is now the fifth most common cancer diagnosed in women. Thyroid cancer is one of the few cancers that continues to increase in incidence, although the five-year survival rate is 97 percent.
Previously, the most accurate form of testing for thyroid cancer was a fine-needle aspiration biopsy, where a doctor guides a thin needle to the thyroid and removes a small tissue sample for testing. However, in 20 percent of these biopsies, cancer cannot be ruled out.
A lobectomy, a surgical operation to remove half of the thyroid, is then needed to diagnose or rule out thyroid cancer. In the case of a postoperative cancer diagnosis, a second surgery is required to remove the rest of the thyroid.
Researchers have identified certain gene mutations that are indicative of an increased likelihood of thyroid cancer, and the new molecular testing panel can be run using the sample collected through the initial, minimally invasive biopsy, rather than a lobectomy. When the panel shows these mutations, a total thyroidectomy is advised.
Yip and her colleagues followed 671 patients with suspicious thyroid nodules who received biopsies. Approximately half the biopsy samples were run through the panel, and the other half were not. Patients whose tissue samples were not tested with the panel had a statistically significant, 2.5-fold higher likelihood of undergoing an initial lobectomy and then requiring a second operation.
“We’re currently refining the panel by adding tests for more genetic mutations, thereby making it even more accurate,” says coauthor Yuri Nikiforov, a professor in the pathology department. “Thyroid cancer is usually very curable, and we are getting closer to quickly and efficiently identifying and treating all cases of thyroid cancer.”
A grant from UPMC funded the study.
Source: University of Pittsburgh
A potentially deadly amoeba often found in lakes and rivers is likely benefiting from a widespread drought in the United States.
The drought is making water warmer than usual, allowing the heat-loving amoeba to proliferate.
A nine-year-old Kansas girl recently died of an infection caused by this parasite after swimming in lakes. The amoeba enters the body through the nose and travels to the brain. Nose plugs can lower the odds of the rare but fatal pathogen entering the body.
The amoeba, Naegleria fowleri, is classified as a sapronosis, an infectious disease caused by pathogenic microorganisms that inhabit aquatic ecosystems and soil rather than a living host.
To quantify the differences between sapronoses and conventional infectious diseases, researchers developed a mathematical model using population growth rates. Of the 150 randomly selected human pathogens examined in this research, one-third turned out to be sapronotic—specifically 28.6 percent of the bacteria, 96.8 percent of the fungi, and 12.5 percent of the protozoa.
The team reports their findings in the journal Trends in Parasitology.
“Sapronoses do not follow the rules of infectious diseases that are transmitted from host to host,” says lead author Armand Kuris, a professor at University of California, Santa Barbara. “They are categorically distinct from the way we think infectious diseases should operate.
“The paper tries to bring this group of diseases into sharp focus and get people to think more clearly about them.”

No virulence trade-off
A well-known example of a sapronosis is Legionnaires’ disease, caused by the bacterium Legionella pneumophila, which can be transmitted by aerosolized water and/or contaminated soil. The bacteria can even live in windshield-wiper fluid. Legionnaires’ disease acquired its name in July 1976, when an outbreak of pneumonia occurred among people attending an American Legion convention at the Bellevue-Stratford Hotel in Philadelphia. Of the 182 reported cases, mostly men, 29 died.
A major group of emerging diseases, sapronotic pathogens can exist independently in an environmental reservoir like the cooling tower of the Philadelphia hotel’s air conditioning system. Some, like the bacterium that causes cholera, persist in contaminated water until they encounter a host. Zoonoses, by contrast, require a living animal host.
According to Kuris, diseases borne by a vector—a person, animal, or microorganism that carries and transmits an infectious pathogen into another living organism—are more or less virulent depending on how efficiently they are transmitted.
As a result, virulence evolves to a level where it is balanced with transmission in order to maximize the spread of the pathogen. However, Kuris notes that there is no virulence trade-off for sapronotic disease agents.
A sapronotic pathogen can persist regardless of any changes in host abundance or transmission rates.
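The contrast can be made concrete with a toy threshold model. This is our illustration of the general point, not the authors’ published model, and the parameter values are arbitrary.

```python
# Toy illustration (not the authors' published model) of why sapronotic
# pathogens escape the host-density threshold that governs host-to-host
# infectious diseases. Parameter values are arbitrary.

def host_transmitted_R0(beta, host_density, recovery_rate):
    # Classic SIR-style threshold: the pathogen persists only if R0 > 1,
    # so it fades out when susceptible hosts become scarce.
    return beta * host_density / recovery_rate

def sapronotic_growth(r, reservoir_population):
    # A sapronote grows in its environmental reservoir (water, soil) at
    # rate r per time step, independent of host abundance.
    return reservoir_population * (1 + r)

# Host-to-host pathogen: thinning the host population pushes R0 below 1.
print(host_transmitted_R0(beta=0.5, host_density=4, recovery_rate=1.0))  # 2.0, persists
print(host_transmitted_R0(beta=0.5, host_density=1, recovery_rate=1.0))  # 0.5, dies out

# Sapronote: keeps growing even with zero hosts present.
print(sapronotic_growth(r=0.5, reservoir_population=1000.0))  # 1500.0
```

Because the sapronote’s growth term contains no host variable at all, reducing host contact or host numbers does nothing to its reservoir population, which is the sense in which classic epidemic models do not apply.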
“You can’t model a sapronosis like valley fever with classic models for infectious diseases,” says coauthor Kevin Lafferty, adjunct faculty at UC Santa Barbara and a marine ecologist with the Western Ecological Research Center of the US Geological Survey.
“To combat sapronoses, we need new theories and approaches. Our paper is a start in that direction.”
Source: UC Santa Barbara
Many people value rewards they choose themselves more than rewards they just receive, even when the rewards are actually equivalent. A new study suggests that this quirk arises from how the brain reinforces learning from reward.
So, the next time a friend raves about the movie he chose and is less enthusiastic about the just-as-good one that you picked, you might be able to chalk it up to his basic learning circuitry and a genetic difference that affects it.
The new research links “credit assignment”—how the brain reinforces learning only in the exact circuits that caused the rewarding choice—to an oft-observed quirk of behavior called “choice bias”—we value the rewards we choose more than equivalent rewards we don’t choose.
The researchers used computational modeling and behavioral and genetic experiments to discover evidence that choice bias is essentially a byproduct of credit assignment.
“We weren’t looking to explain anything about choice bias to start off with,” says lead author Jeffrey Cockburn, a graduate student in the research group of senior author Michael Frank, associate professor of cognitive, linguistic, and psychological sciences at Brown University.
“This just happened to be the behavioral phenomenon we thought would emerge out of this credit assignment model.”

Amped up dopamine?
The model, developed by Frank, Cockburn, and coauthor Anne Collins, a postdoctoral researcher, was based on prior research on the function of the striatum, a part of the brain’s basal ganglia (BG) that is principally involved in representing reward values of actions and picking one.
An interaction among three key BG regions moderates that decision-making process. When a rewarding choice has been made, the substantia nigra pars compacta (SNc) releases dopamine into the striatum to reinforce connections between cortex and striatum, so that rewarded actions are more likely to be repeated. But how does the SNc reinforce just the circuits that made the right call?
The authors proposed a mechanism by which another part of the substantia nigra, the SNr, detects when actions are worth choosing and then simultaneously amplifies any dopamine signal coming from the SNc.
“The novel part here is that we have proposed a mechanism by which the BG can detect when it has selected an action and should therefore amplify the dopamine reinforcing event specifically at that time,” Frank says.
“When the SNr decides that striatal valuation signals are strong enough for one action, it releases the brakes not only on downstream structures that allow actions to be executed, but also on the SNc dopamine system, so any unexpected rewards are amplified.”
Specifically, dopamine provides reinforcement by enhancing the responsiveness of connections between cells so that a circuit can more easily repeat its rewarding behavior in the future. But along with that process of reinforcing the action of choosing, the value placed on the resulting reward becomes elevated compared to rewards not experienced this way.

Rewarding games
That prediction seemed intriguing, but it still had to be tested. The authors identified both behavioral and genetic tests that would be telling.
They recruited 80 people at Brown and elsewhere in Providence to play a behavioral game and to donate some saliva for genetic testing.
The game first presented subjects with pictures of arbitrary Japanese characters, each carrying a different probability of reward, ranging from a 20 percent to an 80 percent chance of winning a point rather than losing one.
For some characters, the player chose the character and then discovered its resulting reward or penalty; for others, the character and its result were simply given to them.
After that learning phase, the subjects were then presented the characters in pairs and instructed to pick the one they thought had the highest chance of winning based on what they had learned.
The researchers built the game so that for every character a player could choose, there was an equally rewarding one that had merely been given to them. On average, players showed a clear choice bias in that they were more likely to prefer rewarding characters that they had chosen over equally rewarding characters they had been given.
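That pattern is what a credit-assignment account predicts. As a rough illustration (our simplification, not the authors’ full basal-ganglia model), suppose positive prediction errors are amplified only when the outcome follows the agent’s own choice; the learned value of a chosen option then settles above that of an equally rewarding option whose outcomes were merely observed. All parameter values here are assumed:

```python
import random

# Toy delta-rule learner. Our simplification of the credit-assignment idea,
# not the authors' model: positive prediction errors get an amplified
# learning rate when the outcome followed the agent's own choice.

def learn_value(p_reward, alpha_pos, alpha_neg, trials=2000, seed=0):
    """Learn the value of an option that pays +1 with probability p_reward,
    else -1, using separate learning rates for good and bad surprises."""
    rng = random.Random(seed)
    v = 0.0
    for _ in range(trials):
        reward = 1.0 if rng.random() < p_reward else -1.0
        delta = reward - v  # reward prediction error
        v += (alpha_pos if delta > 0 else alpha_neg) * delta
    return v

ALPHA = 0.05
GAIN = 2.0  # assumed amplification of dopamine bursts for self-chosen outcomes

v_chosen = learn_value(0.7, ALPHA * GAIN, ALPHA)  # outcomes followed own choices
v_given = learn_value(0.7, ALPHA, ALPHA)          # identical odds, outcomes shown

# Amplified positive updates pull the chosen option's value higher, even
# though both options delivered rewards at exactly the same rate.
print(v_chosen > v_given)  # True
```

The asymmetry only takes hold when rewards actually arrive, which is consistent with bias appearing for rewarding characters rather than unrewarding ones.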
Notably, they exhibited no choice bias between unrewarding characters, suggesting that choice bias emerges only in relation to reward, one of the key predictions of their model. But they wanted to test further whether the impact of reward on choice bias was related to the proposed biological mechanism: that striatal dopaminergic learning is enhanced for chosen rewards.

A gene makes the difference
The genetic tests focused on single-letter differences in a gene called DARPP-32, which governs how well cells in the striatum respond to the reinforcing influence of dopamine.
People with one version of the gene have been shown in previous research to be more responsive to rewards during learning, while people with other versions were less driven by reward.
“The reason why this gene is interesting is because we know something about the biology of what it does and where it is expressed in the brain,” Frank says.
“It’s predominant in the striatum and specifically affects synaptic plasticity induced by dopamine signaling. It’s related to the imbalance by which you learn from really good things or not so good things.
“The logic was if the mechanism that we think describes this choice bias and credit assignment problem is accurate then that gene should predict the impact of how good something was on this choice bias phenomenon,” he says.
Indeed, that’s what the data showed. People with the form of the gene that predisposed them to be responsive to big rewards also showed more choice bias from the most strongly rewarded characters.
Interestingly, the other people also showed choice bias, but more strongly for those characters that were more mediocre. This pattern was mirrored by the authors’ model when it simulated the effects of DARPP-32 on reward learning imbalances from positive vs. negative outcomes.
The National Institute of Mental Health funded the study, which appears in Neuron.
Source: Brown University
Architecture, interior design, and other physical aspects of where new registered nurses work can enhance their job satisfaction, a new survey shows.
Job satisfaction can predict registered nurses’ job turnover, patient satisfaction, and nurse-sensitive patient outcomes (including pressure ulcers and falls), which can result in higher health care costs and penalties for hospitals that receive Medicare and Medicaid payments.
The study, in the current issue of Research in Nursing & Health, reveals that while physical environment had no direct influence on job satisfaction, it did have a significant indirect influence because the environment affected whether nurses could complete tasks without interruptions, communicate easily with other nurses and physicians, and/or do their jobs efficiently.
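A direct effect near zero alongside a significant indirect effect is the signature of statistical mediation. A minimal sketch of that logic, using simulated data rather than the study’s survey responses (all coefficients are assumed):

```python
import random

# Hedged sketch of mediation logic on simulated data (not the study's actual
# analysis or numbers): environment shapes workflow, and workflow, not the
# environment directly, drives satisfaction.

def ols_slope(x, y):
    """One-predictor least-squares slope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

rng = random.Random(42)
n = 500
environment = [rng.gauss(0, 1) for _ in range(n)]
# Path a: a better physical environment eases uninterrupted, efficient work.
workflow = [0.8 * e + rng.gauss(0, 0.5) for e in environment]
# Path b: satisfaction depends on workflow, with no direct environment term.
satisfaction = [0.7 * w + rng.gauss(0, 0.5) for w in workflow]

a = ols_slope(environment, workflow)       # environment -> workflow
b = ols_slope(workflow, satisfaction)      # workflow -> satisfaction
total = ols_slope(environment, satisfaction)

# The indirect effect (a * b) accounts for essentially all of the total
# effect, mirroring "no direct influence, significant indirect influence."
print("indirect:", round(a * b, 2), "total:", round(total, 2))
```

In the simulated setup the environment matters for satisfaction only through the workflow variable, which is the shape of relationship the survey results describe.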
The research team conducted a nationwide survey of RNs to examine the relationship between RNs’ physical work environment and job satisfaction.
They found that RNs who gave their physical work environments higher ratings were also more likely to report better workgroup cohesion, nurse-physician relations, workload, and other factors associated with job satisfaction.

A good investment
“Clearly, the physical work environment can affect nurses’ ability to do their jobs effectively and efficiently,” says Maja Djukic, assistant professor at the New York University College of Nursing.
“The right environment facilitates nurses’ work, which increases their job satisfaction, which in turn reduces turnover. All of those improve patient outcomes.
“When investing in facilities’ construction or remodeling, health care leaders should look at features that enhance workgroup cohesion, nurse-physician relations, and other factors that affect job satisfaction. Those investments will pay off in the long run.”
The researchers measured job satisfaction in terms of procedural justice, autonomy, nurse-physician relationships, distributive justice, opportunities for promotion, workgroup cohesion, and variety in one’s job.
Physical environment was assessed based on the architectural, ambient, and design features of the workspace, including crowdedness, ventilation, lighting, arrangement of furniture, colors and decorations, aesthetic appearance, and the need for remodeling.
“This study supports our previous findings, which indicate that investing in improving nurses’ work environments is extremely worthwhile,” says Professor Christine Kovner.
“We’d suggest that future studies delve into which aspects of the physical work environment best support the factors that enhance nurses’ job satisfaction.”
Carol Brewer, professor at the School of Nursing at the University at Buffalo, co-led the study.
The study is based on a 98-question survey of 1,141 RNs, which is part of the Robert Wood Johnson Foundation’s RN Work Project, a nationwide, 10-year longitudinal survey of RNs begun in 2006 by Kovner and Brewer.
Nurses who completed the survey were licensed for the first time by exam between August 1, 2004, and July 31, 2005, in 34 states and the District of Columbia.