Scientists have developed a way to more accurately forecast nitrogen’s effects on the climate system. Incorporating this method suggests the planet may be headed for a warmer future than previously thought.
According to the researchers, models used by the Intergovernmental Panel on Climate Change until now have not provided realistic predictions of nitrogen emissions from the land to the air and water. Of the 12 climate models used by the IPCC, only one included nitrogen, and that model was not tracking nitrogen correctly.
“Our benchmarking methods will provide a way for all the models to include nitrogen, to communicate with each other, and to reduce their overall uncertainty,” says lead author Benjamin Houlton, a professor in the department of land, air and water resources at University of California, Davis.
“Including this information will likely reveal that the climate system is more sensitive than we anticipate, and it likely will be a warmer world than we think.”
The scientists identified the isotopic “fingerprints” of nitrogen, tracing its journey to model how nitrogen moves through ecosystems and how it escapes to the air or water. Reported in Nature Climate Change, the benchmarking technique is now being put into global models used by the IPCC.

‘Too much of a good thing’
Nitrogen is a critical component of climate change. It determines how much carbon dioxide emissions natural ecosystems can absorb, and it directly warms the climate as nitrous oxide in the atmosphere. It occurs naturally in the air and water and also enters the environment through human-made agricultural fertilizers.
The benchmarking technique Houlton and his colleagues developed will help examine the fate of nitrogen fertilizers in the environment as well as the climate impacts of nitrogen.
“Nitrogen is a challenge facing humanity,” Houlton says. “It’s becoming too much of a good thing.
“We will hear more and more about nitrogen’s impact on human health and the environment in the future, but developing a more sophisticated scientific understanding of the nitrogen cycle is essential to provide policymakers, stakeholders, and the public better information to make decisions.”
The study’s coauthors include former UC Davis postdoctoral students Alison Marklein and Edith Bai. Funding came from the Andrew W. Mellon Foundation and the National Science Foundation.
Source: UC Davis
Every time you make a memory, somewhere in your brain a tiny filament reaches out from one neuron and forms an electrochemical connection to a neighboring neuron.
The filaments that make these new connections are called dendritic spines and, in a series of experiments described in the Journal of Biological Chemistry, a team of researchers reports that a specific signaling protein, Asef2, a member of a family of proteins that regulate cell migration and adhesion, plays a critical role in spine formation.
This is significant because Asef2 has been linked to autism and the co-occurrence of alcohol dependency and depression.

Flexible filaments
“Alterations in dendritic spines are associated with many neurological and developmental disorders, such as autism, Alzheimer’s disease, and Down syndrome,” says study leader Donna Webb, associate professor of biological sciences at Vanderbilt University. “However, the formation and maintenance of spines is a very complex process that we are just beginning to understand.”
Neuron cell bodies produce two kinds of long fibers that weave through the brain: dendrites and axons. Axons transmit electrochemical signals from the cell body of one neuron to the dendrites of another neuron. Dendrites receive the incoming signals and carry them to the cell body. This is the way that neurons communicate with each other.
As they wait for incoming signals, dendrites continually produce tiny flexible filaments called filopodia. These poke out from the surface of the dendrite and wave about in the region between the cells searching for axons. At the same time, biologists think that the axons secrete chemicals of an unknown nature that attract the filopodia.
When one of the dendritic filaments makes contact with one of the axons, it begins to adhere and to develop into a spine. The axon and spine form the two halves of a synaptic junction. New connections like this form the basis for memory formation and storage.

Immature or missing spines
Autism has been associated with immature spines, which do not connect properly with axons to form new synaptic junctions. By contrast, a reduction in spines is characteristic of the early stages of Alzheimer’s disease. This may help explain why individuals with Alzheimer’s have trouble forming new memories.
The formation of spines is driven by actin, a protein that produces microfilaments and is part of the cytoskeleton. Webb and her colleagues showed that Asef2 promotes spine and synapse formation by activating another protein called Rac, which is known to regulate actin activity. They also discovered that yet another protein, spinophilin, recruits Asef2 and guides it to specific spines.
“Once we figure out the mechanisms involved, then we may be able to find drugs that can restore spine formation in people who have lost it, which could give them back their ability to remember,” says Webb.
The National Institutes of Health and National Center for Research Resources supported the work.
Source: Vanderbilt University
Children who are sensitive to the thoughts and feelings of others are more popular on the playground, report researchers.
Published in the journal Child Development, the study finds preschoolers and elementary school children who are good at identifying and responding to what others want, think, and feel are rated by their peers or teachers as being popular at school.
Understanding the mental perspectives of others could facilitate the type of interactions that help children become or remain popular, according to Professor Virginia Slaughter, head of the School of Psychology at University of Queensland.
“Our findings suggest that training children to be sensitive to the thoughts and feelings of others may improve their relationships with peers,” says Slaughter.
“This could be particularly important for children who are struggling with friendship issues, such as children who are socially isolated.”
Slaughter led the meta-analysis, which examined 20 studies addressing the relation between theory of mind and popularity to determine if there was a direct correlation between identifying the needs and wants of others and being popular.
The studies involved more than 2,000 children aged 2 to 10 years across Asia, Australia, Europe, and North America.
Popularity was measured by anonymous ratings from classroom peers and teachers.
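The pooling step in a meta-analysis like this can be sketched with a simple fixed-effect model: each study’s correlation between theory-of-mind scores and popularity ratings is converted to Fisher’s z, weighted by sample size, and back-transformed. This is a minimal sketch; the correlations and sample sizes below are hypothetical, and the paper’s actual weighting scheme may differ.

```python
import math

def pooled_correlation(studies):
    """Fixed-effect pooled correlation via Fisher's z transform.

    studies: list of (r, n) pairs, a per-study correlation and its sample size.
    Each z is weighted by n - 3, the inverse variance of Fisher's z.
    """
    weighted_z = 0.0
    total_weight = 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher z transform of the correlation
        w = n - 3           # inverse-variance weight
        weighted_z += w * z
        total_weight += w
    return math.tanh(weighted_z / total_weight)  # back-transform to r

# Hypothetical (correlation, sample size) pairs for three studies
studies = [(0.30, 120), (0.15, 80), (0.25, 200)]
print(round(pooled_correlation(studies), 3))  # pooled r, dominated by larger studies
```

Because the weights grow with sample size, a large study with a moderate correlation pulls the pooled estimate more than a small study with a strong one.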
“The meta-analysis has allowed us to look at the findings across multiple studies and confirm there is a direct link between theory of mind and popularity in children,” says Slaughter.
“The ability to tell what others are thinking, feeling, and wanting is a basic precursor to emotional intelligence in adults.
“Understanding the mental perspectives of others is important both for making friends in the early school years and in maintaining friendships as children grow older.”
The study also found this link to be weaker for boys than girls, suggesting gender differences in how children relate to each other.
“Girls’ friendships tend to be more interpersonally oriented,” says Slaughter.
“Whereas boys may resolve a conflict by wrestling each other, girls often work out their differences through negotiation and that requires an understanding of the other person’s perspective.”
Source: University of Queensland
Scientists may have found a way to save plants from insect attack by recreating a naturally occurring repellent.
The researchers created tiny molecules that mimic a naturally occurring smell known to repel insects, by supplying the enzyme that produces the smell, (S)-germacrene D synthase, with alternative substrate molecules.
For their new study, published in the journal Chemical Communications, researchers tested the effectiveness of the smell or perfume as an insect repellent and discovered that the smells repelled insects—but in one case the molecules actually attracted them. The findings raise the possibility of being able to develop a trap-and-kill device.

Better smells
“We know that many organisms use smell to interact with members of the same species and to locate hosts of food or to avoid attack from parasites,” says Rudolf Allemann, professor of chemistry at Cardiff University.
“However, the difficulty is that smell molecules are often extremely volatile, chemically unstable, and expensive to recreate. This means that, until now, progress has been extremely slow in recreating smells that are similar to the original.
“Through the power of novel biochemical techniques we have been able to make insect repellent smell molecules which are structurally different but functionally similar to the original,” he says.
“This is a breakthrough in rational design of smells and provides a novel way of producing a smell with different properties and potentially better ones than the original but at the same time preserving the original activity,” says Professor John Pickett from Rothamsted Research.
“By using alternative substrates for the enzymes involved in the ligand biosynthesis (biosynthesis of the smell) we can create the appropriate chemical space to reproduce, with a different molecular structure, the activity of the original smell.”
The team hopes that their research will provide a new way of designing and developing small smell molecules which would otherwise be too difficult to produce by usual scientific and commercial methods.
Source: Cardiff University
New research turns past theories about the Tibetan Plateau upside down. By condensing a period lasting millions of years, a simulation offers a new account of how the high-lying valleys at its southeastern end formed.
Located in Tibet and the Chinese province of Yunnan, the southeast Tibetan plateau is an extraordinary mountainous region. The high peaks are rugged and steep, reaching more than 7,000 meters (nearly 23,000 feet) in height.
Major rivers, including the Yangtze, Mekong, and Salween, have significantly eroded the bedrock. Nestled amongst the mountain ridges are beautiful high valleys with gentle hills, large lakes, and meandering rivers.

Did the lowlands rise up?
It was previously assumed that these high valleys were relict landscapes, originating in the lowlands that lie at the foot of the Himalayas. According to this model, the continental collision and corresponding rise of the Tibetan plateau led to parts of the lowlands being lifted to their current elevation of between 2,000 and 5,000 meters (roughly 6,500 to 16,400 feet) above sea level. The characteristics of the landscape are therefore thought to be preserved there.
Using a new computer model, geologists at ETH Zurich have now simulated the formation of these high-lying valleys. The simulations are a type of time-lapse that lets the geologists track the geological processes of the past 50 million years. As they report in Nature, the results of the study led them to a completely different conclusion.
In their simulation, they were not able to reproduce preserved lowlands being uplifted into the highlands. Instead, they demonstrated that the gentle, high valleys developed in place (in situ), through a disruption of the river network induced by tectonic movement.

High-up valleys
In this simulation, the northeastern corner of the Indian Plate—today located in the province of Yunnan—pushes against the Asian plate and strongly “indents” the eastern Himalaya and Tibetan plateau. This results in large strain, accommodated by a succession of earthquakes that deform the Earth’s surface along faults that cut across the landscape.
The deformation forces rivers to change course, shifting flow direction, and in some cases, disrupting watercourses such that rivers lose part of their catchment area.
If, for example, a tributary disappears, the remaining river will carry less water. The river’s capacity for erosion, or for the transport of sediment, goes down, resulting in a lower steepness to the river. The rate of erosion also goes down along the adjacent slopes, because the river undercuts hillslopes less aggressively. As a result, they erode less quickly, with hillslopes being less steep and landslides becoming a less frequent occurrence.
Over the course of millions of years, this process leads to the formation of landscapes in the mountains that resemble those in the lowlands at lower elevation.
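The feedback described above, where losing drainage area reduces a river’s erosive power, is commonly captured in landscape evolution models by the detachment-limited stream-power law, E = K * A^m * S^n, with A the upstream drainage area (a proxy for discharge) and S the channel slope. The sketch below is illustrative only: K, m, and n are assumed textbook values, not parameters from the ETH model.

```python
def stream_power_erosion(K, A, S, m=0.5, n=1.0):
    """Detachment-limited stream-power incision law: E = K * A**m * S**n.

    A is upstream drainage area (a proxy for discharge), S is channel slope.
    K, m, and n are empirical constants; m = 0.5, n = 1 are common choices.
    """
    return K * A ** m * S ** n

K = 1e-5                 # erodibility constant (illustrative value)
S = 0.02                 # channel slope
A_full = 1e9             # drainage area before capture, in square meters
A_beheaded = A_full / 4  # the river loses three quarters of its catchment

e_before = stream_power_erosion(K, A_full, S)
e_after = stream_power_erosion(K, A_beheaded, S)
print(e_after / e_before)  # with m = 0.5, quartering A halves the erosion rate
```

A beheaded river thus incises more slowly at the same slope, which is how, over millions of years, relief can decay in place into the gentle valley forms the study describes.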
For Professor Sean Willett, coauthor of the study, the results are clear: “Our simulations explicitly show that the high-lying valleys must have developed in place. They are not remnants of former lowlands.”
As for glaciers creating the gentle shapes, Willett rules this possibility out as well. According to Willett, glaciation in the study region was limited to the highest summits. Although he states that this may have assisted in the erosion of high mountains and slopes, he concludes that rivers are solely responsible for the formation of the valleys.

The Alps, too?
The results of the study do not apply exclusively to the eastern Himalayas; they offer revealing insights into other mountainous regions as well. Willett highlights the Engadin as an example from the Swiss Alps.
The valley floor is high above sea level, but is flatter than would be expected from a purely glacial valley. There is much to suggest that the high-lying valley in the Engadin could have been formed in place at high altitudes, just like those in the highlands of southeastern Tibet.
“The Maloja Pass is not a normal pass because there is no incline, no steep headwall, on the Engadin side,” Willett says. “It is as if the head of the valley has been cut off.”
Whether the Engadin and other high-lying alpine valleys actually conform to the geoscientists’ current model will be clarified in an upcoming study.
Source: ETH Zurich
A new study is the first to show that common inherited genetic variants influence life expectancy in patients with colorectal cancer (CRC).
A team from Cardiff University’s School of Medicine analyzed over 7,600 patients with CRC from 14 different centers across the UK and the US. They found that a genetic variant in the gene CDH1 (encoding E-cadherin) was strongly linked to survival.
Having combined data of both inherited genetic variations and variations found within the cancers, the scientists believe that the resulting information will play a crucial role in managing patient survival.
“Our findings show that patients carrying a specific genetic variant, which is found in about 8 percent of patients, have worse survival, with a decrease in life expectancy of around four months in the advanced disease setting,” says study leader Professor Jeremy Cheadle.
“This work shows the potential use of genetic variants to help provide clinically useful information to patients suffering from colon cancer,” says Lee Campbell, science projects and research communications manager from Cancer Research Wales, which part-funded the study.
“Not only does this important piece of research allow clinicians to make more informed treatment decisions for individuals in the future, but it also has the capability to enhance existing screening or post-operative surveillance programs for this disease.”
“This represents a critical first step to improving colorectal cancer patient outcomes through a greater understanding of the influencing genetic factors,” adds Ian Lewis, director of research and policy at Tenovus Cancer Care.
The Bobby Moore Fund from Cancer Research UK, Tenovus Cancer Care, the Kidani Trust, Cancer Research Wales, and the National Institute for Social Care and Health Research Cancer Genetics Biomedical Research Unit (2011-2015) supported the work.
The findings are available in Clinical Cancer Research.
Source: Cardiff University
A new study shows neurons are more independent than previously believed. The findings may have implications for a range of neurological disorders, as well as our understanding of how nerve cells in the brain generate the energy they need to function.
“These findings suggest that we need to rethink the way we look at brain metabolism,” says Maiken Nedergaard, co-director of the University of Rochester Center for Translational Neuromedicine and lead author of the study. “Neurons, and not the brain’s support cells, are the primary consumers of glucose and this consumption appears to correlate with brain activity.”
The brain requires a tremendous amount of energy to do its job. While it only represents 2 percent of the body mass of the average adult human, the brain consumes an estimated 20 percent of the body’s energy supply.
Consequently, unraveling precisely how the brain’s cells—specifically, neurons—generate energy has significant implications not only for the understanding of basic biology, but also for neurological diseases that may be linked to too little, or too much, metabolism in the brain.

The ‘lactate shuttle hypothesis’
Our digestive system converts carbohydrates found in food into glucose, a sugar molecule that is the body’s main source of energy, which is then transported throughout the body via the blood system. Once inside cells, the mitochondria, which serve as tiny cellular power plants, combine these sugars with oxygen to generate energy.
Unlike the rest of the body, the brain maintains its own unique ecosystem. Scientists have long believed that a support cell found in the brain, called the astrocyte, played an intermediary role in the supplying neurons with energy. This theory is called the lactate shuttle hypothesis.
Scientists have speculated that the astrocytes are the brain’s primary consumer of glucose and, like a mother bird that helps her chicks digest food, these cells convert the molecules to a derivative called lactate before passing it along to the neurons. Lactate is a glucose-derived molecule that mitochondria can use for fuel.
“The problem with the lactate shuttle hypothesis is that by outsourcing lactate production to astrocytes, it places the neuron in a dangerous position,” says Nedergaard. “Why would neurons, the cell type that is most critical for our survival, be dependent upon another cell for its energy supply?”

Neurons in action
The new research, which was conducted in both mice and human brain cells, was possible due to a new imaging technology called 2-photon microscopy, which enables scientists to observe activity in the brain in real time.
Using a glucose analog, the researchers found that it was the neurons, and not the astrocytes, that directly take up more glucose in the brain. They also found that when stimulated and more active, the neurons increase consumption of glucose, and when the mice were anesthetized, there was less neuronal uptake of glucose. On the other hand, the uptake of glucose by astrocytes remained relatively constant regardless of brain activity.
On the cellular level, the researchers observed that the neurons were doing their own job of converting glucose to lactate, and that an enzyme that plays a key role in the creation of lactate, called hexokinase, was present in greater amounts in neurons than in astrocytes.

Stroke and Alzheimer’s
These findings have significant implications for understanding a host of diseases. The overproduction of lactate can result in lactic acidosis, which can damage nerve cells and cause confusion, delirium, and seizures. In stroke, lactate accumulation contributes to the loss of brain tissue and can impact recovery. Neuronal metabolism also plays an important role in conditions such as Alzheimer’s and other neurodegenerative diseases.
Recent research has shown that inhibiting the transport of lactate between cells can reduce seizure activity in mice. However, much of this prior work has assumed that lactate was produced by astrocytes and that neurons were passive bystanders.
The new study brings into question these assumptions by showing that neurons consume glucose directly and do not depend on astrocytic production and delivery of lactate.
“Understanding the precise and complex biological mechanisms of the brain is a critical first step in disease-based research,” says Nedergaard. “Any misconception about biological functions—such as metabolism—will ultimately impact how scientists form hypotheses and analyze their findings. If we are looking in the wrong place, we won’t be able to find the right answers.”
Additional authors contributed from the University of Rochester and the University of Copenhagen. The National Institute of Neurological Disorders and Stroke and the Novo Nordisk Foundation supported the work, which appears in Nature Communications.
Source: University of Rochester
Where children live early in their lives can have a lasting impact on their ability to handle stress later, a new study with children in Romanian orphanages shows.
The research, believed to be the first to identify a sensitive period during early life when children’s stress response systems are particularly likely to be influenced by where they are cared for, also shows that the negative effects of a deprived environment can be mitigated by changing it—but only if that happens before the child turns 2.
“The early environment has a very strong impact on how the stress response system in the body develops,” says lead author Katie McLaughlin, assistant professor of psychology at University of Washington. “But even kids exposed to a very extreme negative environment who are placed into a supportive family can overcome those effects in the long term.”

‘Extreme form of early neglect’
Published in the Proceedings of the National Academy of Sciences, the study focuses on children who spent the first years of their lives in Romanian orphanages and others who were removed from orphanages and placed in foster care. The institutionalized children had blunted stress system responses—for example, smaller increases in heart rate and blood pressure during stressful tasks and lower production of cortisol, the primary hormone responsible for the stress response.
By comparison, children who were removed from the institutions and placed with foster parents before the age of 24 months had stress system responses similar to those of children being raised by families in the community.
“Institutionalization is an extreme form of early neglect,” McLaughlin says. “Placing kids into a supportive environment where they have sensitive, responsive parents, even if they were neglected for a period of time early in life, has a lasting, meaningful effect.”
The research is part of the Bucharest Early Intervention Project, launched in 2000 to study the effects of institutionalization on brain and behavior development among some of the thousands of Romanian children placed in orphanages during dictator Nicolae Ceausescu’s reign.

Fight-or-flight response
Researchers tested 138 children at about age 12 from three groups: those who had spent several years in institutions, others who were removed from institutions and placed into high-quality foster care, and children raised in families living in areas near the institutions.
The children placed into foster care were moved at between six months and three years of age. Those left in institutions remained there for varying amounts of time before eventually being adopted, reunited with their biological parents, or placed in government foster care after policies around institutionalization changed in Romania.
During the tests, children were asked to perform potentially stressful tasks including delivering a speech before teachers, receiving social feedback from other children, and playing a game that broke partway through. Researchers measured the children’s heart rate, blood pressure, and several other markers including cortisol.
The children raised in institutions showed blunted responses in the sympathetic nervous system, associated with the fight-or-flight response, and in the HPA axis, which regulates cortisol. A dulled stress response system is linked to health problems including chronic fatigue, pain syndromes, and autoimmune conditions, as well as aggression and behavioral problems.

Physical and mental health
“Together, the patterns of blunted stress reactivity among children who remained in institutional care might lead to heightened risk for multiple physical and mental health problems,” the researchers write.
It’s difficult to say for certain why the children’s stress response systems were blunted. It’s possible that since they endured such extreme stress early in life, the tasks the researchers put them through were relatively benign in comparison and thus did not evoke a strong response.
More significantly, McLaughlin says, their stress response systems might have been initially hyperactive at earlier points in development, then adapted to high levels of stress hormones by reducing the number of receptors in the brain that stress hormones bind to.
“If we’d been able to measure their stress systems early in life, we would expect to find very high levels of stress hormones and stress reactivity.”
The study also found that children raised in the orphanages had thinner brain tissue in areas linked to impulse control and attention, and less gray matter overall.
The children involved in the study are now about 16 years old, and researchers next plan to investigate whether puberty has an impact on their stress responses. It could have a positive effect, McLaughlin says, since puberty might represent another sensitive period when stress response systems are particularly tuned to environmental inputs.
“It’s possible that the environment during that period could reverse the impacts of early adversity on the system,” she says.
Researchers from Harvard Medical School, Tulane University, and University of Maryland are coauthors of the study.
Source: University of Washington
A long harsh winter is over and spring has arrived. But allergies, as well as flowers, are blooming.
May is National Asthma and Allergy Awareness Month. Tao Zheng, chief of the allergy and immunology section at Yale University School of Medicine, discussed what to expect from this allergy season and new advances in allergy treatments with university writer Ziba Kashef.

What do people need to know about the 2015 spring allergy season? How does it compare to previous years?
In spring and summer, many people are vulnerable to tree pollen and grass allergies. Trees and flowers all seem to be blooming at once, and that means a sudden burst of different types of pollen at the same time.
We are predicting that this allergy season may be one of the worst in years. In Connecticut and the Northeast, beginning in February and lasting until June, several types of trees—particularly birch, maple/box elder, oak, juniper/cedar, and pine—produce pollen that can trigger allergy symptoms.

How are seasonal allergies typically treated?
Allergic respiratory diseases—allergic rhinitis, sometimes referred to as hay fever, and allergic asthma—are inflammatory diseases that cause sneezing, itchy or watery eyes, an itchy or runny nose, congestion, cough, and wheezing.
For millions of sufferers, antihistamines and nasal corticosteroid medications provide temporary relief of symptoms. For others, allergy shots (allergen subcutaneous immunotherapy [SCIT]) are a treatment option that can provide long-term relief.

A new oral allergy immunotherapy was approved by the FDA last year. How does it work?
SCIT has proven efficacy in treating allergic rhinoconjunctivitis and asthma, but it requires regular injections at a clinician’s office, typically over a period of three to five years. Another form of allergy immunotherapy was recently approved in the United States called sublingual immunotherapy (SLIT) allergy tablets.
Rather than shots, SLIT involves administering the allergens in a liquid or tablet form under the tongue generally on a daily basis. SLIT is similar to SCIT in terms of effectiveness, and both have been shown to provide long-term improvement even after the treatment has ended.
However, the treatment is only effective for the allergen contained in SCIT or allergy tablets. If an individual is allergic to ragweed and trees, the ragweed tablets/shots would only help control symptoms during ragweed season. Allergy tablets have a more favorable safety profile than SCIT, which is why they do not need to be given in a medical setting after the first dose.
The primary side effects of allergy tablets are local reactions such as itching or burning of the mouth or lips and, less commonly, gastrointestinal symptoms. These reactions usually stop after a few days or a week.

What else can people do to survive allergy season — lifestyle changes or home remedies?
Spring is the time for warm weather and outdoor activities. To minimize pollen exposure:
- Keep windows closed during pollen season, especially during the day
- Stay indoors during midday and afternoon hours when pollen counts are highest
- Take a shower, wash hair, and change clothing after working or playing outdoors to remove allergens that collect on clothes and hair
- Wear a mask when doing outdoor chores like mowing the lawn
If you need help to prevent or control your allergies, talk to your doctor about seeing an allergist who can discuss the best treatment options available for you, which may include oral medications, topical nasal sprays, eye drops, or allergy shots and allergy tablets.
Source: Yale University
Growing grain in clear plastic pots may be a way to counter drought, which is expected to become more severe and more frequent worldwide.
The inexpensive and simple technique, which allows scientists to see through the pot wall and view the roots of the plant, could lead to grain crops such as wheat that are better adapted to drought conditions.
“Crop improvement for drought tolerance is a priority for feeding the growing human population,” says Cecile Richard, a PhD candidate at the Queensland Alliance for Agriculture and Food Innovation at University of Queensland.

Better roots
“Roots allow plants to access water stored in the soil and are crucial for reliable crop production. Even when rain is scarce, water is often still available deep in the soil. By increasing the length and number of roots, we can boost access to water and safeguard the crop.”
The new method, described in the journal Plant Methods, will allow scientists to combine favorable root characteristics in new wheat varieties that could improve the plant’s access to water—resulting in better yield stability and productivity under drought conditions.
“The roots are growing around the wall of the clear pot and it’s possible to measure different characteristics such as the angle and number of roots, based on images captured at ten days after sowing,” she says.
“These characteristics reflect the root growth pattern displayed by wheat in the field, which is important for the plant to access water.”
Previous techniques used for measuring roots had been time consuming and expensive, Richard says.
“Planting wheat seeds around the rim of a clear-plastic pot to measure root characteristics has never been tried before. This method is easy, cheap, and rapid.”
The technique could help boost global wheat production and speed up selective breeding for drought-tolerant wheat strains.
“We hope to use the clear-pot technique to rapidly discover the genes responsible for these important root characteristics,” Richard says.
The Grains Research and Development Corporation of Australia funded the research in part.
Source: University of Queensland
Helping teenagers deal with online risks, rather than trying to keep them offline, may be a more practical and effective way to keep them safe.
In a study, more resilient teens were less likely to suffer negative effects even if they were frequently online, says Haiyan Jia, postdoctoral scholar in information sciences and technology at Penn State.
“Internet exposure does not necessarily lead to negative effects, which means it’s okay to go online, but the key seems to be learning how to cope with the stress of the experience and knowing how to reduce the chances of being exposed to online risk,” Jia says.
The researchers say that previous research tended to focus on limiting online use as a way to minimize risks of privacy violations and traumatic online experiences, such as becoming the victim of cyber-bullying and viewing unwanted sexual materials.
However, with online technologies becoming more ubiquitous and a greater part of teens’ social and educational lives, abstinence may be less practical and even counterproductive.
“Let’s assume that teens are going to deal with some online risk,” says Pamela Wisniewski, a postdoctoral scholar in information sciences and technology. “If risk is going to be present, we want to make sure to minimize the negative outcomes and make sure the teens are equipped to handle these experiences.”
Not allowing teens to use the internet has its own risks, she adds.
“As much as there are negatives associated with online use, there are also a lot of benefits to using online technologies,” says Wisniewski. “Parents should be aware that restricting online use completely could hurt their children educationally and socially.”

Learn to be resilient
Both parents and technology companies may be able to help teens become more resilient, according to the researchers, who released their findings at the Computer Human Interaction conference in Seoul, South Korea.
Teens who are exposed to minimal risks can, over time, develop coping strategies and be more resilient as new, more risky situations arise.
“For example, let’s say a teen girl is surfing online and one of her online friends asks for a nude photo,” says Jia. “If a teen doesn’t know how to deal with this, she might just succumb to the pressure and send the photo, and then suffer all kinds of stress and anxiety as a result, but if she builds up her resilience, she knows how to deal with the situation, she knows how to say no and prevent exposing herself to this risk.”
The researchers suggest that technology companies that create cyber-security software could design software solutions that alert teens to risky behavior in order to avoid relying solely on parental monitoring software that restricts certain websites and social media sites.
“You don’t want to parent strictly based on fear, you want to parent based on empowerment,” says Wisniewski.
The researchers examined the responses of 75 teens, including 46 girls and 29 boys between 13 and 17 years old, to questions about how they used the internet and what problems, if any, they encountered.
To determine how excessive exposure to the internet influenced negative outcomes, they analyzed teens who were at risk of internet addiction. While there was a significant correlation between internet addiction and negative effects, more resilient teens were less likely to suffer negative consequences from extreme online exposure, according to the researchers.
The National Science Foundation supported this work.
Source: Penn State
The concept of peer review is central to National Institutes of Health (NIH) funding and to science itself—journals choose articles for publication based on fellow scientists’ scrutiny. The idea is to weed out weak research and ensure that only the strongest science goes forward. But does it work?
The NIH is the major funder of biomedical research in the United States, distributing some $30 billion to scientists each year. To decide who gets money, the NIH subjects grant proposals to a rigorous system of peer review: each proposal goes to a committee of scientists familiar with the area of research. The committee, called a study section, reviews the proposal and gives it a score. The NIH funds proposals in order of their score until the budget for that year runs out.
To find out if the peer review system is effective, Leila Agha, an assistant professor at Boston University’s Questrom School of Business in the markets, public policy, and law department, started to investigate.
In a report published in Science, Agha and her coauthor, Harvard Business School’s Danielle Li, picked peer review apart, trying to see if the system really rewarded the best proposals, or if it simply favored “rock star” scientists from big-name institutions.
Agha recently discussed her findings with Boston University writer Barbara Moran:

Why did you choose to study NIH peer review?
There has been considerable debate in recent years about how successfully the NIH is allocating its resources, particularly as the budget has become tighter and funding has become more and more competitive.

What is the debate about?
There have been a few critiques of peer review. One says maybe peer review can weed out weak proposals, but it’s not very good at identifying the really path-breaking research—maybe it’s unintentionally weeding out those risky projects that have the potential to really change the field of research.

I’ve heard a lot of scientists say that—that the more conservative proposals get funded.
That’s right. So that was one issue we wanted to investigate. Another critique that you sometimes hear is that the review committee is not reading the details of the proposals to figure out which are the most promising. The concern is that proposals with star scientists or elite institutions associated with them will get funded. It doesn’t have to do with the content of the science; it has to do with how important or famous the person already is.

That’s another common complaint among scientists.
Exactly, particularly among early-career investigators. We tracked 130,000 grants funded by the NIH between 1980 and 2008, and looked at the number of publications that came from that research, the number of citations to those publications, and whether there were follow-on patents.
We matched this data to a lot of information about each investigator, like: What is their institutional affiliation? How many citations and publications have they had in the past? How many of those have been “big hits” (highly cited)? Have they been successful at getting NIH grants in the past? How experienced are they? When did they receive their MD or PhD? And by controlling for all those factors, we’re able to refine our measure of what committees are doing.

One of the things you controlled for actually has a name, the Matthew effect. What is that?
It’s a sociological idea that the rich get richer and the poor get poorer; it’s named for a Biblical reference. In this context, it is not about money per se, but this idea that someone who’s already famous might receive a better score on their grant application and garner a lot of citations even if their research isn’t necessarily better. We wanted to investigate whether committees were generating insight about the quality of the proposed research rather than solely rewarding past successes.

It seems so difficult to control for all those things: the prestige of the person, the status of the institution.
We start out not controlling for anything, but ask simply: What’s the relationship between the score and the research outcome?
And then we say, okay, let’s statistically control for the field of study, because different fields have very different citation patterns. So that’s straightforward. And then we say, okay, let’s control for the publication history of the principal investigator—how well published was he or she in the past, before submitting the grant.
Then we add in controls for the career characteristics, and controls for grantsmanship skill—did he or she have NIH grants in the past—and then we control for the type of institution—how elite it is. And what we show is that as we add these successive controls, it doesn’t attenuate the relationship very much between scores and grant outcomes.

So in lay terms, that means peer review seems to work?
Peer reviewers seem to be contributing expertise that rewards high-impact science, and this insight couldn’t be predicted solely from the investigator’s publication history, grant history, or other quantitative measures of past performance. So I’m not trying to say it’s the best of all possible methods, but I think it does refute some of the more stark critiques that, for example, reviewers completely fail to reward high-impact research, or that reviewers are just reacting to, say, the institution that the PI is at or how successful he or she has been at publishing in the past.

They say democracy is the worst form of government until you look at the other ones. People complain about peer review, but have there been alternate ideas?
Peer review is really the central model, which is why it’s so important that we understand how well it works. There are slight variations, but the fundamental idea of having a peer review committee allocate funding is not only done by the NIH but also the NSF and the European Research Council. It’s really the fundamental mechanism through which public money gets funneled to external researchers.

It’s also the fundamental way science is published, in general.
You’re exactly right. There are things that are special about the structure of NIH peer review committees, but I think that it can give us insight into what peer review can and can’t do. And I think that there actually has been relatively little research on it.
I think there has been real concern, as it becomes more competitive to get funding, that valuable research is being weeded out. And certainly I’m not arguing that doesn’t occur. However, it does seem that even among very well-scored grants, it’s still true that the grants in the top 1 percent or 2 percent are more likely to produce a high number of citations or “hit” publications than grants that are just slightly lower scored—for example, scored in the top 10 percent. And so even among the set of very, very strong applications, the committees are still able to discern some dimension of research potential that’s predictive of publication and patenting outcomes. So these results are interesting and encouraging for how we evaluate scientific work.

What do you hope will come out of this research?
It’s valuable to know that the process is, on some level, successful at identifying promising proposals. What we are not saying is that the peer review committees are in some sense infallible, or that they never make mistakes, or that this is the best possible allocation mechanism. There’s no other system that we were able to investigate and compare it to. It’s encouraging that peer review generates insight about research potential, but that doesn’t suggest it couldn’t be improved.
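The successive-controls approach Agha describes is, at bottom, a series of regressions. Below is a minimal sketch with entirely synthetic, hypothetical data (the variable names and effect sizes are invented for illustration): a reputation signal partly inflates the raw score-to-outcome relationship, and adding controls for field of study and publication history shows how much of the score’s predictive power survives.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Entirely synthetic, hypothetical data -- not the study's actual variables.
field = rng.integers(0, 5, n)            # field of study (5 categories)
past_pubs = rng.normal(0.0, 1.0, n)      # PI's past publication record
score = 0.5 * past_pubs + rng.normal(0.0, 1.0, n)        # score partly tracks reputation
citations = score + 0.3 * past_pubs + rng.normal(0.0, 1.0, n)

def ols_coefs(y, columns):
    """OLS coefficients for y regressed on an intercept plus the given columns."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Step 1: raw relationship between score and outcome.
b_raw = ols_coefs(citations, [score])[1]

# Step 2: add successive controls -- field-of-study dummies, then publication history.
field_dummies = (field[:, None] == np.arange(1, 5)).astype(float)
controls = [score] + [field_dummies[:, j] for j in range(4)] + [past_pubs]
b_controlled = ols_coefs(citations, controls)[1]

# If the score coefficient barely moves once controls are added, the score
# carries information beyond the PI's track record.
print(f"raw: {b_raw:.2f}  controlled: {b_controlled:.2f}")
```

In the study’s terms, the key diagnostic is whether the controlled coefficient is much smaller than the raw one; Agha and Li report that adding controls did not attenuate the score-outcome relationship very much.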
Source: Boston University
Discussing sexual history with a doctor can be an uncomfortable experience. But for many transgender people, the conversation never takes place.
Social stigma and a lack of affordable health care keep many from pursuing needed health care.
“There is evidence that health care providers do tend to be judgmental, and it’s unwelcoming,” says Adrian Juarez, a public health nurse and assistant professor in the University at Buffalo School of Nursing. “People will refrain from going to health care providers if they have to deal with stigma and discrimination.”

Transgender women of color
The findings are troubling considering nearly a third of transgender Americans are HIV-positive, according to a 2009 report from the National Institutes of Health. Transgender women of color are at even greater risk for HIV infection. More than 56 percent of black transgender women are HIV-positive.
“We don’t know enough about communities of color,” says Juarez. “Most trans research is done on the Euro-American population. While we have made some inroads at looking at African Americans, there is almost nothing coming out for Hispanic communities.”
The study examined HIV-risk data from the New York State Department of Health AIDS Institute, Evergreen Health Services in Buffalo, and International AIDS Empowerment in El Paso, Texas, a largely Latino community. Juarez also conducted interviews with members of the Buffalo and El Paso transgender communities.

Added risk factors
Apart from the stigma, another factor keeping transgender patients away from doctors is the inability to afford care. According to a 2011 report from the National Center for Transgender Equality, transgender people were four times more likely than the general population to live in extreme poverty—with a household income of less than $10,000 per year—and more than twice as likely to be homeless.
Finding work is yet another challenge. According to the National Center for Transgender Equality, 90 percent of more than 6,000 transgender people surveyed nationwide reported being the target of harassment, mistreatment, and discrimination at work.
“Imagine someone applies for a job and the employer isn’t accepting of their identity. They’re not going to get the job,” says Juarez. “But as human beings, we need to eat and shelter ourselves. So they turn to sex work. The risk factors just add on.”
In addition to gender discrimination, if a transgender patient of color does meet with a doctor for care, they also face the social stigma associated with being HIV-positive, where the victim is often blamed or judged for their actions.
Further, some health care providers are ill informed on how to treat transgender patients.
“It puzzles me how doctors will still refer to trans individuals by their biological name. That’s their identity,” says Juarez. But in health care treatment, the line between biological and identifying gender is not always clear.
Transgender women still require prostate screenings, and transgender men need a Pap smear, although a cautious health care provider may not offer the testing to avoid suggesting treatment that goes against the patient’s identity.
The work was funded in part by a Junior Investigator Award from the American Public Health Association.
Source: University at Buffalo
A recording taken directly from the brain of a 50-year-old man with tinnitus is giving scientists insight into which networks are responsible for the often debilitating condition.
About one in five people experiences tinnitus, the perception of a sound—often described as ringing—that isn’t really there. A new study reveals just how different tinnitus is from normal representations of sounds in the brain.
“Perhaps the most remarkable finding was that activity directly linked to tinnitus was very extensive and spanned a large proportion of the part of the brain we measured from,” says study co-leader Will Sedley of Newcastle University in the United Kingdom. “In contrast, the brain responses to a sound we played that mimicked [the subject’s] tinnitus were localized to just a tiny area.”

Not like normal sound
“This has profound implications for the understanding and treatment of tinnitus, as we now know it is not encoded like normal sound, and may not be treatable by just targeting a localized part of the hearing system,” says study co-leader Phillip Gander, postdoctoral research scholar in the neurosurgery department at the University of Iowa.
Gander and Sedley are members of the Human Brain Research Laboratory, which uses direct recordings of neural activity from inside humans’ brains to investigate sensory, perceptual, and cognitive processes related to hearing, speech, language, and emotion.
Experiments are possible because patients who require invasive brain mapping in preparation for epilepsy surgery also volunteer to participate in research studies. In this case, the patient was a 50-year-old man who also happened to have a typical pattern of tinnitus, including ringing in both ears, in association with hearing loss.
“It is such a rarity that a person requiring invasive electrode monitoring for epilepsy also has tinnitus that we aim to study every such person if they are willing,” Gander says. About 15 epilepsy surgery patients participate in the research each year.

‘Fill the gap’
“We are putting a recording platform into the patient’s brain for clinical purposes and we can modify it without changing the risk of the surgery. This allows us to understand functions in the brain in a way that is impossible to do with any other approach,” says Matthew Howard, the laboratory’s director.
In the new study, published in Current Biology, researchers contrasted brain activity during periods when tinnitus was relatively stronger and weaker. They found the expected tinnitus-linked brain activity, but discovered the unusual activity extended far beyond circumscribed auditory cortical regions to encompass almost all of the auditory cortex, along with other parts of the brain.
The findings add to the understanding of tinnitus and help explain why treatment has proven to be such a challenge, Gander says.
“The sheer amount of the brain across which the tinnitus network is present suggests that tinnitus may not simply ‘fill in the gap’ left by hearing damage, but also actively infiltrates beyond this into wider brain systems.”
These new insights may advance treatments such as neurofeedback, where patients learn to control their “brainwaves,” or electromagnetic brain stimulation, researchers say. A better understanding of the brain patterns associated with tinnitus may also help point toward new pharmacological approaches to treatment.
The National Institutes of Health and the Wellcome Trust and Medical Research Council in the UK supported the work.
Source: University of Iowa
New research with flies suggests that a good night’s sleep might be vital for retaining our capacity to learn and remember.
Leonie Kirszenblat, a PhD student at University of Queensland, says the research shows that increased sleep temporarily restored learning in flies with learning defects, leading to a 20 percent improvement on a memory task.
The researchers used different genetic and pharmacological methods to induce sleep in the flies, to show that it was sleep itself, rather than any specific drug or genetic pathway, that produced the improvement.
“One way we test and measure the flies’ memory is to use a visual learning task in which they must learn to avoid light that they are normally drawn to, by associating it with punishment,” says Associate Professor Bruno van Swinderen of the university’s Queensland Brain Institute.
“We test and measure their sleep by probing their responsiveness to different vibration intensities throughout successive days and nights, using a sophisticated computer interface we call DART: Drosophila ARousal Tracking.”

Mysterious sleep
The study results reinforce the therapeutic benefits of sleep, even if the different functions of sleep remain mysterious, says Kirszenblat.
“A lot of human disorders result in sleep problems. For instance, many Alzheimer’s disease patients report problems sleeping,” she says. “But in humans, it is difficult to determine causality: does bad sleep lead to cognitive disorders, or do these disorders cause bad sleep?”
The study used strains of flies with severe learning defects, or flies with memory problems that develop as they age.
“We forced them all to sleep for two days, and afterwards they all became normal learners,” says Kirszenblat.
“For example, we tested flies with a mutation in a gene called presenilin, which has been linked to early-onset Alzheimer’s disease, and we put the flies to sleep by activating GABA-A receptors in their brain—which humans also have.
“So it’s possible that simply by finding effective methods of promoting natural sleep, perhaps we will see some improvement in patients’ conditions.”
Humans and flies share most genes that are important for memory, leading the researchers to conclude that the work could lead to discoveries about improving memory in humans.
“The next step is to understand the actual mechanism that improves memory after sleep,” says Kirszenblat. “If we could understand how sleep improves memory in the fly brain, perhaps these mechanisms could be tweaked to improve memory in humans as well.”
The research, led by Washington University, appears in Current Biology.
Source: University of Queensland
Scientists have discovered that some of the potent toxin in Botox can escape and travel into the central nervous system.
Botox—also known as Botulinum neurotoxin serotype A—is best known for its ability to smooth wrinkles.
Derived from naturally occurring sources in the environment, Botox has also been extremely useful for the treatment of over-active muscles and spasticity as it promotes local and long-term paralysis.

Safe to use
“The discovery that some of the injected toxin can travel through our nerves is worrying, considering the extreme potency of the toxin,” says Professor Frederic Meunier, laboratory leader of the Queensland Brain Institute at University of Queensland.
“However, to this day no unwanted effect attributed to such transport has been reported, suggesting that Botox is safe to use.”
“While no side effects of using Botox medically have been found yet, finding out how this highly active toxin travels to the central nervous system is vital because this pathway is also hijacked by other pathogens such as West Nile or rabies viruses. A detailed understanding of this pathway is likely to lead to new treatments for some of these diseases.”
After reaching the central nervous system, most of the toxin is transported to a cellular “dump,” where it should degrade, says Tong Wang, a postdoctoral research fellow in Meunier’s lab.
“For the first time, we’ve been able to visualize single molecules of Botulinum toxin traveling at high speed through our nerves,” Wang says.
“We found that some of the active toxins manage to escape this route and intoxicate neighboring cells, so we need to investigate this further and find out how.”
The findings appear in the Journal of Neuroscience.
Source: University of Queensland
Scientists are working to create nanoscale silver clusters with unique fluorescent properties, which are important for a variety of sensing applications including biomedical imaging.
In recent experiments, the researchers positioned silver clusters at programmed sites on a nanoscale breadboard, a construction base for prototyping of photonics and electronics. “Our ‘breadboard’ is a DNA nanotube with spaces programmed 7 nanometers apart,” says lead author Stacy Copp, a graduate student in the physics department at the University of California, Santa Barbara.
“Due to the strong interactions between DNA and metal atoms, it’s quite challenging to design DNA breadboards that keep their desired structure when these new interactions are introduced,” says Beth Gwinn, a professor in the physics department.
“Stacy’s work has shown that not only can the breadboard keep its shape when silver clusters are present, it can also position arrays of many hundreds of clusters containing identical numbers of silver atoms—a remarkable degree of control that is promising for realizing new types of nanoscale photonics.”
The results of this novel form of DNA nanotechnology address the difficulty of achieving uniform particle sizes and shapes. “In order to make photonic arrays using a self-assembly process, you have to be able to program the positions of the clusters you are putting on the array,” Copp explains. “This paper is the first demonstration of this for silver clusters.”

Tuning the color
The colors of the clusters are largely determined by the DNA sequence that wraps around them and controls their size. To create a positionable silver cluster with DNA-programmed color, the researchers engineered a piece of DNA with two parts: one that wraps around the cluster and the other that attaches to the DNA nanotube. “Sticking out of the nanotube are short DNA strands that act as docking stations for the silver clusters’ host strands,” Copp explains.
The research group’s team of graduate and undergraduate researchers is able to tune the silver clusters to fluoresce in a wide range of colors, from blue-green all the way to the infrared—an important achievement because tissues have windows of high transparency in the infrared. According to Copp, biologists are always looking for better dye molecules or other infrared-emitting objects to use for imaging through a tissue.
“People are already using similar silver cluster technologies to sense mercury ions, small pieces of DNA that are important for human diseases, and a number of other biochemical molecules,” Copp says. “But there’s a lot more you can learn by putting the silver clusters on a breadboard instead of doing experiments in a test tube. You get more information if you can see an array of different molecules all at the same time.”

Silver and DNA
The modular design presented in this research means that its step-by-step process can be easily generalized to silver clusters of different sizes and to many types of DNA scaffolds. The paper walks readers through the process of creating the DNA that stabilizes silver clusters. This newly outlined protocol offers investigators a new degree of control and flexibility in the rapidly expanding field of nanophotonics.
The overarching theme of Copp’s research is to understand how DNA controls the size and shape of the silver clusters themselves and then figure out how to use the fact that these silver clusters are stabilized by DNA in order to build nanoscale arrays.
“It’s challenging because we don’t really understand the interactions between silver and DNA just by itself,” Copp says. “So part of what I’ve been doing is using big datasets to create a bank of working sequences that we’ve published so other scientists can use them. We want to give researchers tools to design these types of structures intelligently instead of just having to guess.”
The paper’s acknowledgements include a dedication to “those students who lost their lives in the Isla Vista tragedy and to the courage of the first responders, whose selfless actions saved many lives.”
The research appears in ACS Nano.
Source: UC Santa Barbara
A more intellectually demanding job may be the key to living longer after developing young-onset dementia, say researchers.
Degeneration of the frontal and temporal parts of the brain leads to a common form of dementia affecting people under the age of 65. It results in changes in personality and behavior and problems with language, but does not affect memory.
“[Our] study suggests that having a higher occupational level protects the brain from some of the effects of this disease, allowing people to live longer after developing the disease,” says Lauren Massimo, a postdoctoral fellow at Penn State College of Nursing.
Previous research has suggested that experiences such as education, occupation, and mental engagement help a person develop cognitive strategies and neural connections throughout his or her life.
“People with frontotemporal dementia typically live six to ten years after the symptoms emerge, but little has been known about what factors contribute to this range,” says Massimo.

Job levels
The researchers studied the effects of education and occupation on survival rates in patients with frontotemporal dementia or with Alzheimer’s disease, and report their results online in the journal Neurology.
Massimo and colleagues reviewed the medical charts of 83 people who had an autopsy after death to confirm the diagnosis of either frontotemporal dementia or Alzheimer’s disease. They also had information about patients’ primary occupations.
Occupations were ranked by US Census categories, with jobs such as factory worker and service worker in the lowest level, trade workers and sales people in the next level, and professional and technical workers—such as lawyers and engineers—in the highest level.
Researchers determined onset of symptoms by the earliest report from family members of persistently abnormal behavior. Survival was defined as from the time symptoms began until death.

Up to 3 years longer
The 34 people autopsied with frontotemporal dementia had an average survival time of about seven years. The people with more challenging jobs were more likely to have longer survival times than those with less challenging jobs.
People in the highest occupation level survived an average of 116 months, while people in the lower occupation group survived an average of 72 months, suggesting that individuals who had been in the professional workforce may live up to three years longer.
The study found that occupational level was not associated with longer survival for the people with Alzheimer’s disease dementia. The amount of education a person had did not affect the survival time in either disease.
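The comparison described above, ranking occupations by census category and averaging survival within each level, amounts to a simple group-and-average. Here is a toy sketch with made-up patient records (illustrative only; the values are invented so that the group means echo the reported 72- and 116-month averages):

```python
from statistics import mean

# Hypothetical records: (occupation level 1-3, survival in months).
# These numbers are illustrative, not the study's data.
patients = [
    (1, 60), (1, 70), (1, 86),   # lowest level: factory and service work
    (2, 80), (2, 96),            # middle level: trades, sales
    (3, 110), (3, 122),          # highest level: professional/technical
]

# Group survival times by occupation level.
by_level = {}
for level, months in patients:
    by_level.setdefault(level, []).append(months)

# Report the mean survival for each level, in months and years.
for level in sorted(by_level):
    avg = mean(by_level[level])
    print(f"level {level}: mean survival {avg:.0f} months ({avg / 12:.1f} years)")
```

The study’s actual analysis also confirmed each diagnosis by autopsy and dated onset from family reports, but the headline numbers are exactly this kind of per-level average.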
Massimo is also a postdoctoral fellow at the University of Pennsylvania Frontotemporal Degeneration Center. Additional researchers from Penn State; Arbor Research Collaborative for Health in Ann Arbor, Michigan; and University of Pennsylvania collaborated on the study.
The US Public Health Service and the Wyncote Foundation supported this work.
Source: Penn State
Experiments with online dating profiles suggest independent women fare better in the search for love.
Many women grew up feeling they had to fundamentally change themselves in order to find love, says Professor Matthew Hornsey of University of Queensland’s School of Psychology.
“In particular, there is a belief among women that they need to be conformist in order to be attractive to men. Our research shows that there is a pervasive belief that men go for relatively conformist women, but all our data suggests the opposite.”
Hornsey and colleagues drew on popular dating apps and sites, presenting people with profiles of potential dates who were conformist or non-conformist in their dress, attitudes, and tastes. In each case, men preferred non-conformist women.
In other studies, people filled out personality questionnaires and rated how successful they had been in attracting dates.
Women who said they were more independent reported more romantic success.
Hornsey says that on the rare occasions where results varied for male and female participants, it was women who benefitted most from non-conformity.
He says the findings were overwhelmingly positive for women and busted some long-held myths.
“A cursory glance at early 20th-century books on etiquette, courting, and ‘properness’ all deliver an expectation that women should be subdued, modest, and agreeable,” he says. “But times have changed. Society now tells us that independence is a sign of integrity and strong character.
“The old gender stereotype—that men go for conformist, submissive women—has been slow to die. The consequence may be that women rein themselves in when dating, when they would be better served by just being themselves.”
The research appears in the Personality and Social Psychology Bulletin.
Source: University of Queensland
Healthy young adults who consumed drinks sweetened with high-fructose corn syrup for just two weeks showed increases in three key risk factors for cardiovascular disease.
The results from a recent study are the first to demonstrate a direct, dose-dependent relationship between the amount of added sugar consumed in sweetened beverages and increases in specific risk factors for cardiovascular disease.
The findings reinforce evidence from an earlier epidemiological study that showed the risk of death from cardiovascular disease—the leading cause of death in the United States and around the world—increases as the amount of added sugar consumed increases.
“These findings clearly indicate that humans are acutely sensitive to the harmful effects of excess dietary sugar over a broad range of consumption levels,” says Kimber Stanhope, a research scientist at the University of California, Davis, School of Veterinary Medicine.

Even a little may be too much
For the study, published online in the American Journal of Clinical Nutrition, 85 participants, including men and women ranging in age from 18 to 40 years, were placed in four different groups.
During 15 days of the study, they consumed beverages sweetened with high-fructose corn syrup equivalent to 0 percent, 10 percent, 17.5 percent, or 25 percent of their total daily calorie requirements.
The 0-percent control group was given a sugar-free beverage sweetened with aspartame, an artificial sweetener.
At the beginning and end of the study, researchers used hourly blood draws to monitor the changes in the levels of lipoproteins, triglycerides, and uric acid—all known to be indicators of cardiovascular disease risk.
These risk factors increased as the dose of high-fructose corn syrup increased. Even the participants who consumed the 10-percent dose exhibited increased circulating concentrations of low-density lipoprotein cholesterol and triglyceride compared with their concentrations at the beginning of the study.
The researchers also found that most of the increases in lipid/lipoprotein risk factors for cardiovascular disease were greater in men than in women and were independent of body weight gain.
The findings underscore the need to extend the research using carefully controlled dietary intervention studies, aimed at determining what would be prudent levels for added sugar consumption, Stanhope says.
The National Institutes of Health, the National Center for Research Resources, the Roadmap for Medical Research, the National Institute of Child Health and Human Development, the National Institute of Aging, and the US Department of Agriculture funded the work.
Source: UC Davis