Future high school biology teachers need to prepare for questions about evolution—and that may mean talking about the intersection of faith and science.
For a recent study, political scientists conducted a series of focus group meetings with biology students at four Pennsylvania institutions—three universities and a college.
They report that students from a Catholic college appeared to be more reflective when talking about issues of faith and science.
“We suspect these students are somewhat less anxious around discussions of faith and science that come up in biology classes,” says Michael Berkman, professor of political science at Penn State and director of the Center for American Political Responsiveness.
Students at religious colleges often receive instruction in theology and attend lectures that integrate discussions about faith, Berkman says.
While this may help ease anxiety if religious issues come up in class discussions and talks with concerned parents, it is ultimately knowledge of the science of evolution that will provide biology teachers with the confidence for effective science instruction, he adds.
“If you don’t have confidence in your own self-knowledge, especially in a controversial topic, your tendency is going to be to shy away from it, to avoid controversy, and to not really teach the subject,” says Berkman.

Fostering doubt
Critics of evolution often take advantage of a teacher’s limited understanding of evolution to foster doubt in the science and make the science seem less settled than it actually is, according to Berkman, who worked with Eric Plutzer, professor of political science and academic director at the Survey Research Center. These critics need only a slight opening to sow that doubt, he adds.
“You don’t have to necessarily prove an alternate theory, you just have to shed sufficient doubt on the prevailing scientific consensus,” says Berkman. “This is not an original idea. A variety of people and groups use the strategy of enabling doubt, in terms of doubting evolution, or climate change, or even, in the past, with tobacco research.”
Although many religious denominations now accept the compatibility of religious faith and the science of evolution, students from the non-religious schools often revealed that they experienced tension between the two, according to the researchers, who released their findings in the March issue of the Annals of the American Academy of Political and Social Science.

Prepared for tough questions?
Students who have not considered the religious implications of evolution may not be prepared for questions from skeptical parents and students when they become teachers.
“Some of these students felt that they would be supplied with sufficient lesson plans and pedagogical skills when they become teachers, so that they could overcome what they don’t quite understand now and answer the challenging questions that might come up,” says Berkman.
Incorporating faith into discussions about evolution at Catholic and other religious institutions may be easier than at public institutions because the fields of science and religion are much more separate at the latter, he adds.
In an earlier study, Berkman and his colleagues found that high school biology teachers play a critical role in forming public consensus about science. Denying evolution could, then, lead not just to doubts about evolution, but also to a broader misunderstanding of science in general, according to the researchers.
“Evolution is fundamental to biology, but more importantly we think that when you are communicating a skepticism about evolution you’re communicating a skepticism about science generally,” says Berkman.
The researchers conducted focus group sessions at a large research university, a medium-sized state-owned university, a historically black university, and a Catholic four-year college, all in Pennsylvania. A total of 35 students took part in the focus group sessions that lasted 50 to 65 minutes.
Berkman says these focus group sessions could help lay the groundwork for more extensive follow-up surveys and studies in the future.
The Penn State Center for the Study of Human Variation, Evolution, and Behavior and the Penn State College of Education’s Waterbury fund supported this work.
Source: Penn State
A parasite that can make humans and animals sick is colonizing several species of snails in Florida.
Scientists made the discovery after an orangutan being treated at the University of Florida died from eating snails that carried the parasite Angiostrongylus cantonensis, known as the rat lungworm.
Rat lungworm has been known to be established in snail populations in Hawaii, but until now has not been commonly seen in the continental United States.

Plant transports
The findings, which show the parasite may now be established in South Florida, raise concerns about how it got there and the potential implications for both animal and human health, researchers say.
“Determining the geographic distribution of this parasite in Florida is important due to the hazards to human health,” says Heather Walden, assistant professor of parasitology at UF’s College of Veterinary Medicine and lead author of a new study published online in the Journal of Parasitology.
The rat lungworm, a nematode that can affect both animals and humans, uses the rat as a definitive host and gastropods, such as snails, as intermediate hosts.
Florida’s large horticultural industry makes the parasite’s presence in the state particularly disturbing because plant nurseries are one of its most important modes of transport.
“Most of the snails found to be intermediate hosts for this parasite in our study are invasive and some feed on or shelter on ornamental plants, which have the potential for distribution throughout Florida and in other areas of the United States,” Walden says.

Snail mucus
The new research builds on a previous study, which reported that a 6-year-old orangutan treated in 2012 after exhibiting neurological symptoms, was infected with the rat lungworm. The animal had a history of eating snails.
In 2013, Walden and a colleague visited the Miami area to collect terrestrial snails from the orangutan’s infection site. They sorted snails by size, shape, and color and identified them by species.
The scientists collected mucus from all of the snails and analyzed specimens for the presence of nematodes. Additionally, rat fecal samples were collected from the original infection site and examined for nematodes.
Of five species of terrestrial snails tested, three tested positive for the rat lungworm. One species was the same as the orangutan had ingested, one is a known intermediate host, and the other had never previously been identified as an intermediate host. All of the rat fecal samples contained the nematode.
Walden and study coauthor John Slapcinsky, an invertebrate zoologist who specializes in the study of mollusks at the Florida Museum of Natural History, are now working to identify and process all of the snails collected in the project.
In addition to the danger to humans, rat lungworms can also affect dogs, horses, and birds.
“These species all get similar diseases,” Walden says. “So these findings are of interest not only to companion animal medicine but to human medicine as well.”
The parasite causes a rare and potentially fatal form of meningitis in people, according to the Centers for Disease Control and Prevention, but humans can’t become infected unless they eat an undercooked or raw snail, Walden says.
“Some animal species can harbor the infective larvae, like different crustaceans or frogs. As long as food is cooked and you wash your produce, you will most likely never ingest it.”
Source: University of Florida
The post Dangerous ‘rat lungworms’ show up in Florida snails appeared first on Futurity.
Ancient sediments from a coastal pond in Cape Cod, Massachusetts, show that enormous storms have battered the region for 2,000 years.
The hurricane strikes deposited a distinct layer of sand mobilized from the adjacent beach.
The analysis, published in the journal Earth’s Future, suggests some of the hurricanes would have dwarfed recent storms like Hurricane Sandy in 2012, which caused $65 billion in damage.

Very stormy periods
The findings could offer clues about global warming and future storm intensity, says Peter van Hengstum, assistant professor of marine sciences at Texas A&M University at Galveston.
“These core sediments act much like a commercial bar code you might find on an item at the grocery store,” van Hengstum says. “We were able to ‘read’ the sediment core and found evidence of 35 hurricane strikes.
“Importantly, there are two periods of very intense storm activity in the Cape Cod area, from 150 to 1150, and again from 1400 to 1675, unlike anything we have observed during the instrumental record.”
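Reading a core like a bar code amounts to scanning a down-core measurement for discrete coarse layers in an otherwise muddy sequence. A minimal sketch, with an invented sand-fraction profile and threshold purely for illustration:

```python
def count_event_layers(sand_fraction, threshold=0.5):
    """Count contiguous runs of samples whose sand fraction exceeds
    the threshold; each run is read as one storm deposit."""
    events = 0
    in_layer = False
    for value in sand_fraction:
        if value > threshold and not in_layer:
            events += 1
            in_layer = True
        elif value <= threshold:
            in_layer = False
    return events

# Synthetic down-core profile: mostly mud (low sand fraction)
# interrupted by two coarse storm layers.
core = [0.1, 0.2, 0.8, 0.9, 0.1, 0.05, 0.7, 0.1]
print(count_event_layers(core))  # 2
```

The real analysis is far more involved (dating, grain-size spectra, distinguishing storms from other events), but the counting logic is this simple in spirit.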
The storms were likely more intense than almost any storm ever seen in the Cape Cod area, including Hurricane Bob in 1991 and an unnamed storm that hit the area in 1635 and caused storm surges of at least 20 feet, van Hengstum says.

Category 4 storms
The sediments indicate there was also a period starting in about 1400 that lasted until 1675, when storm activity increased significantly.
An intense storm pounded the Northeast about every 40 years or so, and most of these would be classified at least as a Category 3 or Category 4 storm—storms that would totally devastate New England if they hit today. By comparison, Hurricane Sandy was a Category 1 storm with winds of 80 miles per hour when it made landfall.
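For reference, the Saffir-Simpson categories cited here correspond to sustained wind speeds; a small helper using the scale's standard thresholds (in mph):

```python
def saffir_simpson_category(wind_mph):
    """Return the Saffir-Simpson hurricane category for a sustained
    wind speed in mph (0 means below hurricane strength)."""
    thresholds = [(157, 5), (130, 4), (111, 3), (96, 2), (74, 1)]
    for cutoff, category in thresholds:
        if wind_mph >= cutoff:
            return category
    return 0

print(saffir_simpson_category(80))   # 1 (Sandy's winds at landfall)
print(saffir_simpson_category(120))  # 3
```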
“The period of time from 1400 to 1675 AD was particularly interesting because it coincides with previous evidence for warming in the upper Atlantic Ocean off the North Eastern Seaboard,” van Hengstum says.
“This period of elevated hurricane frequency and intensity perhaps provides a clue into future hurricane activity in our warming climate.”
From a coastal risk perspective, US emergency officials should consider a plan involving a major hurricane—at Category 3 or higher intensity—every 30 to 40 years instead of every 100 or 200 years as currently believed.
Researchers from the University of Texas at Austin, Woods Hole Oceanographic Institution, the University of North Carolina-Wilmington, and the University of Massachusetts-Amherst contributed to the study, which was funded in part by the Dalio Explore Fund and the National Science Foundation.
Source: Texas A&M University
The post Dozens of monster hurricanes hit Cape Cod in last 2,000 years appeared first on Futurity.
Patent trolls are much maligned, but they may have surprising benefits for investors and the innovation economy.
Stephen Haber, a political science professor at Stanford University, suggests that concerns about too much litigation involving patents are misguided.
A patent troll is a person or company that buys patents—without any intent to produce a product—and then enforces those patents against accused infringers in order to collect licensing fees. Some say the resulting litigation has driven up costs to innovators and consumers.
To the contrary, Haber says, his working paper, written with political science graduate student Seth Werfel, shows that trolls—also known as patent assertion entities, or PAEs—play a useful intermediary role between individual inventors and large manufacturers.
Their study focused on why inventors choose to sell their patents to PAEs rather than license their technologies directly to manufacturers. The asymmetry in financial resources between the inventor (small) and the manufacturer (large) is a key motive for doing so.
“A primary reason why individual patent holders sell to patent assertion entities is that they offer insurance and liquidity,” writes Haber.
In an interview, Haber says, “If there’s something like patent trolls that exist that are supposedly bad, but you observe a lot of them, you have to ask yourself, what role do they play in making a market work?”

The cost of litigation
Haber and Werfel’s study was based on a survey experiment of Bay Area inventors and entrepreneurs. To test the hypothesis that financial constraints affected the decision of individuals to sell to PAEs, the researchers randomly varied the cost structure of litigation, with some subjects being told they had to choose between hiring a lawyer at an hourly rate or on a contingent fee basis.
For inventors, contingent fee litigation eliminates upfront costs as well as potential financial losses, Haber says. As a result, it may be seen as insurance for an inventor selling his patent to a PAE.
The researchers also surveyed the risk preferences and loss aversion of the participants. The assumption was that those more prone to avoiding financial losses—inventors more so than entrepreneurs—would be more likely to sell to a PAE.
The findings showed that those hiring contingency fee lawyers were 40 percent less likely to sell to a patent troll, while the effect for entrepreneurs was statistically insignificant.

‘Inherent risk’
The results show that inventor demand for patent trolls is associated with perceived financial constraints, according to Haber, senior fellow at the Hoover Institution at Stanford University, where he directs Hoover’s working group on intellectual property.
Haber explains that the imbalance in financial resources between individual patent holders (inventors) and large manufacturers prevents those inventors from credibly threatening to litigate against infringement.
“First, individuals may not be able to cover the upfront costs associated with litigation. Second, unsuccessful litigation can result in legal fees so large as to bankrupt the individual. Therefore, PAEs offer a way for individual inventors to guarantee profits from their patents without having to engage in costly litigation,” Haber says.
Without patent trolls, Haber says, inventors would be more limited in the innovation ecosystem.
As he says, “It’s not like someone puts a gun to someone’s head and says, ‘Sell me your patent.’”
After all, inventors face an inherent risk as soon as they file their patent—which describes the product—and certainly when they show it to a manufacturer. “Somebody else could copy it,” he notes.

The price tag matters
Haber suggests that America’s patent system is the best in the world, and that policymakers should not rely on claims that patent trolls and lawsuits discourage innovation and the commercialization of technology.
“They should demand robust evidence that the current system is slowing down innovation. That evidence does not exist,” he writes in a prior article.
Haber explains that large corporations produce many patent-intensive products—like smartphones.
For example, a smartphone contains thousands of patented components—but the manufacturer may not own many of them. And so, it must negotiate for the right to use them. The less a manufacturer pays to use technology patented elsewhere, the higher the profit it can make. And the manufacturer will greatly benefit, of course, if it pays nothing at all, he adds.
While the number of patent lawsuits has increased by about 60 percent since 2000, as Haber acknowledges, the increase reflects a dynamic economy. In fact, today’s courts are in a process of clarifying intellectual property and contract rights during a period of “disruptive technology,” he says.
Finally, he says, robust innovation has brought a dramatic decline in prices for patent-intensive products—and that is the metric that American consumers care about, not the number of lawsuits between manufacturers.
For instance, since 1992, the quality-adjusted price of telephone equipment has fallen by 6.7 percent, televisions by 14.4 percent, and portable computers by 26.7 percent a year, according to Haber.
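Those annual rates compound. A quick sketch of what a constant per-year decline implies over, say, two decades (the 20-year horizon is arbitrary, chosen only for illustration):

```python
def remaining_price_fraction(annual_decline, years):
    """Fraction of the original quality-adjusted price left after
    compounding a constant annual percentage decline."""
    return (1 - annual_decline) ** years

# Annual decline rates from the figures above: telephone equipment,
# televisions, and portable computers.
for label, rate in [("phones", 0.067), ("TVs", 0.144), ("laptops", 0.267)]:
    print(label, round(remaining_price_fraction(rate, 20), 3))
```

At 26.7 percent per year, a quality-adjusted laptop price falls to well under 1 percent of its starting level in 20 years, which is why Haber treats the price trend as the telling metric.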
He says the role of patent trolls is more complex than many imagine. “It is often simplistically portrayed, and often from the point of view of a large manufacturer.”
Source: Stanford University
In an attempt to tackle a problem that has vexed physicists for decades, researchers turned to ultracold atoms, not supercomputers.
Nearly 30 years have passed since physicists discovered that electrons can flow freely through certain materials—superconductors—at relatively elevated temperatures. The reasons for this high-temperature, or “unconventional” superconductivity are still largely unknown.
One of the most promising theories to explain unconventional superconductivity—called the Hubbard model—is simple to express mathematically but is impossible to solve with digital computers.
“The Hubbard model is a set of mathematical equations that could hold the key to explaining high-temperature superconductivity, but they are too complex to solve—even with the fastest supercomputer,” says lead researcher Randy Hulet, a professor of physics and astronomy at Rice University. “That’s where we come in.”

Cold atoms
Hulet’s lab specializes in cooling atoms to such low temperatures that their behavior is dictated by the rules of quantum mechanics—the same rules that electrons follow when they flow through superconductors.
“Using our cold atoms as stand-ins for electrons and beams of laser light to mimic the crystal lattice in a real material, we were able to simulate the Hubbard model,” Hulet says. “When we did that, we were able to produce antiferromagnetism in exactly the way the Hubbard model predicts.
“That’s exciting because it’s the first ultracold atomic system that’s able to probe the Hubbard model in this way, and also because antiferromagnetism is known to exist in nearly all of the parent compounds of unconventional superconductors.”

Extraordinarily complex
Hulet’s team is one of many that are racing to use ultracold atomic systems to simulate the physics of high-temperature superconductors.
“Despite 30 years of effort, people have yet to develop a complete theory for high-temperature superconductivity,” Hulet says. “Real electronic materials are extraordinarily complex, with impurities and lattice defects that are difficult to fully control.
“In fact, it has been so difficult to study the phenomenon in these materials that physicists still don’t know the essential ingredients that are required to make an unconventional superconductor or how to make a material that superconducts at even greater temperature.”
Hulet’s system, described in the journal Nature, mimics the actual electronic material, but with no lattice defects or disorder.
“We believe that magnetism plays a role in this process, and we know that each electron in these materials correlates with every other, in a highly complex way,” he says. “With our latest findings, we’ve confirmed that we can cool our system to the point where we can simulate short-range magnetic correlations between electrons just as they begin to develop.
“That’s significant because our theoretical colleagues—there were five on this paper—were able to use a mathematical technique known as the Quantum Monte Carlo method to verify that our results match the Hubbard model,” Hulet says.
“It was a heroic effort, and they pushed their computer simulations as far as they could go. From here on out, as we get colder still, we’ll be extending the boundaries of known physics.”

Cold enough?
Nandini Trivedi, professor of physics at Ohio State University, explains that she and her colleagues at the University of California, Davis, who formed the theoretical side of the effort, had the task of identifying just how cold the atoms had to be in the experiment.
“Some of the big questions we ask are related to the new kinds of ways in which atoms get organized at low temperatures,” she says. “Because going to such low temperatures is a challenge, theory helped determine the highest temperature at which we might expect the atoms to order themselves like those of an antiferromagnet.”

Hubbard model
After high-temperature superconductivity was discovered in the 1980s, some theoretical physicists proposed that the underlying physics could be explained with the Hubbard model, a set of equations invented in the early 1960s by physicist John Hubbard to describe the magnetic and conduction properties of electrons in transition metals and transition metal oxides.
Every electron has a “spin” that behaves as a tiny magnet. Scientists in the 1950s and 1960s noticed that the spins of electrons in transition metals and transition metal oxides could become aligned in ordered patterns. In creating his model, Hubbard sought to create the simplest possible system for explaining how the electrons in these materials responded to one another.
The Hubbard model features electrons that can hop between sites in an ordered grid, or lattice. Each site in the lattice represents an ion in the crystal lattice of a material, and the electrons’ behavior is dictated by just a handful of variables.
First, electrons are disallowed from sharing the same quantum state, due to a rule known as the Pauli exclusion principle. Second, electrons repel one another and must pay an energy penalty when they occupy the same site.

Why computers don’t help
“The Hubbard model is remarkably simple to express mathematically,” Hulet says. “But because of the complexity of the solutions, we cannot calculate its properties for anything but a very small number of electrons on the lattice. There is simply too much quantum entanglement among the system’s degrees of freedom.”
Correlated electron behaviors—like antiferromagnetism and superconductivity—result from feedback, as the action of every electron causes a cascade that affects all of its neighbors.
Running the calculations becomes exponentially more time-consuming as the number of sites increases. To date, the best efforts to produce computer simulations of two- and three-dimensional Hubbard models involve systems with no more than a few hundred sites.
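The smallest nontrivial case shows both sides of this: the two-site Hubbard model at half filling (one spin-up and one spin-down electron) reduces to a 4-by-4 matrix with a known closed-form ground-state energy, while the basis grows by a factor of 4 per added site. A toy sketch, glossing over fermionic sign conventions:

```python
import numpy as np

t, U = 1.0, 4.0  # hopping amplitude and on-site repulsion

# Basis for 2 sites, 1 up + 1 down electron:
# |up,down on site 1>, |up,down on site 2>, |up on 1, down on 2>, |up on 2, down on 1>
H = np.array([
    [U,   0.0, -t,  -t ],
    [0.0, U,   -t,  -t ],
    [-t,  -t,  0.0, 0.0],
    [-t,  -t,  0.0, 0.0],
])

ground_energy = np.linalg.eigvalsh(H).min()
exact = (U - np.sqrt(U**2 + 16 * t**2)) / 2  # textbook closed form
print(np.isclose(ground_energy, exact))  # True

# The catch: each added site multiplies the basis by 4
# (empty, up, down, doubly occupied), so brute-force
# diagonalization is hopeless beyond a few dozen sites.
for sites in (2, 10, 30):
    print(sites, 4**sites)
```

Two sites are trivial; at 30 sites the state space already exceeds 10^18, which is the exponential wall the researchers describe.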
Because of these computational difficulties, it has been impossible for physicists to determine whether the Hubbard model contains the essence of unconventional superconductivity. Studies have confirmed that the model’s solutions show antiferromagnetism, but it is unknown whether they also exhibit superconductivity.

How Hulet’s approach is different
In the new study, Hulet and colleagues, including postdoctoral researcher Russell Hart and graduate student Pedro Duarte, created a new experimental technique to cool the atoms in their lab to sufficiently low temperatures to begin to observe antiferromagnetic order in an optical lattice with approximately 100,000 sites. This new technique results in temperatures on the lattice that are about half of that obtained in previous experiments.
“The standard technique is to create the cold atomic gas, load it into the lattice, and take measurements,” Hart says. “We developed the first method for evaporative cooling of atoms that had already been loaded in a lattice.
“That technique, which uses what we call a ‘compensated optical lattice,’ also helped control the density of the sample, which becomes critical for forming antiferromagnetic order.”

Even colder
Hulet says a second innovation was the team’s use of the optical technique called Bragg scattering to observe the symmetry planes that are characteristic of antiferromagnetic order.
He says the team will need to develop an entirely new technique to measure the electron pair correlations that cause superconductivity. And they’ll also need colder samples, about 10 times colder than those used in the current study.
“We have some things in mind,” Hulet says. “I am confident we can achieve lower temperatures both by refining what we’ve already done and by developing new techniques.
“Our immediate goal is to get cold enough to get fully into the antiferromagnetic regime, and from there we’d hope to get into the d-wave pairing regime and confirm whether or not it exists in the Hubbard model.”
The Defense Advanced Research Projects Agency, the National Science Foundation, the Robert Welch Foundation, and the Office of Naval Research provided support.
Source: Rice University
Treating human health and society as part of an ecosystem could help us overcome problems like the antibiotic crisis and the obesity epidemic, according to new research.
The living world is by nature a collaborative enterprise rather than a competitive one, says Professor Mark Wahlqvist of Monash University.
“It is unhelpful to look at ourselves as discrete species as the interconnectedness of all things, animate and inanimate, becomes more apparent,” he says.
In research published in the Asia Pacific Journal of Clinical Nutrition, Wahlqvist says awareness is growing of the ecosystem-dependent nature of human health.
“The problem now faced is that ecosystems have been plundered in such an anthropocentric fashion that their sustainability is precarious and our health with it,” he says.
Calling for a re-evaluation of many ecosystems, from the home, school, and workplace to health care, communication, transport, and recreation, Wahlqvist says we have become accustomed to blaming disease and dysfunction on one factor, or a small set of factors.
Such views have contributed to the rise of medications such as antibiotics, as well as to their probable imminent failure.
“We confront multiple-resistant microorganisms in farm animals and ourselves that no currently available antibiotic can eradicate, not least because of their misuse as growth promotants in livestock for human consumption,” he says.
“Better ecosystem management is likely to be one of the few solutions available to this crisis.”
Wahlqvist also says more integrative approaches to health-care practice are required.
He addresses seemingly simple measures, such as eating a varied and home-cooked diet that is largely derived from plants; walking 30 to 40 minutes a day; keeping a garden; and ensuring access to a natural environment.
These efforts can go a long way towards ensuring general human health and longevity plus environmental sustainability.
“A sense of ourselves as ecological creatures is needed, planning as families and communities to reduce environmental pressure, and maintain and renew ecosystems,” he says.
“A whole global movement is needed to provide hope for future generations.”
Source: Monash University
Younger women are more likely than men of the same age to overlook the earliest signs of a heart attack. The findings may help explain why so many young women die from heart disease.
A new study examined the experiences of women ranging in age from 30 to 55 who were hospitalized with acute myocardial infarction (AMI). Through in-depth interviews, the nine-member research team explored how the women responded during the crucial period when their first symptoms, including pain and dizziness, appeared and when they decided to seek medical care.

5 key findings

The results suggest five reasons why young women may delay seeking care:
- Women’s initial symptoms vary widely in both nature and duration.
- Women inaccurately assess their personal risk of heart disease.
- External factors (such as work and family) can influence the decision to seek emergency medical help.
- Not all patients receive a prompt or complete workup for their AMI symptoms or a formal diagnosis.
- Women don’t routinely access primary care, including preventive care for heart disease.
“Young women with multiple risk factors and a strong family history of cardiac disease should not assume they are too young to have a heart attack,” says lead researcher Judith Lichtman, associate professor and chair of the chronic disease epidemiology department at Yale University.
“Participants in our study said they were concerned about initiating a false alarm in case their symptoms were due to something other than a heart attack. Identifying strategies to empower women to recognize symptoms and seek prompt care without stigma or perceived judgment may be particularly critical for young women at increased risk for heart disease.”
The study results suggest more needs to be done to educate women about the early symptoms of a heart attack and to change the way that both women and health care providers respond to the symptoms, says Leslie Curry, senior research scientist at the Yale Global Health Leadership Institute and senior author on the paper, that is published in the journal Circulation.
Each year, in the United States alone, more than 15,000 women under age 55 die from heart disease, ranking it as a leading cause of death for this age group.
In addition to promoting knowledge about heart disease and encouraging more prompt care-seeking behaviors, another important goal for this population of women is improving preventive heart care, Lichtman says.
The Fannie E. Rippel Foundation and grants from the National Heart, Lung, and Blood Institute funded the study. Researchers from Northwestern University, Saint Luke’s Mid America Heart Institute, and the University of Missouri-Kansas City contributed to the work.
Source: Yale University
Math difficulties can appear in some children from low socioeconomic status households as early as age two. Early screening and intervention can help, experts say.
Previous studies have shown that young children who have difficulty with math will continue to have difficulty as they get older. But until now, little was known about which children were most at risk.
The new study, published in the Journal of Learning Disabilities, strongly indicates that a family’s economic status is a significant factor in whether or not a child will continue to have trouble with math.

What can schools do?
“Schools can’t do much to change a family’s economic circumstances,” says Paul L. Morgan, associate professor of education at Penn State, “but schools can decide how they allocate extra resources and how early they intervene to help children who seem to be struggling academically.”
Early screening and intervention efforts should begin when a child starts school—and should be multi-faceted to target early mathematics, reading difficulties, and behavior problems.
Attending preschool or Head Start can lower the risk of persistent math difficulties, Morgan says.
“Before entering school, children may not have much informal exposure to mathematics. Conversations and activities that include talking about mathematics may help reduce children’s later struggles when they are being taught more formally in the elementary- and middle-school grades,” he says.

Waiting to fail
For the current study, researchers analyzed two nationally representative, longitudinal data sets of US children maintained by the Department of Education’s National Center for Education Statistics. One sample of children was followed from birth to kindergarten; the other was followed from kindergarten to the end of eighth grade.
For preschool children, factors that increased the children’s risk for persistent math difficulties included low general cognitive functioning, vocabulary difficulties, and being from low socioeconomic status households.
For elementary- and middle-school students, reading difficulties, mathematics difficulties, and attention-related behavioral difficulties increased risk, as did being from lower socioeconomic households.
“It appears that children who struggle in mathematics often do not ‘grow out of it,’ and so a ‘wait and see’ approach might only have ‘wait to fail’ consequences for many children,” Morgan says.
The US Department of Education’s Institute of Education Sciences funded the study.
Source: Penn State
Although pointing out problems and suggesting solutions can both help a company improve, employees may want to find the right mix of the two.
Focusing on the negative side can cause workers to become mentally fatigued and defensive and experience a drop-off in production, according to new research.
While both behaviors can help a company, it’s important that workers find a balance between the two, says Russell Johnson, a management professor at Michigan State University.
“The moral of this story is not that we want people to stop raising concerns within the company, because that can be extremely beneficial,” says Johnson, a faculty member in the Broad College of Business.
“But constantly focusing on the negative can have a detrimental effect on the individual.”
Mental fatigue
The current research is the first to examine the effects of positive and negative workplace suggestions on the person engaged in the behavior. The study involved two field surveys of more than 300 total workers in a variety of occupations such as accounting, retail, manufacturing, and health care.
Johnson says workers who regularly point out problems or errors might be mentally fatigued because this often means they’re highlighting other workers’ shortcomings and causing tension in these relationships.
“The irony of that is, when people are mentally fatigued they’re less likely to point out problems anymore,” Johnson says.
“In addition, their own work performance suffers, they’re less likely to be cooperative and helpful, and they even exhibit deviant behaviors such as being verbally abusive and stealing from the employer.”
Johnson suggests companies consider rewarding employees who point out problems that lead to improvements.
“In that case, maybe other employees would be more accepting of someone pointing out errors if they know this is what the company wants them to do—that the person isn’t acting outside the norm.”
Johnson and doctoral student Szu-Han Lin report their findings online in the Journal of Applied Psychology.
Source: Michigan State University
La Niña-like conditions in the Pacific Ocean are closely linked to an abrupt stoppage of coral growth that lasted thousands of years. Will today’s climate push reefs to another collapse?
For a new study, researchers traveled to Panamá to collect a reef core, and then used the corals within the core to reconstruct what the environment was like as far back as 6,750 years ago.
The findings show that cooler sea temperatures, greater precipitation, and stronger upwelling were evident about 4,100 years ago, when reef accretion in the region suddenly stopped.
Coral collapse triggers
“Investigating the long-term history of reefs and their geochemistry is something that is difficult to do in many places, so this was a unique opportunity to look at the relationship between reef growth and environment,” says Kim Cobb, associate professor in the School of Earth and Atmospheric Sciences at Georgia Institute of Technology.
“This study shows that there appears to have been environmental triggers for this well-documented reef collapse in Panama.”
Climate change is the leading cause of coral-reef degradation. The global coral reef landscape is now characterized by declining coral cover, reduced growth and calcification, and slowdowns in reef accretion.
The new data will help scientists understand how changes in the environment trigger long-term changes in coral reef growth and ecosystem function—a critical challenge to coral-reef conservation.
Cool and wet
“Temperature was a key cause of reef collapse, and modern temperatures are now within several degrees of the maximum these reefs experienced over their 6,750-year history,” says lead author Lauren Toth, who was a graduate student at Florida Institute of Technology during the study.
“It’s possible that anthropogenic climate change may once again be pushing these reefs towards another regional collapse.”
For the study, published in Nature Climate Change, researchers analyzed a coral core from Pacific Panamá spanning 6,750 years. They then reconstructed the coral’s past functions, such as growth and accretion (accumulation of layers of coral), and compared those records with surrounding environmental conditions before, during, and after the 2,500-year hiatus in vertical accretion.
“We saw evidence for a different climate regime during that time period,” Cobb says. “The geochemical signals were consistent with a period that is very cool and very wet, with very strong upwelling, which is more like a modern day La Niña event in this part of the Pacific.”
Sensitive to change
In Pacific Panamá, La Niña-like periods are characterized by a cold, wet climate with strong seasonal upwelling. Due to limited data at the site, researchers can’t quantify the intensity of La Niña events during this time, but document that conditions similar to La Niña were present.
“These conditions would have been for quite an extended time, which suggests that the reef was quite sensitive to prolonged change in environmental conditions,” Cobb says. “So sensitive, in fact, that it stopped accreting over that period.”
Future climate change, similar to the changes during the hiatus in coral growth, could cause coral reefs to behave similarly, the study suggests, leading to another shutdown in reef development in the tropical eastern Pacific.
“We are in the midst of a major environmental change that will continue to stress corals over the coming decades, so the lesson from this study is that there are these systems such as coral reefs that are sensitive to environmental change and can go through this kind of wholesale collapse in response to these environmental changes,” Cobb says.
Future work will involve expanding the study to include additional locations throughout the tropical Pacific.
“A broad-scale perspective on long-term reef growth and environmental variability would allow us to better characterize the environmental thresholds leading to reef collapse and the conditions that facilitate survival,” Toth says.
“A better understanding of the controls on reef development in the past will allow us to make better predictions about which reefs may be most vulnerable to climate change in the future.”
The Geological Society of America, the American Museum of Natural History, and the Smithsonian Institution’s Marine Science Network supported the study.
Source: Georgia Tech
Therapists and clinicians who skirt the edges of religion and spirituality may overlook the greatest source of resilience or the key to psychological issues among many of their clients.
That’s the argument of Spirituality, Religion, and Faith in Psychotherapy: Evidence-Based Expressive Methods for Mind, Brain, and Body (Lyceum Books, 2014), a new book by Helen Land, associate professor with the USC School of Social Work.
“We are becoming more and more of a secular society, but I think this idea of what people hold as sacred, whether it’s religious or not, will always be useful,” Land says. “Everyone has some sort of philosophy of life, and it doesn’t have to be connected to a deity or organized religion.”
Spirituality has found its way into clinical practice in a general sense, typically viewed as a source of wellness for individuals dealing with trauma, death, or other difficult experiences. Yet few clinicians truly delve into the sacred beliefs of their clients and how those beliefs influence their ability to cope with problems, or may actually be causing those problems.
Tough to talk about
In her book, Land outlines various strategies to integrate three broad domains of sacred content into psychotherapy, particularly through the use of expressive methods such as art, movement, and music therapy.
“Often people are stuck in concrete thinking,” she says. “These kinds of issues—spirituality, religion, and faith—are very hard to put into words. Talk therapy has its limitations.”
By engaging the creative and intuitive aspects of the brain, clients can begin to unlock and process memories and experiences of trauma or pain, she says. The book offers a discussion of how that process occurs at the neurobiological level when using these expressive practices, including research evidence that supports these methods as effective.
“The expressive traditions are horribly underutilized,” Land says. “Basically they have been the domain of art therapists and music therapists and so on. There are many interventions that can be used quite easily and have way more robust research in terms of their efficacy than most of us even know about.”
Coping strategies
Land’s interest in the sacred world as a potentially critical component of therapy for many individuals stems from her work leading support groups during the height of the AIDS crisis. During a research study on how people cope with the stress of caregiving for a loved one with AIDS, she noticed that many times these caregivers would not tell other family members or friends about the disease.
So how were they handling this overwhelming personal burden?
“The thing that came up over and over again was, I turn to God or I pray,” she says. “Many people used these spiritual and religious coping strategies.”
Issues such as illness, death, and trauma often spur individuals to consider previously unacknowledged existential questions, Land says.
For example, she worked with a woman whose family had been killed by a drunk driver when she was a senior in high school, leaving her devastated and angry. In another instance, a woman in her 30s died from ovarian cancer, leaving behind her husband and 18-month-old daughter.
“It can be very disabling for people who are in mourning,” Land says, referring to the struggle to reconcile their loss with their spiritual or religious convictions. “Some people would maybe deepen their spirituality and call on it. Other people would say they truly felt let down by their belief system.”
Finding balance
In the book, she describes a new assessment model she developed to help clinicians evaluate the sacred beliefs of their clients in addition to the standard psychological, biological, and social factors.
Understanding how their clients perceive religion and spirituality can help therapists determine if a particular sacred domain is influencing their ability to recover from trauma. Land theorizes that maintaining balance in the sacred triad—religion, faith, and spirituality—can be critical to psychological well-being.
Each domain has strengths and drawbacks, she says. Religion tends to cause the most difficulty, particularly if individuals are struggling with certain tenets that conflict with their personal identity, such as dogma regarding sexual orientation. In such cases, religion can be very scarring.
“But some people have problems with loss of faith,” Land says. “Other people have problems doing lots of rituals and prayers but really not having a very enriched spiritual life, if any.”
Smoking marijuana causes neurons that normally suppress appetite to signal an uncontrollable urge to eat instead.
For a new study, researchers monitored the brain circuitry that promotes eating by selectively manipulating the cellular pathway that mediates marijuana’s action in transgenic mice.
“By observing how the appetite center of the brain responds to marijuana, we were able to see what drives the hunger brought about by cannabis and how that same mechanism that normally turns off feeding becomes a driver of eating,” says Tamas Horvath, professor of neurobiology and of obstetrics, gynecology, and reproductive sciences at Yale University.
“It’s like pressing a car’s brakes and accelerating instead. We were surprised to find that the neurons we thought were responsible for shutting down eating were suddenly being activated and promoting hunger, even when you are full. It fools the brain’s central feeding system.”
In addition to helping explain why you become extremely hungry when you shouldn’t be, the new findings, published in Nature, could provide other benefits, like helping cancer patients who often lose their appetite during treatment.
Researchers have long known that using cannabis is associated with increased appetite, even when you are full. It is also well known that activating the cannabinoid receptor 1 (CB1R) can contribute to overeating. Nerve cells called pro-opiomelanocortin (POMC) neurons are considered key drivers of reduced eating when full; the new findings indicate that, under CB1R activation, these neurons instead release beta-endorphin, a chemical that promotes appetite.
“This event is key to cannabinoid-receptor-driven eating,” says Horvath, who points out that the feeding behavior driven by these neurons is just one mode of action that involves CB1R signaling. “More research is needed to validate the findings.”
Whether this primitive mechanism is also key to getting “high” on cannabis is another question that Horvath, who is also director of the Yale Program in Cell Signaling and Neurobiology of Metabolism and chair of the Section of Comparative Medicine, is aiming to address.
The National Institutes of Health, the American Diabetes Association, The Klarmann Family Foundation, the Helmholtz Society, and the Deutsche Forschungsgemeinschaft (Obesity Mechanisms) funded the study.
Source: Yale University
Eyelash length unites 22 species of mammals—humans, hedgehogs, giraffes, and more. Their ideal lash length is one-third the width of the eye.
Anything shorter or longer, including the fake eyelashes that are popular in Hollywood and make-up aisles, increases airflow around the eye and leads to more dust hitting the surface.
“Eyelashes form a barrier to control airflow and the rate of evaporation on the surface of the cornea,” says study author Guillermo Amador, a PhD candidate in the George W. Woodruff School of Mechanical Engineering at Georgia Tech.
“When eyelashes are shorter than the one-third ratio, they have only a slight effect on the flow. Their effect is more pronounced as they lengthen up until one-third. After that, they start funneling air and dust particles into the eye.”
Not too long, not too short
Amador and the research team, which is led by mechanical engineering Associate Professor David Hu, sent a student to the American Museum of Natural History in New York in 2012 to measure eyes and eyelashes of various animals.
Aside from an elephant, which has extremely long eyelashes, every species studied had evolved to the same ratio of lash length to eye width.
The team then built a wind tunnel to re-create air flows on a mimic of an adult human eye. A 4-millimeter deep, 20-millimeter diameter aluminum dish served as the cornea. It sat on top of an acrylic plate, which imitated the rest of the face. Mesh surrounded the dish to replicate the eyelashes.
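As a quick arithmetic check, the one-third rule implies a specific ideal lash length for any given eye width. A minimal sketch in Python, treating the 20-millimeter dish diameter as the eye width (the function name is illustrative, not from the study):

```python
def optimal_lash_length(eye_width_mm: float) -> float:
    """Ideal eyelash length per the study's one-third ratio."""
    return eye_width_mm / 3.0

# The wind-tunnel mimic used a 20-millimeter diameter dish as the cornea.
model_eye_mm = 20.0
print(round(optimal_lash_length(model_eye_mm), 1))  # prints 6.7 (millimeters)
```

Applied to the museum measurements, this same ratio is what held across 21 of the 22 species studied.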
They discovered the ideal ratio while varying the mesh length during evaporation and particle deposition studies.
“As short lashes grew longer, they reduced air flow, creating a layer of slow-moving air above the cornea,” says Hu, who is also a faculty member in the School of Biology. “This kept the eye moist for a longer time and kept particles away. The majority of air essentially hit the eyelashes and rolled away from the eye.”
The opposite process occurred with longer eyelashes: they extended further into the airflow, forming a cylinder that channeled air and dust particles toward the eye and led to faster evaporation.
Fake lashes
“This is why long, elegant, fake eyelashes aren’t ideal,” says Amador. “They may look good, but they’re not the best thing for the health of your eyes.”
There are exceptions, though. The research team notes that people who can’t grow eyelashes could wear fake ones, if they’re the correct length, for extra protection and to reduce dry eye.
“Even if they’re not the correct length, more eyelashes are always better than less,” says Alexander Alexeev, an associate professor in the School of Mechanical Engineering.
“If fake eyelashes are dense enough, they may give the same overall effect in protecting the eye even if they are longer than one-third.”
The team also says the findings could be used to create eyelash-inspired filaments to protect solar panels, photographic sensors, or autonomous robots in dusty environments.
The study appears in the Journal of the Royal Society Interface.
The National Science Foundation supported the work. Any opinions expressed in this article are those of the authors and do not necessarily reflect the official views of the sponsoring organizations.
Source: Georgia Tech
A new study investigates how the tobacco industry managed to raise the levels of tar and nicotine in “light” cigarettes for decades without a regulatory crackdown—despite mounting proof of health hazards.
The findings could shed light on the current controversy over electronic cigarettes.
Heavily marketed as a safer, healthier alternative to smoking, electronic cigarettes are under fire from California health officials, who have declared “vaping” a public health threat.
Creative marketing
“The e-cigarette industry has proven to be very creative in their product marketing,” says study coauthor Greta Hsu, associate professor at the University of California, Davis, Graduate School of Management.
“There’s a great deal of ambiguity about product content in the largely unregulated e-cigarette industry right now, and considerable debate over the safety, long-term risks, and effects of secondhand smoke exposure.
“At the same time, acceptance and popularity of e-cigarettes is rapidly growing, creating a market where consumers are vulnerable.”
E-cigarettes may be just one of many markets where firms strategically decouple the actual features of products from the features expected by their customers, researchers say.
“For example, as labels such as ‘low fat’ and ‘low sugar’ became increasingly taken-for-granted shortcuts for the notion of ‘healthy’ food, there is some evidence that companies increasingly manipulated underlying product characteristics to make them more palatable, such as adding more sugar or fat and adjusting the serving size to mask the increase,” says coauthor Stine Grodal, assistant professor at Boston University’s School of Management. “It’s scope creep, and it’s deceiving.”
Marketing ‘lights’
Starting in the early 1960s, in the face of increasing public scrutiny, US tobacco firms marketed “lights” as a new, safer type of cigarette due to their low tar and nicotine content. By the 1990s, a number of light brands exceeded their full-flavor counterparts in deliveries of both components.
Using evidence from tobacco firm internal documents, the study shows that consumers decreased their scrutiny of the tar and nicotine levels of light cigarette brands as they became increasingly familiar with the light category.
Tobacco firms in turn strategically used this lack of scrutiny to increase the tar and nicotine deliveries of both new and established light cigarette brands. It wasn’t until 2009 that the federal government finally stepped in to regulate tobacco products with the Family Smoking Prevention and Tobacco Control Act.
“While one may be tempted to regard cigarettes as an extreme case due to its links with addiction, we believe this kind of widespread manipulation of shared categorical understandings takes place in a variety of markets,” the authors write.
Researchers and the media have highlighted the potential for manipulation in several growing market categories, including organic produce, “green” products such as hybrid cars and energy-saving appliances, nontoxic beauty products, and sectors defined around craft techniques, personnel, or ingredients such as microbrews, wild fishing, and Greek yogurt.
“In such cases, without the presence of regulatory watchdogs setting and upholding clear standards, the opportunities for and likelihood of manipulations are expected to increase,” Hsu says. “One lesson is that monitoring must be done by a trusted source.”
The findings appear in the February issue of the American Sociological Review.
Source: UC Davis
Coastal communities in as many as 15 US states that depend on the nation’s $1 billion shellfish industry face long-term economic risk due to ocean acidification, experts warn.
A new study shows that vulnerable communities go far beyond the Pacific Northwest—long a primary focal point of attention and resources—and include Maine, the Chesapeake Bay, and the Louisiana bayou.
While Northern California has a strong economic dependency on shellfish and is highly exposed to ocean acidification, the area was not a large focus of the new report, published in Nature Climate Change.
Lucrative mollusks
Researchers say the area is less vulnerable than other coastal regions in the United States due to the state’s early efforts to address ocean acidification and climate change. Lack of economic data also made assessing the impact to California and other West Coast regions more difficult.
“Ocean acidification has already cost the oyster industry in the Pacific Northwest nearly $110 million and jeopardized about 3,200 jobs,” says Julia Ekstrom, who was lead author of the research while a scientist with Natural Resources Defense Council and is now director of the climate adaptation program at the Policy Institute for Energy, Environment and the Economy at University of California, Davis.
“Our research shows, for the first time, that many communities around the US face similar risks.”
Ocean acidification happens when oceans absorb growing amounts of carbon dioxide produced by burning fossil fuels. Acidic waters make it more difficult for creatures with calcium carbonate shells or skeletons, including mollusks, crabs, and corals, to grow shells and survive.
Mollusks, known to be particularly sensitive to ocean acidification, are among the most lucrative and sustainable fisheries in the United States.
Livelihoods at risk
For the study, researchers integrated physical, economic, and social data into an assessment of various regions’ overall vulnerability to ocean acidification. The findings show that risk factors are threefold:
- Physical: Local factors, such as nutrient pollution from agricultural runoff, amplify the global phenomenon of acidification.
- Economic: Industry factors, such as total revenues, influence the importance of shellfish to a community.
- Social: Low diversity of local employment decreases communities’ capacity to cope with change.
“Our analysis shows acidification will harm more than ocean creatures; it will have real impacts on people’s lives,” says Lisa Suatoni, NRDC Oceans Program senior scientist. “It will pinch pocketbooks, it will put livelihoods at risk, and it will alter the fabric of communities all across the country.”
Hot zones
Varying combinations of risk factors have created several unique “hot zones” around the country, researchers say:
- New England: Productive ports of downeast Maine and southern Massachusetts, where poorly buffered rivers run into cold New England waters, which are especially enriched in “acidifying” carbon dioxide.
- Mid-Atlantic: East Coast estuaries like Narragansett Bay, Chesapeake Bay, and Long Island Sound, where an abundance of nitrogen pollution exacerbates ocean acidification in shellfish-rich areas.
- Gulf of Mexico: Terrebonne and Plaquemines Parishes of Louisiana—and other communities in the Gulf of Mexico—where the shelled-mollusk industry is limited to oysters, giving the region fewer options for alternative, potentially more resilient mollusk fisheries in the short term.
- Pacific Northwest: The Oregon and Washington coasts and estuaries, where a potent combination of risk factors converge, including cold waters, upwelling currents that bring corrosive waters closer to the surface, corrosive rivers, and nutrient pollution from land runoff.
Of particular concern are the study’s findings that many of the most economically dependent regions are currently the least prepared to respond.
States such as Massachusetts, New Jersey, Virginia, and Louisiana have minimal research and monitoring for ocean acidification and little government support at federal or state levels to reduce their risk.
Since the current assessment focused on mollusks, it offers only one slice of overall vulnerability within the ecosystem. Researchers say the analysis should also be applied to a broader set of at-risk species, such as crabs and coral, and the services they provide.
While reducing global carbon emissions is the ultimate solution, the researchers say localized solutions can be implemented:
- Reduce local pollutants, such as agricultural runoff in the Chesapeake Bay.
- Diversify fishing fleets and invest in aquaculture of high-value shellfish species in southern Massachusetts (raising shellfish away from souring waters).
- Develop “early warning” systems for corrosive waters in the Pacific Northwest.
- Cultivate acidification-resistant strains of oysters in the Gulf of Mexico.
“There is plenty we can do to help these at-risk communities while protecting our environment,” Suatoni says. “Tailored action plans should be developed for each ocean acidification hot zone. The time to act is now.”
Source: UC Davis
Hispanics develop alcoholic liver disease between four and 12 years earlier than other races, a new study shows.
The findings suggest that something other than alcoholism may be at the root of alcoholic liver disease (ALD), a common cause of liver-disease death.
While previous research has indicated that Hispanics tend to have more severe ALD than other populations, the new study is believed to be the first to pinpoint racial and ethnic disparities in the ages at which symptoms first appear.
More aggressive counseling
“Clinicians typically evaluate older patients for liver disease when moderate or heavy alcohol use has been long-term,” says senior author Valentina Medici, associate professor of internal medicine at the University of California, Davis.
“We should be more aggressive in counseling patients about the importance of sobriety and testing them for ALD at younger ages, especially our Hispanic patients.”
For the study, published online in the journal Alcoholism: Clinical and Experimental Research, researchers looked at the records of nearly 800 patients diagnosed between 2002 and 2010 with one of the three progressive stages of ALD—alcoholic fatty liver, alcoholic hepatitis, or alcoholic cirrhosis.
They also assessed patients’ drinking patterns, laboratory data, body mass indexes, and additional health conditions such as metabolic syndrome and diabetes. Patients who had diseases such as hepatitis B or who were HIV-positive were excluded from the study to avoid potential confounding effects on ALD onset and severity.
What else is going on?
The most striking differences were in the average ages of onset of alcoholic fatty liver: 41 for Hispanic patients, 51 for whites/Caucasians, and 53 for African Americans. This is the first stage of ALD and the point at which intervention can be most successful at reversing liver damage.
The average ages of onset for alcoholic hepatitis were 41 for Hispanic patients, 47 for whites/Caucasians, and 48 for African Americans. For alcoholic cirrhosis, the average ages of onset were 49 for Hispanics, 53 for whites/Caucasians, and 54 for African Americans.
Hispanics who had end-stage ALD were also more likely to be obese and diabetic than white/Caucasian and African American patients.
“Our findings suggest that alcoholic liver disease is caused by more than chronic alcoholism,” says Charles Halsted, a study coauthor and professor emeritus of internal medicine.
“Future research should focus on genetic, metabolic, and environmental factors that may increase the susceptibility of Hispanics to this disease.”
The National Institutes of Health and the UC Davis Division of Gastroenterology and Hepatology funded the study.
Source: UC Davis
Humans and the cities we build are driving evolutionary changes in the creatures and plants around us faster than previously thought, new research shows.
The signs are small but striking: Spiders in cities are getting bigger. Salmon in rivers are getting smaller. Birds in urban areas are growing tamer and bolder, outcompeting their country cousins.
A new paper by Marina Alberti of the University of Washington College of Built Environments’ Urban Ecology Research Lab suggests that if human-driven evolutionary change affects the functioning of ecosystems—as evidence is showing—it “may have significant implications for ecological and human well-being.”
Until recently, it was assumed that evolutionary change would take too long to affect ecological processes quite so immediately, says Alberti, a professor of urban design and planning.
“We now have evidence that there is rapid evolution. These changes may affect the state of the environment now. This is what’s called eco-evolutionary feedback.
“Cities are not simply affecting biodiversity by reducing the number and variety of species that live in urban habitats,” Alberti says.
‘Human signatures’
Humans in cities are causing organisms to undergo accelerated evolutionary changes “that have effects on ecosystem functions such as biodiversity, nutrient cycling, seed dispersal, detoxification, food production, and ultimately on human health and well-being.”
In the paper, published in Trends in Ecology & Evolution, Alberti systematically reviews evidence of “human signatures,” or documented examples of human-caused trait changes in fish, birds, mammals, and plants, and their effects on ecosystem function.
In addition to the shrinking salmon, she cites earthworms with increased tolerance to metals, seeds of some plants that disperse less effectively, and a type of urban mouse that is a “critical host” for the ticks that carry Lyme disease, leading to spikes in human exposure to the illness.
Songbirds are becoming tamer and bolder and also are changing their tunes to ensure their acoustic signals are not lost in the noisy urban background. European blackbirds are becoming sedentary and have changed their migratory behavior in response to urbanization.
Humans in cities cause these changes through a variety of ways, Alberti says. Our urbanization alters and breaks up natural vegetation patterns, introduces toxic pollutants and novel disturbances such as noise and light, and increases the temperature. Human presence also changes the availability of resources such as food and water, altering the life cycle of many species.
Alberti says the emerging evidence prompts serious questions with implications for the focus and design of future studies:
- Can global rapid urbanization affect the course of Earth’s evolution?
- Is urbanization moving the planet closer to an environmental tipping point on the scale of the Great Oxidation Event that introduced oxygen into the atmosphere more than 2 billion years ago?
- Might different patterns of urbanization alter the effect of human action on eco-evolution?
Still, Alberti says hers is not a “catastrophic” perspective, but one that highlights both the challenges and the unique opportunity that humans have in shaping the evolution of planet Earth.
Ecosystems in urban environments are a sort of hybrid, she says: “It is their hybrid nature that makes them unstable, but also capable of innovating.” She explores the theme further in a book to be published in spring 2016, titled Cities as Hybrid Ecosystems.
“We can drive urbanizing ecosystems to collapse—or we can consciously steer them toward a resilient and sustainable future,” Alberti says. “The question is whether we become aware of the role we are playing.”
Source: University of Washington
The best way to protect kids from peanut allergies may be to feed them foods that contain peanuts when they’re babies. But will wary parents trust that advice?
Peanut allergies, one of the most common food allergies, have more than doubled in the UK and North America over the past 10 years.
The allergy, which develops early in life, can cause anaphylaxis, a severe and potentially life-threatening allergic reaction. Peanut allergies are rarely outgrown, and there is currently no cure.

80% reduction
For the Learning Early About Peanut Allergy (LEAP) study, researchers enrolled 640 children aged four to 11 months who were considered at high risk of developing peanut allergy due to pre-existing severe eczema and/or egg allergy.
Half of the children ate peanut-containing foods three or more times a week and the other half avoided eating peanut products until five years of age.
The findings show that less than 1 percent of children who consumed peanut and completed the study developed peanut allergy by five years of age compared to 17.3 percent in the avoidance group.
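The trial’s headline 80 percent figure follows directly from the two intention-to-treat prevalences reported for the study (3.2 percent allergy with peanut consumption versus 17.2 percent with avoidance). A minimal sketch of that arithmetic, purely illustrative and not code from the study:

```python
# Relative risk reduction from the LEAP trial's intention-to-treat
# prevalences as quoted in the article (illustrative arithmetic only).
consumption_prevalence = 0.032  # 3.2% developed peanut allergy while eating peanut
avoidance_prevalence = 0.172    # 17.2% developed peanut allergy while avoiding it

relative_reduction = 1 - consumption_prevalence / avoidance_prevalence
print(f"Relative reduction: {relative_reduction:.1%}")  # prints "Relative reduction: 81.4%"
```

The exact ratio works out to about 81 percent, which the researchers report rounded as an 80 percent reduction.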
The overall prevalence of allergy among all children asked to consume peanut, including those participants who were unable to tolerate peanut consumption, was 3.2 percent versus 17.2 percent in the avoidance group: an 80 percent reduction in the prevalence of peanut allergy.

Bad advice?
“For many years, guidelines and pediatricians have recommended that infants avoid peanut,” says Graham Roberts, professor of allergy and respiratory medicine at the University of Southampton.
“However, this study shows that early, sustained consumption of peanut is safe and results in a substantial and significant reduction in the development of peanut allergy in high-risk infants by five years of age. As a result, this questions whether children should be deliberately avoiding peanut in the first year of life to prevent allergy.”
“This is an important clinical development and contravenes previous guidelines,” says Gideon Lack, head of the department of pediatric allergy at King’s College London and lead author of the study published in the New England Journal of Medicine.
“Whilst these were withdrawn in 2008 in the UK and US, our study suggests that new guidelines may be needed to reduce the rate of peanut allergy in our children.”
Because the study excludes infants showing early strong signs of having already developed peanut allergy, the safety and effectiveness of early peanut consumption in this group remains unknown and requires further study, the researchers say.
“Parents of infants and young children with eczema and/or egg allergy should consult with an allergist, pediatrician, or their GP prior to feeding them peanut products,” Lack says.
“The next stage of our work, the LEAP-On study, will continue to monitor those children who consumed peanut to see if they remain protected against allergy even if they stop consuming peanut for 12 months. The LEAP-On study will help establish if the protection provided against the development of peanut allergy is sustained and not dependent on ongoing peanut ingestion.”
Source: University of Southampton
Most people find the idea of asexuality, a life free of sexual attraction, baffling at first. But that’s changing, according to the editor of a new book on the topic.
“Society has normalized certain levels of sexual desire while pathologizing others,” says Karli Cerankowski, a lecturer in Stanford University’s Program in Writing and Rhetoric.
“In a sense, it’s the social model that’s broken, not asexuals.”
Although sex and sexuality are central, prized aspects of our culture, Cerankowski says that “if we recognize the diversity of human sexuality, then we can understand that there are some people who just don’t experience sexual attraction or have a lower sex drive or have less sex, and that doesn’t mean there is something wrong with them.”
Cerankowski and her co-editor, Megan Milks, recently published Asexualities: Feminist and Queer Perspectives (Routledge 2014), the first collection of essays on asexuality—and the second book ever to be written on the topic.
Contributors from a variety of disciplines pursue the subject through scientific, sexological, psychoanalytical, and political models.

Pleasure without sex
Cerankowski’s own research reveals that people are capable of obtaining just as much contentment from other areas of life, and complete gratification in life doesn’t necessarily include sexual gratification.
“We sort of prioritize sexual pleasure and sexual fulfillment in our lives, but we can think about the other ways that people experience intense pleasure, like when listening to music,” Cerankowski says.
Cerankowski’s studies of asexuality found their home under the expansive umbrella of queer and sexuality studies, which she says assists in the acceptance of asexuality as a legitimate sexual orientation.
In Cerankowski’s dissertation on the ways asexuality is misunderstood in American culture, she traces “the history of the creation of sexual categories” through an extensive study of text and media from pop culture as well as historical works, including collections of sexology texts in the Stanford University Libraries. She received her PhD from Stanford’s Program in Modern Thought and Literature last year.

A ‘vast spectrum’
The prefix “a-” means “not,” and so some might mistakenly assume that asexual must mean “not sexual” and that all asexual people are entirely uninterested in sex and love in any capacity.
But, as Cerankowski points out, the plural in the book’s title is no accident. They use the plural to evoke the intricacies involved in the vast spectrum of asexuality, to be compatible with the “more commonly understood model of fluid and multiple sexualities.”
As Cerankowski has found, studying and thinking about asexuality brings up broader implications of what pleasure means to humans.
In one scenario, an asexual person might be married, living with a partner, and having regular intercourse. This person might be a romantic asexual, meaning someone who experiences strong, intimate, and romantic feelings for another person but engages in sexual behavior only for procreative purposes or as a means of experiencing intimacy.
Another scenario might involve an a-romantic asexual, who is completely uninterested in romantic attachment or sexual encounters, but finds satisfaction in other arenas of life. And, to debunk a common myth about sexuality, this a-romantic, asexual person is not necessarily any less fulfilled than a person with romantic and sexual drives.

Rights and recognition
Cerankowski’s work raises the question: why is now the ideal time for recognition of the asexual community and of asexuality as an orientation? Cerankowski points to the recent evolution of asexuality acceptance as the next natural step in equal rights.
Much as homosexuality was once routinely pathologized by the public, the asexual community now faces similar resistance.
“There’s a whole history we’re building upon with feminist movements, with queer movements and LGBT politics that have really established a ground on which people can think about sexuality in different ways,” says Cerankowski. “Asexuality seems like the next frontier for that reframing of sexuality.”
Cerankowski points to the countless forums, blogs, and YouTube channels that provide platforms for open discussion of the topic.
While Cerankowski’s research has done much to shed light on asexuality, she says there’s still much more to be understood: “What I imagine being the next step for my research would be to look through some of those medical and sexological histories and trace a kind of genealogy.”
Source: Leah Stark for Stanford University
If most of our genes are the same as those of chimpanzees, how’d we humans evolve such big brains?
New research shows it’s possible to pick out key changes in the genetic code between chimpanzees and humans and then visualize their respective contributions to early brain development by using mouse embryos.
The team found that humans carry tiny differences in a particular regulator of gene activity, dubbed HARE5, that, when introduced into a mouse embryo, led to a brain 12 percent bigger than in embryos treated with the HARE5 sequence from chimpanzees.
The findings, appearing online in Current Biology, may lend insight into not only what makes the human brain special but also why people get some diseases, such as autism and Alzheimer’s disease, whereas chimpanzees don’t.
“I think we’ve just scratched the surface, in terms of what we can gain from this sort of study,” says Debra Silver, an assistant professor of molecular genetics and microbiology in the Duke University Medical School. “There are some other really compelling candidates that we found that may also lead us to a better understanding of the uniqueness of the human brain.”

Tracking down ‘enhancers’
Every genome contains many thousands of short bits of DNA called “enhancers,” whose role is to control the activity of genes. Some of these are unique to humans. Some are active in specific tissues. But none of the human-specific enhancers previously had been shown to influence brain anatomy directly.
In the new study, researchers mined databases of genomic data from humans and chimpanzees to find enhancers expressed primarily in brain tissue and early in development. They prioritized enhancers that differed markedly between the two species.
The group’s initial screen turned up 106 candidates, six of them near genes that are believed to be involved in brain development. The group named these “human-accelerated regulatory enhancers,” HARE1 through HARE6.
HARE5 emerged as the strongest candidate because of its chromosomal location near a gene called Frizzled8, which is part of a well-known molecular pathway implicated in brain development and disease. The group decided to focus on HARE5 and then showed that it was likely an enhancer for Frizzled8 because the two DNA sequences made physical contact in brain tissue.
The human HARE5 and the chimpanzee HARE5 sequences differ by only 16 letters in their genetic code. Yet, in mouse embryos, the researchers found that the human enhancer was active earlier in development and more active overall than the chimpanzee enhancer.

Bigger brains
“What’s really exciting about this was that the activity differences were detected at a critical time in brain development: when neural progenitor cells are proliferating and expanding in number, just prior to producing neurons,” Silver says.
The researchers found that in the mouse embryos equipped with Frizzled8 under control of human HARE5, progenitor cells destined to become neurons proliferated faster compared with the chimp HARE5 mice, ultimately leading to more neurons.
As the mouse embryos neared the end of gestation, their brain size differences became noticeable to the naked eye. Graduate student Lomax Boyd started dissecting the brains and looking at them under a microscope.
“After he started taking pictures, we took a ruler to the monitor. Although we were blind to what the genotype was, we started noticing a trend,” Silver says.
All told, human HARE5 mice had brains 12 percent larger in area compared with chimpanzee HARE5 mice. The affected region was the neocortex, which is involved in higher-level functions such as language and reasoning.
Producing a short list of strong candidates was in itself a feat, accomplished by applying the right filters to analysis of human and chimpanzee genomes, says coauthor Gregory Wray, professor of biology and director of the Duke Center for Genomic and Computational Biology.
“Many others have tried this and failed,” Wray says. “We’ve known other people who have looked at genes involved in brain size evolution, tested them out, and done the same kinds of experiments we’ve done and come up dry.”
The team plans to study the human HARE5 and chimp HARE5 mice into adulthood, for possible differences in brain structure and behavior. The group also hopes to explore the role of the other HARE sequences in brain development.
“What we found is a piece of the genetic basis for why we have a bigger brain,” Wray says. “It really shows in sharp relief just how complicated those changes must have been. This is probably only one piece—a little piece.”
The Duke Institute for Brain Sciences, the National Institutes of Health, and the National Science Foundation supported the work.
Source: Duke University