The US economy will continue its steady climb upward over the next two years, adding about 5 million jobs, economists say.
“Washington inflicted quite a bit of short-term damage to the US economy in 2013,” says Daniil Manaenkov of the Research Seminar in Quantitative Economics in the department of economics at the University of Michigan.
“Despite a seemingly heavy burden of domestic fiscal austerity and monetary slip-ups, the US economy has demonstrated remarkable resilience. This makes us hopeful that once fiscal headwinds abate, we will see a meaningful acceleration of GDP and payroll job gains.”
In their annual forecast of the US economy, Manaenkov and colleague Matthew Hall predict the creation of more than 5 million jobs over the next two years—2.5 million jobs next year and another 2.8 million during 2015, as unemployment falls from 7.1 percent to about 6 percent during that time.
Overall economic output growth (as measured by real Gross Domestic Product) will ramp up from this year’s rate of 1.7 percent to 2.7 percent in 2014 and 3.1 percent in 2015—the first annual reading above 3 percent since 2005.
In addition to solid growth in GDP and employment over the next two years, the forecast calls for a steadily recovering housing market. The construction of new homes, both single-family and multi-unit housing, will continue to rise from 920,000 units this year to about 1.2 million next year and nearly 1.5 million the year after.
Housing sales up, too
Sales of existing single-family homes are expected to rise from about 4.6 million this year to 4.9 million in 2014 and 5.2 million in 2015.
“The housing sector is expected to be a solid contributor to growth in the next two years, reflecting an anticipated increase in household formation, as well as rising incomes,” Manaenkov says. “We expect that with affordable rates and an improved payroll outlook, housing construction will accelerate next year.”
Conventional mortgage rates will edge upward during the forecast period from this year’s 4 percent average rate to 4.6 percent in 2014 and 5 percent the year after.
Other interest rates will remain relatively moderate, as well. The 10-year Treasury note will creep up from 2.3 percent this year to 3 percent next year and 3.5 percent in 2015, while the three-month Treasury bill rate will hold steady at 0.1 percent in 2014 and 0.3 percent the following year.
Core inflation will remain below 2 percent over the next two years, oil prices will hold firm around $95 per barrel through 2015, and light-vehicle sales will steadily rise from 15.5 million units this year to 16 million next year and 16.3 million in 2015.
“Strength in domestically produced vehicles has helped drive the sales recovery since 2009,” Hall says. “And pent-up demand remains strong. The average age of vehicles on the road has continued to rise, and younger drivers, who are more likely to be unemployed or debt-constrained, have been underrepresented in the market to date.
“Together, these facts suggest further growth in vehicle sales in the coming year, as unemployment continues to fall.”
The forecast is based on the Michigan Quarterly Econometric Model of the US Economy.
Source: University of Michigan
The key to reducing hospital admissions may be focusing on the whole patient, rather than a specific condition that caused the hospitalization in the first place, researchers say.
Checking back into the hospital within 30 days of discharge is not only bad news for patients. It’s also bad for hospitals, which now face financial penalties for high readmissions.
For a new study published in the British Medical Journal, researchers found that top-performing hospitals—those with the lowest 30-day readmission rates—had fewer readmissions across all diagnoses and time periods after discharge than lower-performing hospitals.
“Our findings suggest that hospitals may best achieve low rates of readmission by employing strategies that lower readmission risk globally rather than for specific diagnoses or time periods after hospitalization,” says lead author Kumar Dharmarajan, a visiting scholar at the Center for Outcomes Research and Evaluation at Yale University School of Medicine and a cardiology fellow at Columbia University Medical Center.
Despite the increased national focus on reducing hospital readmissions, it has not been clear whether hospitals with the lowest readmission rates have been especially good at reducing readmissions from specific diagnoses and time periods after hospitalization, or have instead lowered readmissions more generally.
600,000 readmissions
To find out, Dharmarajan and colleagues studied over 4,000 hospitals in the United States caring for older patients hospitalized with heart attacks, heart failure, or pneumonia from 2007 through 2009. The authors examined over 600,000 readmissions occurring within 30 days of hospitalization.
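To make the study’s headline measure concrete, here is a minimal sketch of how a 30-day readmission flag can be computed from discharge records. The table, column names, and values are hypothetical illustrations, not the study’s actual data or code.

```python
import pandas as pd

# Hypothetical discharge records: one row per hospital stay.
admissions = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3],
    "hospital":   ["A", "A", "A", "B", "B"],
    "admit_date": pd.to_datetime(
        ["2009-01-05", "2009-01-20", "2009-02-01", "2009-03-10", "2009-05-01"]),
    "discharge_date": pd.to_datetime(
        ["2009-01-10", "2009-01-25", "2009-02-06", "2009-03-15", "2009-05-06"]),
})

admissions = admissions.sort_values(["patient_id", "admit_date"])
# Days from each discharge to the same patient's next admission.
next_admit = admissions.groupby("patient_id")["admit_date"].shift(-1)
gap_days = (next_admit - admissions["discharge_date"]).dt.days
admissions["readmit_30d"] = gap_days.between(0, 30)

# Share of stays followed by a readmission within 30 days, per hospital.
print(admissions.groupby("hospital")["readmit_30d"].mean())
```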
Readmission diagnoses and timing were similar regardless of a hospital’s 30-day readmission rates. High performing hospitals had fewer readmissions across all diagnostic categories and time periods after discharge.
“Earlier data show that patients are readmitted for a broad range of conditions. We have found empirically that hospitals with the lowest readmission rates have reduced readmissions across the board,” Dharmarajan says.
“This study suggests that the path to excellence in readmission is a result of an approach that focuses on the patient as a whole rather than on what caused them to be admitted,” says Harlan Krumholz, professor of medicine and professor of investigative medicine and of public health.
“And this study adds emphasis to the idea that patients are susceptible to a wide range of conditions after a hospitalization—they are a highly vulnerable population and we need to focus intently on making the immediate post-discharge period safer.”
The National Heart, Lung, and Blood Institute funded the study.
Source: Yale University
Clues from a new dating technique that utilizes ancient clam shells suggest that 3,000 to 5,000 years ago Greenland’s ice sheet was the smallest it has been in the past 10,000 years.
“What’s really interesting about this is that on land, the atmosphere was warmest between 9,000 and 5,000 years ago, maybe as late as 4,000 years ago. The oceans, on the other hand, were warmest between 5,000 to 3,000 years ago,” says Jason Briner, University at Buffalo associate professor of geology, who led the study.
“What it tells us is that the ice sheets might really respond to ocean temperatures,” he says. “It’s a clue to what might happen in the future as the Earth continues to warm.”
The findings appeared online in the journal Geology.
The study is important not only for illuminating the history of Greenland’s ice sheet, but for providing geologists with an important new tool—a method of using Arctic fossils to deduce when glaciers were smaller than they are today.
Scientists have many techniques for figuring out when ice sheets were larger, but few for the opposite scenario.
“Traditional approaches have a difficult time identifying when ice sheets were smaller,” Briner says. “The outcome of our work is that we now have a tool that allows us to see how the ice sheet responded to past times that were as warm or warmer than present—times analogous to today and the near future.”
Ice sheets are like bulldozers
The technique the scientists developed involves dating fossils in piles of debris found at the edge of glaciers. Growing ice sheets are like bulldozers, pushing rocks, boulders, and other detritus into heaps of rubble called moraines.
Because glaciers only do this plowing when they’re getting bigger, logic dictates that rocks or fossils found in a moraine must have been scooped up at an earlier time, when the associated glacier was smaller than it is today.
So if a moraine contains fossils from 3,000 years ago, that means the glacier was growing—and smaller than it is today—3,000 years ago.
This is exactly what the scientists saw in Greenland. They looked at 250 ancient clams from moraines in three western regions, and discovered that most of the fossils were between 3,000 and 5,000 years old. The finding suggests that this was the period when the ice sheet’s western extent was at its smallest in recent history, Briner says.
Dating ancient clams
“Because we see the most shells dating to the 5,000- to 3,000-year period, we think that this is when the most land was ice-free, when large layers of mud and fossils were allowed to accumulate before the glacier came and bulldozed them up,” he says.
Because radiocarbon dating is expensive, Briner and his colleagues found another way to trace the age of their fossils. Their solution was to look at the structure of amino acids—the building blocks of proteins—in the fossils of ancient clams.
Amino acids come in two orientations that are mirror images of each other, known as D and L, and living organisms generally keep their amino acids in an L configuration. When organisms die, however, the amino acids begin to flip. In dead clams, for example, L forms of aspartic acid start turning into D’s.
Because this shift takes place slowly over time, the ratio of D’s to L’s in a fossil is a giveaway of its age. Knowing this, Briner’s research team matched D and L ratios in 20 Arctic clamshells to their radiocarbon-dated ages to generate a scale showing which ratios corresponded with which ages.
The researchers then looked at the D and L ratios of aspartic acid in the 250 Greenland clamshells to come up with the fossils’ ages.
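As a rough illustration of how such a calibration can work, the sketch below fits a simple curve to hypothetical D/L ratios paired with radiocarbon ages and then applies it to undated shells. The numbers and the transformed-ratio model are assumptions for illustration only; actual racemization kinetics, and the team’s calibration, are more involved.

```python
import numpy as np

# Hypothetical calibration set: D/L aspartic-acid ratios paired with
# radiocarbon ages (the study used 20 radiocarbon-dated shells).
dl_ratio = np.array([0.05, 0.08, 0.12, 0.15, 0.20])  # measured D/L ratios
age_yr = np.array([900, 1800, 3100, 4200, 6000])     # 14C ages in years

# Simplified model: assume age is roughly linear in a standard
# racemization transform of the D/L ratio.
transform = np.log((1 + dl_ratio) / (1 - dl_ratio))
slope, intercept = np.polyfit(transform, age_yr, 1)

def estimate_age(dl):
    """Estimate a fossil's age (years) from its measured D/L ratio."""
    return slope * np.log((1 + dl) / (1 - dl)) + intercept

# Apply the calibration to new, undated shells.
print(estimate_age(np.array([0.10, 0.17])))
```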
Amino acid dating is not new, but applying it to the study of glaciers could help scientists better understand the history of ice—and climate change—on Earth.
The National Geographic Society and US National Science Foundation funded the project.
Source: University at Buffalo
As scientists continue to debate how organic solar cells convert sunlight into electricity, a recent study suggests the predominant working theory is incorrect.
The findings, published in the journal Nature Materials, could steer future efforts to design materials that boost the performance of organic cells.
“We know that organic photovoltaics are very good,” says study coauthor Michael McGehee, a professor of materials science and engineering at Stanford University. “The question is, why are they so good? The answer is controversial.”
What causes the split?
A typical organic solar cell consists of two semiconducting layers made of plastic polymers and other flexible materials. The cell generates electricity by absorbing particles of light, or photons.
When the cell absorbs light, a photon knocks out an electron in a polymer atom, leaving behind an empty space, which scientists refer to as a hole. The electron and the hole immediately form a bonded pair called an exciton.
The exciton splits, allowing the electron to move independently to a hole created by another absorbed photon. This continuous movement of electrons from hole to hole produces an electric current.
In the study, the team addressed a long-standing debate over what causes the exciton to split.
“To generate a current, you have to separate the electron and the hole,” says senior author Alberto Salleo, an associate professor of materials science and engineering at Stanford. “That requires two different semiconducting materials.
“If the electron is attracted to material B more than material A, it drops into material B. In theory, the electron should remain bound to the hole even after it drops.
“The fundamental question that’s been around a long time is, how does this bound state split?”
The hot effect
One explanation widely accepted by scientists is known as the “hot exciton effect.” The idea is that the electron carries extra energy when it drops from material A to material B. That added energy gives the excited (“hot”) electron enough velocity to escape from the hole.
But that hypothesis did not stand up to experimental tests, according to the team.
“In our study, we found that the hot exciton effect does not exist,” Salleo says. “We measured optical emissions from the semiconducting materials and found that extra energy is not required to split an exciton.”
So what actually causes electron-hole pairs to separate?
“We haven’t really answered that question yet,” Salleo says. “We have a few hints. We think that the disordered arrangement of the plastic polymers in the semiconductor might help the electron get away.”
In a recent study, Salleo discovered that disorder at the molecular level actually improves the performance of semiconducting polymers in solar cells. By focusing on the inherent disorder of plastic polymers, researchers could design new materials that draw electrons away from the solar cell interface where the two semiconducting layers meet, he adds.
“In organic solar cells, the interface is always more disordered than the area further away,” Salleo explains. “That creates a natural gradient that sucks the electron from the disordered regions into the ordered regions.”
More efficient
The solar cells used in the experiment have an energy-conversion efficiency of about 9 percent. The Stanford team hopes to improve that performance by designing semiconductors that take advantage of the interplay between order and disorder.
“To make a better organic solar cell, people have been looking for materials that would give you a stronger hot exciton effect,” Salleo says. “They should instead try to figure out how the electron gets away without it being hot. This idea is pretty controversial. It’s a fundamental shift in the way people think about photocurrent generation.”
Contributors include researchers from the University of Potsdam; the Institute for Applied Photophysics; the University of California, Berkeley; the King Abdullah University of Science and Technology; the Colorado School of Mines; and the University of Oxford.
The Stanford Center for Advanced Molecular Photovoltaics and the US Department of Energy supported the work.
Source: Stanford University
A cave system in southeastern Arizona is home to a diversity of microorganisms that rivals microbial communities on the earth’s surface.
Kartchner Caverns is known for its untouched formations, sculpted over millennia by groundwater dissolving the bedrock and carving out underground rooms and passages.
“We discovered all the major players that make up a typical ecosystem,” says Julie Neilson, associate research scientist at the University of Arizona. “From producers to consumers, they’re all there, just not visible to the naked eye.”
For the study published in the International Society for Microbial Ecology Journal, Neilson and colleagues swabbed stalactites and other cave formations for DNA analysis. Based on the genes they found in their samples, they identified the bacteria and archaea—single-celled microorganisms that lack a cell nucleus—living in the cave.
“We didn’t expect to find such a thriving ecosystem feasting on the scraps dripping in from the world above,” Neilson says. “What is most interesting is that what we found mirrors the desert above: an extreme environment starved for nutrients, yet flourishing with organisms that have adapted in very unique ways to this type of habitat.”
In the absence of light, bacteria live off water runoff dripping into the cave through cracks in the overlying rock and harvest the energy locked in compounds leaching out from decaying organic matter in the soils above and minerals dissolved within the rock fissures.
“Kartchner is unique because it is a cave in a desert ecosystem,” Neilson says. “It’s not like the caves in temperate areas such as in Kentucky or West Virginia, where the surface has forests, rivers, and soil with thick organic layers, providing abundant organic carbon. Kartchner has about a thousand times less carbon coming in with the drip water.
“The cave microbes make a living off the extremely limited nutrients that are available,” she says. “Instead of relying on organic carbon, which is a very scarce resource in the cave, they use the energy in nitrogen-containing compounds like ammonia and nitrite to convert carbon dioxide from the air into biomass.”
The researchers found evidence of cave microbes engaging in all six known pathways that organisms use to fix carbon from the atmosphere to make food and structural material.
Rocks for food
Although the nitrogen-driven pathway is probably the most dominant in the cave, there might be others. Some microbes even eat rock—deriving energy from chemicals such as manganese or pyrite.
The team expected to find the overall microbial diversity in the cave to be only a fraction of that found in the soil on the surface, says Raina Maier, professor in the department of soil, water, and environmental science and a member of the BIO5 Institute.
“We expected the surface community to be many times more diverse than the cave. Instead, we found the cave is about half as diverse as the terrestrial environment where there is sunlight and soil and vegetation.
“At the same time, the two ecosystems share only 16 percent of the microbial species. In other words, there is a difference of 84 percent between the two, which is amazing.”
Previous studies had shown that, to the cave microbes, the stalactite they live on is like an island: Restricted to the stalactite they happen to be on, there appears to be little mixing between populations, resulting in different assemblages from one cave formation to another.
Rod Wing, professor in the department of plant sciences and director of the Arizona Genomics Institute at BIO5, helped Neilson and colleagues analyze the DNA swabbed from the cave formations.
Barely enough DNA
“When you work in extreme starving environments, you barely get enough DNA,” Neilson says. “In some of our samples we got about half of what is considered the minimum amount for DNA sequence analysis. But, we said, let’s just try it.”
She says technicians in Wing’s lab “managed to get us a data set even from the dry rock, where there is no drip water and where there are very few microbes living to begin with.”
In addition to encountering all the major players that make up a complex food web in the cave, the scientists discovered what likely are microbes yet unknown to science.
“Twenty percent of the bacteria whose presence we inferred based on the DNA sequences were not similar enough to anything in the database for us to be able to identify them,” Neilson says.
“On one stalactite, we found a rare organism in a microbial group called SBR1093 that comprised about 10 percent of the population on that stalactite, but it represented less than 0.5 percent of the microbes on any of the others.”
Nobody has been able to culture that organism in the lab, and its DNA sequence has only ever been found three times in history: in a stromatolite—a special type of sedimentary rock involving microbial communities—in the hypersaline waters of Shark Bay in Australia; in a site contaminated with hydrocarbons in France; and in a sewage treatment plant in Brisbane, Australia.
Beyond Earth
“This suggests there are many microbes out there in the world that we know almost nothing about,” she says. “The fact that these organisms showed up in contaminated soil could mean they might have potential for applications such as environmental remediation.
“The most abundant microbe that we found in our taxonomic survey was closely related to a microbe that produces erythromycin, an antibiotic.
“That is not what it is doing in the cave, but it shows you that not only is there a potential to find microbes that are new to science, but studying them in those extreme and poorly studied environments could lead to new applications.”
The implications of the research reach far beyond Kartchner Caverns, as far as other planets, Neilson says.
“There is a lot we have to learn about microbes and how they control processes of global importance, and by studying microbes in extreme ecosystems such as Kartchner Caverns or in the Atacama Desert in Chile, it helps us study some of the capabilities we don’t yet understand in rich ecosystems here on the surface. It shows the flexibility of microbes. They have conquered every niche on the planet.”
“When you think about exploring Mars,” Maier says, “and you look at all those clever strategies that microbes have evolved and tweaked over the past 4 billion years, I wouldn’t be surprised if we found them elsewhere if we just keep looking.”
The National Science Foundation Microbial Observatory helped fund the project.
Source: University of Arizona
There are big debates over the best teaching strategies, but in reality, improving education is not as simple as choosing one technique over another.
Scholars scoured the educational research landscape and found that because improved learning depends on many different factors, there are actually more than 205 trillion instructional options available.
Published in the journal Science, the study breaks down exactly how complicated improving education really is when considering the combination of different dimensions—spacing of practice, studying examples or practicing procedures, to name a few—with variations in ideal dosage and in student needs as they learn.
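The scale of that number comes straight from combinatorics: independent choices multiply. The sketch below shows one breakdown that reproduces the order of magnitude; the specific counts (30 dimensions, 3 settings each) are an assumption chosen for illustration, not necessarily the paper’s exact derivation.

```python
# Instructional options multiply across independent dimensions.
# Assumed breakdown (illustrative): 30 dimensions, 3 settings each.
n_dimensions = 30  # e.g., spacing, examples vs. practice, feedback timing
n_settings = 3     # e.g., low / medium / high dosage

options = n_settings ** n_dimensions
print(f"{options:,}")    # 205,891,132,094,649
print(f"{options:.2e}")  # ~2.06e+14, i.e., roughly 205 trillion
```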
The researchers offer a fresh perspective on educational research by focusing on conclusive approaches that truly affect classroom learning.
“There are not just two ways to teach, as our education debates often seem to indicate,” says lead author Ken Koedinger, professor of human-computer interaction at Carnegie Mellon and director of the Pittsburgh Science of Learning Center (PSLC).
“There are trillions of possible ways to teach. Part of the instructional complexity challenge is that education is not ‘one size fits all,’ and optimal forms of instruction depend on details, such as how much a learner already knows and whether a fact, concept, or thinking skill is being targeted.”
Too many possibilities
For the paper, Koedinger, Julie Booth, and David Klahr investigated existing education research to show that the space is too vast, with too many possibilities for simple studies to determine what techniques will work for which students at different learning points.
“As learning researchers, we get frustrated when our work doesn’t seem to make an impact on the education system,” says Booth, assistant professor of educational psychology at Temple who received her PhD in psychology from Carnegie Mellon.
“But much of the work on these learning principles has been conducted in laboratory settings. We need to shift our focus to determine when and for whom these techniques work in real-world classrooms.”
To tame instructional complexity and maximize the potential of improving research behind educational practice and student learning, the researchers offer five recommendations:
- Because trying more than 205 trillion educational options to find out what works best is impossible, research should focus on how different forms of instruction meet different functional needs, such as which methods are best for learning to remember facts, which are best for learning to induce general skills, and which are best for learning to make sense of concepts and principles.
- More experiments are needed to determine how different instructional techniques enhance different learning functions. For example, the optimal way to memorize facts may be a poor way to learn to induce general skills.
- Take advantage of educational technology to further understand how people learn and which instructional dimensions can or cannot be treated independently by conducting massive online studies, which use thousands of students and test hundreds of variations of instruction at the same time.
- To understand impact, build a national data infrastructure in which data collected on a moment-by-moment basis (i.e., cognitive tutors tracking daily how a student learns algebra over a school year) can be linked with longer-term results, such as state exams and performances in a next class.
- Create more permanent school and research partnerships to facilitate interaction among educators, administrators, and researchers. For example, the PSLC, funded by the National Science Foundation (NSF), gives teachers immediate feedback and allows researchers to explore only relevant theories.
“These recommendations are just one of the many steps needed to nail down what’s necessary to really improve education and to expand our knowledge of how students learn and how to best teach them,” says Klahr, professor of psychology at Carnegie Mellon.
The NSF and US Department of Education funded the research.
Source: Carnegie Mellon University
A fun work environment may be good for employee retention, but not for a company’s bottom line.
While fun may help keep employees from quitting, a new study suggests it can also reduce productivity, which negatively affects sales performance.
“In the hospitality industry, employee turnover is notoriously high because restaurant jobs are highly substitutable—if you don’t like your job at Chili’s you can go to TGI Friday’s down the street,” says Michael J. Tews, assistant professor of hospitality management at Penn State.
“High employee turnover is consistently quoted as being one of the problems that keeps managers up at night because if you’re involved with recruiting and training constantly, then you can’t focus on effectively managing your existing staff and providing a high-quality service experience.”
For a new study, researchers surveyed 195 restaurant servers from a casual-theme restaurant chain in the United States. The survey included items related to different aspects of fun at work, including “fun activities” and “manager support for fun.” The survey responses were then compared to sales performance and turnover data.
All about fun
In the survey, questions related to “fun activities” focused on social events, such as holiday parties and picnics; team building activities, such as company-sponsored athletic teams; competitions, such as sales contests; public celebrations of work achievements; and recognition of personal milestones, such as birthdays and weddings.
Examples of survey items related to “manager support for fun” included asking participants to rate the extent to which they agreed with statements such as “My managers care about employees having fun on the job” and “My managers try to make working here fun.”
“Manager support for fun” does not necessarily align with “fun activities,” Tews says. For example, “fun activities” may be created by upper-level managers or even by staff members and may or may not be supported by local managers.
Published in the November issue of Cornell Hospitality Quarterly, the research yielded three key findings:
- Manager support for fun lowers turnover, particularly among younger employees.
- Fun activities increase sales performance, particularly among older employees.
- Manager support for fun lowers sales performance irrespective of age.
“The question becomes, is the productivity loss associated with manager support for fun worth the significant reduction in employee turnover?” Tews says.
“We think if you have both manager support for fun and fun activities, the dip you see in productivity as a result of manager support for fun may be canceled out by the increase in productivity you see as a result of fun activities. In this scenario, you also see the greatest reduction in employee turnover.
“For younger employees, a manager allowing them to have fun on the job is important because fun leads to the development of friendships,” Tews says. “As you mature and get a little older, while it is still good to have cordial work relationships, friendships at work are less important because you have other interests, such as family interests.
“The take-home message is that fun can work, but it’s not a panacea,” Tews says. “You really have to think about what outcome you are trying to achieve, and you also have to consider the characteristics of your workers.”
In the future, Tews and his colleagues plan to examine what makes a work activity fun and why some people enjoy participating in these fun work activities, while others don’t.
Researchers from Loyola University of Maryland and Ohio State University contributed to the study, which was supported by the Caesars Hospitality Research Center at the University of Nevada, Las Vegas.
Source: Penn State
When neuroscientists recorded single neurons firing in the brains of people with autism, they discovered that specific neurons in the amygdala region show reduced processing of the eye region of faces.
These same neurons responded more to mouths than did the neurons seen in the control-group individuals.
“We found that single brain cells in the amygdala of people with autism respond differently to faces in a way that explains many prior behavioral observations,” says Ralph Adolphs, a professor of psychology, neuroscience, and biology at the California Institute of Technology (Caltech).
“We believe this shows that abnormal functioning in the amygdala is a reason that people with autism process faces abnormally.”
Adolphs is co-author of the study published in the journal Neuron.
Epilepsy and autism
The amygdala has long been known to be important for the processing of emotional reactions. To make recordings from this part of the brain, Adolphs and colleagues recruited patients with epilepsy who had electrodes implanted in their medial temporal lobes—the area of the brain where the amygdala is located—to help identify the origin of their seizures.
Epileptic seizures are caused by a burst of abnormal electric activity in the brain, which the electrodes are designed to detect. It turns out that epilepsy and autism spectrum disorder (ASD) sometimes go together, and so the researchers were able to identify two of the epilepsy patients who also had a diagnosis of ASD.
By using the implanted electrodes to record the firings of individual neurons, the researchers were able to observe activity as participants looked at images of different facial regions, and then correlate the neuronal responses with the pictures.
Eyes vs. mouth
In the control group of epilepsy patients without autism, the neurons responded most strongly to the eye region of the face, whereas in the two ASD patients, the neurons responded most strongly to the mouth region. Moreover, the effect was present in only a specific subset of the neurons. In contrast, a different set of neurons showed the same response in both groups when whole faces were shown.
“It was surprising to find such clear abnormalities at the level of single cells,” explains lead author Ueli Rutishauser, assistant professor in the departments of neurosurgery and neurology at Cedars-Sinai Medical Center and visiting associate in biology at Caltech. “We, like many others, had thought that the neurological abnormalities that contribute to autism were spread throughout the brain, and that it would be difficult to find highly specific correlates.
“Not only did we find highly specific abnormalities in single-cell responses, but only a certain subset of cells responded that way, while another set showed typical responses to faces. This specificity of these cell populations was surprising and is, in a way, very good news, because it suggests the existence of specific mechanisms for autism that we can potentially trace back to their genetic and environmental causes, and that one could imagine manipulating for targeted treatment.”
“We can now ask how these cells change their responses with treatments, how they correspond to similar cell populations in animal models of autism, and what genes this particular population of cells expresses,” adds Adolphs.
To validate their results, the researchers hope to identify and test additional subjects, which is a challenge because it is very hard to find people with autism who also have epilepsy and who have been implanted with electrodes in the amygdala for single-cell recordings, says Adolphs.
“At the same time, we should think about how to change the responses of these neurons, and see if those modifications correlate with behavioral changes,” he says.
The Simons Foundation, the Gordon and Betty Moore Foundation, the Cedars-Sinai Medical Center, Autism Speaks, and the National Institute of Mental Health supported the work.
Source: California Institute of Technology
Engineers have demonstrated a thin, scalable invisibility cloak that can adapt to different types and sizes of objects.
The researchers designed and tested a new approach to cloaking—by surrounding an object with small antennas that collectively radiate an electromagnetic field.
The radiated field cancels out any waves scattering off the cloaked object. Their paper appears in Physical Review X.
“We’ve taken an electrical engineering approach, but that’s what we are excited about,” says Professor George Eleftheriades of the University of Toronto. “It’s very practical.”
How it works
Picture a mailbox sitting on the street. When light hits the mailbox and bounces back into your eyes, you see the mailbox. When radio waves hit the mailbox and bounce back to your radar detector, you detect the mailbox.
Eleftheriades and PhD student Michael Selvanayagam’s system wraps the mailbox in a layer of tiny antennas that radiate a field away from the box, cancelling out any waves that would bounce back. In this way, the mailbox becomes undetectable to radar.
“We’ve demonstrated a different way of doing it,” says Eleftheriades. “It’s very simple: instead of surrounding what you’re trying to cloak with a thick metamaterial shell, we surround it with one layer of tiny antennas, and this layer radiates back a field that cancels the reflections from the object.”
Their experimental demonstration effectively cloaked a metal cylinder from radio waves using one layer of loop antennas. The system can be scaled up to cloak larger objects using more loops, and Eleftheriades says the loops could become printed and flat, like a blanket or skin.
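The cancellation idea itself is easy to caricature numerically: if the antenna layer radiates the negative of the object’s scattered field, a distant observer sees only the incident wave. The sketch below is a toy with made-up amplitudes and phases, not the group’s actual electromagnetic field solution.

```python
import numpy as np

t = np.linspace(0, 10e-9, 1000)  # 10-nanosecond observation window
f = 1.5e9                        # illustrative 1.5 GHz radio frequency

incident = np.cos(2 * np.pi * f * t)               # wave hitting the object
scattered = 0.3 * np.cos(2 * np.pi * f * t + 0.8)  # echo off the object
antenna = -scattered                               # anti-phase radiation

# With the cloak off, the scattered echo reveals the object.
uncloaked = incident + scattered
# With the cloak on, the antenna field cancels the echo exactly.
cloaked = incident + scattered + antenna

print(np.max(np.abs(cloaked - incident)))  # 0.0: no detectable reflection
```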
Currently the antenna loops must be manually attuned to the electromagnetic frequency they need to cancel, but in future they could function both as sensors and active antennas, adjusting to different waves in real time, much like the technology behind noise-canceling headphones.
Better cloaking
Work on developing a functional invisibility cloak began around 2006, but early systems were necessarily large and clunky—if you wanted to cloak a car, for example, in practice you would have to completely envelop the vehicle in many layers of metamaterials in order to effectively “shield” it from electromagnetic radiation.
The sheer size and inflexibility of that approach make it impractical for real-world uses. Earlier attempts to make thin cloaks were not adaptive and active, and could work only for specific small objects.
Forging ‘signatures’
Beyond obvious applications, such as hiding military vehicles or conducting surveillance operations, this cloaking technology could eliminate obstacles—for example, structures interrupting signals from cellular base stations could be cloaked to allow signals to pass by freely.
The system can also alter the signature of a cloaked object, making it appear bigger, smaller, or even shifting it in space. And though their tests showed the cloaking system works with radio waves, the same principle could be re-tuned to work with terahertz radiation (T-rays) or light waves as the necessary antenna technology matures.
“There are more applications for radio than for light,” says Eleftheriades. “It’s just a matter of technology—you can use the same principle for light, and the corresponding antenna technology is a very hot area of research.”
Source: University of Toronto
Compared with slimmer counterparts, obese mice had fewer taste cells that responded to sweet and bitter stimuli, according to a new study. In addition, the cells that did respond to sweetness reacted relatively weakly.
“Studies have shown that obesity can lead to alterations in the brain, as well as the nerves that control the peripheral taste system, but no one had ever looked at the cells on the tongue that make contact with food,” says lead scientist Kathryn Medler, associate professor of biological sciences at the University at Buffalo.
“What we see is that even at this level—at the first step in the taste pathway—the taste receptor cells themselves are affected by obesity,” Medler says. “The obese mice have fewer taste cells that respond to sweet stimuli, and they don’t respond as well.”
How an inability to detect sweetness might encourage weight gain is unclear, but past research has shown that obese people yearn for sweet and savory foods though they may not taste these flavors as well as thinner people.
Medler says it’s possible that trouble detecting sweetness may lead obese mice to eat more than their leaner counterparts to get the same payoff.
Learning more about the connection between taste, appetite, and obesity is important, she says, because it could lead to new methods for encouraging healthy eating.
“If we understand how these taste cells are affected and how we can get these cells back to normal, it could lead to new treatments,” Medler says. “These cells are out on your tongue and are more accessible than cells in other parts of your body, like your brain.”
The study, published in PLOS ONE, compares 25 normal mice to 25 of their littermates who were fed a high-fat diet and became obese.
To measure the animals’ response to different tastes, the research team looked at a process called calcium signaling. When cells “recognize” a certain taste, there is a temporary increase in the calcium levels inside the cells, and the scientists measured this change.
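Quantifying such a response typically means comparing the peak of the signal to its pre-stimulus baseline. Below is a minimal sketch using a synthetic trace; the sampling rate, trace shape, and metric are illustrative assumptions, not the paper’s reported analysis.

```python
import numpy as np

fs = 10.0                       # assumed samples per second
t = np.arange(0, 30, 1.0 / fs)  # 30-second recording

# Synthetic calcium trace: flat baseline, then a transient after the
# taste stimulus is applied at t = 10 s.
baseline = 1.0
trace = baseline + 0.5 * np.exp(-((t - 12) ** 2) / 4) * (t > 10)

f0 = trace[t < 10].mean()                # pre-stimulus baseline level
peak_response = (trace.max() - f0) / f0  # fractional rise over baseline
print(f"peak response: {peak_response:.2f}")
```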
The results: Taste cells from the obese mice responded more weakly not only to sweetness but also to bitterness. Taste cells from both groups of animals reacted similarly to umami, a flavor associated with savory and meaty foods.
Source: University at Buffalo
Layered calcite crusts built by Arctic seafloor algae offer a look at almost 650 years of annual change in sea ice cover and may help improve modeling of future climate change.
“This is the first time coralline algae have been used to track changes in Arctic sea ice,” says Jochen Halfar, associate professor of chemical and physical sciences at the University of Toronto Mississauga. “We found the algal record shows a dramatic decrease in ice cover over the last 150 years.”
For a new study published in Proceedings of the National Academy of Sciences, Halfar and colleagues collected and analyzed samples of the alga Clathromorphum compactum.
This long-lived plant species forms thick rock-like calcite crusts on the seafloor in shallow waters 15 to 17 meters deep. It is widely distributed in the Arctic and sub-Arctic Oceans.
Divers retrieved the specimens from near-freezing seawater during several research cruises led by Walter Adey from the Smithsonian Institution.
The algae’s growth rates depend on the temperature of the water and the light they receive. As snow-covered sea ice accumulates on the water over the algae, it turns the sea floor dark and cold, stopping the plants’ growth. When the sea ice melts in the warm months, the algae resume growing their calcified crusts.
This continuous cycle of dormancy and growth results in visible layers that can be used to determine the length of time the algae were able to grow each year during the ice-free season.
Algae age
“It’s the same principle as using rings to determine a tree’s age and the levels of precipitation,” Halfar says. “In addition to ring counting, we used radiocarbon dating to confirm the age of the algal layers.”
After cutting and polishing the algae, Halfar used a specialized microscope to take thousands of images of each sample, which were then combined to give a complete overview of the fist-sized specimens.
Researchers corroborated the length of the algal growth periods through the magnesium levels preserved in each growth layer.
The amount of magnesium is dependent on both the light reaching the algae and the temperature of the sea water. Longer periods of open and warm water result in a higher amount of algal magnesium.
During the Little Ice Age, a period of global cooling that lasted from the mid-1500s to the mid-1800s, the algae’s annual growth increments were as narrow as 30 microns due to the extensive sea ice cover, Halfar says.
Unprecedented sea ice decline
However, since 1850, the thickness of the algae’s growth increments has more than doubled, bearing witness to an unprecedented decline in sea ice coverage that has accelerated in recent decades.
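As a numerical caricature of that before-and-after comparison, one can take a series of annual increment widths and compare the Little Ice Age mean with the post-1850 mean. The synthetic record below is invented; only the roughly 30-micron Little Ice Age layer width is taken from the article.

```python
import numpy as np

years = np.arange(1600, 2001)
rng = np.random.default_rng(0)

# Synthetic increment widths (microns): ~30 um during the Little Ice Age,
# roughly doubling after 1850, plus measurement noise. Invented numbers.
width_um = np.where(years < 1850, 30.0, 65.0) + rng.normal(0, 4, years.size)

lia_mean = width_um[years < 1850].mean()
modern_mean = width_um[years >= 1850].mean()
print(f"Little Ice Age mean: {lia_mean:.0f} um")
print(f"Post-1850 mean: {modern_mean:.0f} um")
print(f"Ratio: {modern_mean / lia_mean:.2f}")  # > 2 matches 'more than doubled'
```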
The coralline algae not only represent a new method for climate reconstruction; they are also vital to extending the climate record back in time, permitting more accurate modeling of future climate change.
Today, observational information about annual changes in the Earth’s temperature and climate goes back 150 years. Reliable information about sea-ice coverage comes from satellites and dates back only to the late 1970s.
“In the north, there is nothing in the shallow oceans that tells us about climate, water temperature, or sea ice coverage on an annual basis,” says Halfar. “These algae, which live over a thousand years, can now provide us with that information.”
The Natural Sciences and Engineering Research Council of Canada and Ecological Systems Technology supported the research.
Source: University of Toronto
One type of giant clam turns out to be two separate species, report researchers who discovered the new species on reefs in the Solomon Islands and at Ningaloo in Western Australia.
Jude Keyse, a postgraduate student at the University of Queensland School of Biological Sciences, says the find was surprising.
“DNA sequences strongly suggest that a distinct and unnamed species of giant clam has been hiding literally in plain sight, looking almost the same as the relatively common Tridacna maxima,” says Keyse.
“Giant clams can grow up to 230 kilograms (507 pounds) and are some of the most recognizable animals on coral reefs, coming in a spectrum of vibrant colors including blues, greens, browns, and yellow hues.”
Co-author Shane Penny, a postgraduate student at Charles Darwin University, says, “To correctly describe the new species now becomes critical as the effects of getting it wrong can be profound for fisheries, ecology, and conservation.”
Divers and snorkelers prize the giant clams, which are also a source of meat and shells.
Overconsumption by humans has depleted giant clam populations in many areas, and most giant clam species are on the International Union for Conservation of Nature (IUCN) Red List of Threatened Species.
Keyse says the discovery of a new species has implications for management of giant clams.
“What we thought was one breeding group has turned out to be two, making each species even less abundant than previously thought,” she says.
The findings appear in PLOS ONE.
Source: University of Queensland
Heavy drinking is bad for a marriage if one spouse drinks, but not both, according to a new long-term study.
Researchers followed 634 couples from the time of their weddings through the first nine years of marriage and found that couples where only one spouse was a heavy drinker had a much higher divorce rate than other couples.
But if both spouses were heavy drinkers, the divorce rate was the same as for couples where neither were heavy drinkers. “Heavy drinking” was defined as drinking six or more drinks at one time or drinking to intoxication.
“Our results indicate that it is the difference between the couple’s drinking habits, rather than the drinking itself, that leads to marital dissatisfaction, separation, and divorce,” says Kenneth Leonard, director of the University at Buffalo’s Research Institute on Addictions and lead author of the study.
Over the course of the nine-year study, which will appear in the December issue of Psychology of Addictive Behaviors, nearly 50 percent of couples where only one partner drank more heavily wound up divorcing, while the divorce rate for other couples was only 30 percent.
“This research provides solid evidence to bolster the commonplace notion that heavy drinking by one partner can lead to divorce,” Leonard says. “Although some people might think that’s a likely outcome, there was surprisingly little data to back up that claim until now.”
The surprising outcome was that the divorce rate for two heavy drinkers was no worse than for two non-heavy drinkers. “Heavy drinking spouses may be more tolerant of negative experiences related to alcohol due to their own drinking habits,” Leonard says. But he cautioned that this does not mean other aspects of family life are unimpaired. “While two heavy drinkers may not divorce, they may create a particularly bad climate for their children.”
The researchers also found a slightly higher divorce rate in cases when the heavy drinker was the wife, rather than the husband. Leonard cautions that this difference is based on only a few couples in which the wife was a heavy drinker, but the husband was not, and that the finding was not statistically significant. He suggests that if this difference is supported by further research, it might be because men view heavy drinking by their wives as going against proper gender roles for women, leading to more conflict.
The study controlled for factors such as marijuana and tobacco use, depression, and socioeconomic status, which can also be related to marital dissatisfaction, separation, and divorce.
“Ultimately, we hope our findings will be helpful to marriage therapists and mental health practitioners who can explore whether a difference in drinking habits is causing conflicts between couples seeking help,” Leonard says.
The National Institute on Alcohol Abuse and Alcoholism supported the study.
Source: University at Buffalo
Archaeologists have discovered a 3,700-year-old storeroom, once full of wine flavored with mint, honey, and dashes of psychotropic resins.
They unearthed what may be the oldest—and largest—ancient wine cellar in the Near East, containing forty jars, each of which would have held fifty liters.
They discovered the cellar in the ruined palace of a sprawling Canaanite city in northern Israel, called Tel Kabri. The site dates to about 1700 BCE and isn’t far from many of Israel’s modern-day wineries.
“This is a hugely significant discovery—it’s a wine cellar that, to our knowledge, is largely unmatched in age and size,” says Eric Cline of George Washington University, co-director of the excavation with Assaf Yasur-Landau of the University of Haifa.
‘This wasn’t moonshine’
Andrew Koh, an archaeological scientist at Brandeis University, analyzed the jar fragments using organic residue analysis.
He found molecular traces of tartaric and syringic acid, both key components in wine, as well as compounds suggesting ingredients popular in ancient wine-making, including honey, mint, cinnamon bark, juniper berries, and resins. The recipe is similar to medicinal wines used in ancient Egypt for two thousand years.
Koh also analyzed the proportions of each diagnostic compound and discovered remarkable consistency between jars.
“This wasn’t moonshine that someone was brewing in their basement, eyeballing the measurements,” notes Koh, assistant professor of classical studies. “This wine’s recipe was strictly followed in each and every jar.”
Wine cellar for banquets
Important guests drank this wine, notes Yasur-Landau. “The wine cellar was located near a hall where banquets took place, a place where the Kabri elite and possibly foreign guests consumed goat meat and wine,” he says.
At the end of the season, the team discovered two doors leading out of the wine cellar—one to the south, and one to the west. Both probably lead to additional storage rooms.
The team will present their findings this Friday in Baltimore, Maryland, at the annual meeting of the American Schools of Oriental Research.
Source: Brandeis University
Scientists gathering seismic data in West Antarctica recently discovered a simmering volcano about a kilometer under the ice. If it erupts, the volcano will melt a lot of ice.
“We weren’t expecting to find anything like this,” says Doug Wiens, professor of earth and planetary sciences at Washington University in St. Louis.
In January 2010, Wiens and colleagues set up two crossing lines of seismographs across Marie Byrd Land in West Antarctica. It was the first time they had deployed so many instruments in the interior of the continent that could operate year-round even in the coldest parts of Antarctica.
Like a giant CT machine, the seismograph array used disturbances created by distant earthquakes to make images of the ice and rock deep within West Antarctica.
The goal, says Wiens, was essentially to weigh the ice sheet to help reconstruct Antarctica’s climate history.Burden of ice
But to do this accurately, scientists had to know how the earth’s mantle would respond to an ice burden, and that depended on whether it was hot and fluid or cool and viscous. The seismic data would allow them to map the mantle’s properties.
In the meantime, automated-event-detection software was put to work to comb the data for anything unusual. When it found two bursts of seismic events between January 2010 and March 2011, Amanda Lough, a PhD student working with Wiens, looked more closely to see what was rattling the continent’s bones.
Was it rock grinding on rock, ice groaning over ice, or, perhaps, hot gases and liquid rock forcing their way through cracks in a volcanic complex? The researchers were uncertain at first, but the more they looked, the more convinced they became that a new volcano was forming a kilometer beneath the ice.
Their findings on the as-yet-unnamed volcano are published in an advance online issue of Nature Geoscience.
Not just a coincidence
The teams that install seismographs in Antarctica are given first crack at the data. Lough has traveled to East Antarctica three times to install or remove stations.
In 2010, many of the instruments were moved to West Antarctica, and Wiens asked Lough to look at the seismic data coming in, the first large-scale dataset from this part of the continent.
“I started seeing events that kept occurring at the same location, which was odd,” Lough says. “Then I realized they were close to some mountains, but not right on top of them. My first thought was, ‘OK, maybe it’s just coincidence.’
“But then I looked more closely and realized that the mountains were actually volcanoes and there was an age progression to the range. The volcanoes closest to the seismic events were the youngest ones.”
The events were weak and very low frequency, which strongly suggested they weren’t tectonic in origin. While low-magnitude seismic events of tectonic origin typically have frequencies of 10 to 20 cycles per second, this shaking was dominated by frequencies of 2 to 4 cycles per second.
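A first-pass screen along these lines is straightforward: estimate each event’s dominant frequency and compare it with the bands quoted above. The sketch below does this for a synthetic trace; the sampling rate, waveform, and band thresholds are illustrative assumptions, not the team’s actual detection pipeline.

```python
import numpy as np

def dominant_frequency(trace, fs):
    """Return the frequency (Hz) carrying the largest spectral amplitude."""
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic event: a decaying 2 Hz oscillation sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 60, 1.0 / fs)
trace = np.sin(2 * np.pi * 2.0 * t) * np.exp(-t / 20)

f0 = dominant_frequency(trace, fs)
if 2 <= f0 <= 4:
    label = "in the 2-4 Hz band of the observed events"
elif 10 <= f0 <= 20:
    label = "in the 10-20 Hz band typical of tectonic events"
else:
    label = "outside both bands"
print(f"{f0:.2f} Hz -> {label}")
```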
Way too deep
But glacial processes can generate low-frequency events. If the events weren’t tectonic, could they be glacial?
To probe further, Lough used a global computer model of seismic velocities to “relocate” the hypocenters of the events to account for the known seismic velocities along different paths through the Earth. This procedure collapsed the swarm clusters to a third their original size.
It also showed that almost all of the events had occurred at depths of 25 to 40 kilometers (15 to 25 miles) below the surface. This is extraordinarily deep—deep enough to be near the boundary between the earth’s crust and mantle, called the Moho, which more or less rules out a glacial origin—and also casts doubt on a tectonic one.
“A tectonic event might have a hypocenter 10 to 15 kilometers (6 to 9 miles) deep, but at 25 to 40 kilometers, these were way too deep,” Lough says.
A colleague suggested that the event waveforms looked like deep long-period earthquakes, or DLPs, which occur in volcanic areas, have the same frequency characteristics, and are as deep. “Everything matches up,” Lough says.
Ash under ice
The seismologists also talked to scientists Duncan Young and Don Blankenship, of the University of Texas at Austin, who fly airborne radar over Antarctica to produce topographic maps of the bedrock.
“In these maps, you can see that there’s elevation in the bed topography at the same location as the seismic events,” Lough says.
The radar images also showed a layer of ash buried under the ice. “They see this layer all around our group of earthquakes and only in this area,” Lough says.
“Their best guess is that it came from Mount Waesche, an existing volcano near Mount Sidley. But that is also interesting because scientists had no idea when Mount Waesche was last active, and the ash layer sets the age of the eruption at 8,000 years ago.”
Researchers say a case for a volcanic origin has been made, but they still aren’t sure what’s causing the seismic activity.
Hidden hot spot
“Most mountains in Antarctica are not volcanic,” Wiens says, “but most in this area are. Is it because East and West Antarctica are slowly rifting apart? We don’t know exactly. But we think there is probably a hot spot in the mantle here producing magma far beneath the surface.”
“People aren’t really sure what causes DLPs,” Lough says. “It seems to vary by volcanic complex, but most people think it’s the movement of magma and other fluids that leads to pressure-induced vibrations in cracks within volcanic and hydrothermal systems.”
The new volcano will definitely erupt, researchers say. “In fact, because the radar shows a mountain beneath the ice, I think it has erupted in the past, before the rumblings we recorded.”
The scientists calculated that an enormous eruption, one that released 1,000 times more energy than the typical eruption, would be necessary to breach the ice above the volcano. On the other hand, a subglacial eruption and the accompanying heat flow will melt a lot of ice.
“The volcano will create millions of gallons of water beneath the ice—many lakes full,” Wiens says. The water will rush beneath the ice toward the sea and feed into the hydrological catchment of the MacAyeal Ice Stream, one of several major ice streams draining ice from Marie Byrd Land into the Ross Ice Shelf.
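A back-of-envelope calculation shows why even a modest heat pulse yields water on that scale. Assuming, purely for illustration, that an eruption deposits $E = 10^{14}$ joules of heat into the ice (this figure is not from the study), with latent heat of fusion $L_f \approx 3.34 \times 10^{5}$ J/kg and ice density about 917 kg/m³:

```latex
% Illustrative only: E is an assumed heat input, not a figure from the study.
m = \frac{E}{L_f} = \frac{10^{14}\,\mathrm{J}}{3.34\times10^{5}\,\mathrm{J\,kg^{-1}}}
  \approx 3\times10^{8}\,\mathrm{kg},
\qquad
V = \frac{m}{\rho_{\mathrm{ice}}}
  \approx \frac{3\times10^{8}\,\mathrm{kg}}{917\,\mathrm{kg\,m^{-3}}}
  \approx 3.3\times10^{5}\,\mathrm{m^{3}} \approx 9\times10^{7}\ \mathrm{gallons}.
```

That is on the order of ninety million gallons from a single modest heat pulse, consistent with the “many lakes full” description.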
By lubricating the bedrock, it will speed the flow of the overlying ice, perhaps increasing the rate of ice-mass loss in West Antarctica, Wiens says.
The National Science Foundation’s Division of Polar Programs funded the work.
When the core of a massive star collapses, it can eject a jet of gas into space at nearly the speed of light. Collisions between the fast-moving gas and its surroundings, as well as within the jet itself, create gamma rays.
This past April, an incredibly bright flash of light burst from near the constellation Leo. It has now been confirmed as the brightest gamma ray burst ever observed.
Astronomers around the world were able to view the blast in unprecedented detail and observe several aspects of the event for the first time ever. The data could lead to a rewrite of standard theories of how gamma ray bursts work.
Named GRB 130427A, the blast was observed by several space- and ground-based telescopes, and the data was analyzed by dozens of astronomers around the world. The Fermi Gamma-ray Space Telescope was the first to detect the event, and it quickly began monitoring the flood of radiation using its Large Area Telescope (LAT), whose principal investigator is Peter Michelson, a physics professor at Stanford University and the SLAC National Accelerator Laboratory. Michelson leads the international collaboration that built and operates the LAT.
Fermi’s quick action, allowing the LAT to record nearly the entire event, yielded incredible data that revealed previously unknown aspects of the mechanisms involved in a gamma ray burst.
Several features of GRB 130427A combined to make it of particular interest to astronomers.
First, its light traveled 3.6 billion years before arriving at Earth, about one-third the travel time for light from typical bursts. The LAT observed its gamma rays for a record-setting 20 hours, longer than for any other GRB. And, in addition to being the brightest GRB ever witnessed, it was also one of the most energetic.
“When that happens, we start seeing features that we were not able to observe before,” says Nicola Omodei, a research associate at Stanford’s Hansen Experimental Physics Laboratory, who led LAT data analysis for one of the Science papers. “Especially because it was very bright, you can uncover features that were not predicted by the standard models.”
The leading theory explaining long gamma ray bursts such as GRB 130427A posits that they are created during the most energetic explosions in the cosmos, which occur when a very massive star collapses on itself.
These explosions eject a jet of elementary particles traveling at close to the speed of light. Within the jet, pressure, temperature, and density are not uniform, creating internal shock waves that move inward and outward as faster regions within the jet collide with slower ones.
As the jet travels outward, it collides with the interstellar medium to create additional shock waves, called “external shocks.”
Although the details are not well understood, particles are accelerated at the shock front and, at the same time, interact with the surrounding electromagnetic fields. This causes the particles to lose part of their energy by emitting photons, through a process known as synchrotron radiation.
The balance between the gain in energy from acceleration by the shock and the loss of energy due to synchrotron radiation dictates the maximum energy of the photons that can be emitted by such a system. The highest energy photons among these are classified as gamma rays and are detected by the LAT.
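The textbook version of that balance (sketched here from standard synchrotron theory, not reproduced from the Science papers) equates the time needed to accelerate an electron of Lorentz factor $\gamma$ in a magnetic field $B$ with its synchrotron cooling time:

```latex
t_{\mathrm{acc}} \sim \frac{\gamma m_e c}{eB},
\qquad
t_{\mathrm{syn}} = \frac{6\pi\, m_e c}{\sigma_T\, \gamma B^2}
\quad\Longrightarrow\quad
\gamma_{\max}^{2} = \frac{6\pi e}{\sigma_T B}.
% The characteristic synchrotron photon energy at \gamma_max is then
% independent of B:
\varepsilon_{\max} \sim \frac{\hbar e B}{m_e c}\,\gamma_{\max}^{2}
  = \frac{9}{4}\,\frac{m_e c^{2}}{\alpha} \approx 160\ \mathrm{MeV}.
```

In the observer frame this limit is boosted by the jet’s bulk motion, so photons well above it are hard to square with shock-accelerated synchrotron emission alone.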
The observations of GRB 130427A, however, didn’t quite match energy levels predicted by these models.

‘Uncover your toes’
For instance, the telescopes detected more photons, and more high-energy gamma rays, than theoretical models would predict for a burst of this magnitude. In particular, a few of these high-energy events are so energetic that they cannot be produced via existing models of synchrotron radiation from shock-accelerated particles.
Additionally, the prevailing thought was that the brightest flashes were driven by the explosion’s internal shock waves, but the evidence indicates that these photons were created externally.
“It’s like having a blanket that’s too short. You pull it up to your chin and uncover your toes,” Omodei says. “With our standard model, if you try to explain the pulse, you will fail to explain the energy.”
The new observations don’t rule out the existing model, but researchers will need to either amend portions of it or adopt a new theory altogether to account for these characteristics, says Giacomo Vianello, a postdoctoral scholar in Michelson’s group and a co-author who performed LAT data analysis and interpretation on three of the Science papers.
The microphysics of how particles are accelerated rests on a number of educated assumptions, and these assumptions get built into the theoretical models used to predict the behavior of cosmic events. The assumptions are necessary in part because these events cannot be recreated in laboratory settings, he says, which highlights the critical role that observations play in fine-tuning fundamental physics theories.
“The really cool thing about this GRB is that because the exploding matter was traveling at the speed of light, we were able to observe relativistic shocks,” Vianello explains. “We cannot make a relativistic shock in the lab, so we really don’t know what happens in it, and this is one of the main unknown assumptions in the model.
“These observations challenge the models and can lead us to a better understanding of physics.”
Source: Stanford University
By firing a beam of infrared light at a stack of graphene sheets, scientists can identify and describe the electronic properties of each individual sheet—even when the sheets are covering each other up.
After firing the beam, they measure how the direction in which the light wave oscillates changes as the light bounces off the layers within.
To explain further: When a magnetic field is applied and increased, different types of graphene alter the light wave’s direction of oscillation, or polarization, in different ways. A graphene layer stacked neatly on top of another will have a different effect on polarization than a graphene layer that is messily stacked.
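In optics terms, the measurement reads off how strongly the stack mixes the two polarization components on reflection. A minimal Jones-calculus sketch, with made-up reflection coefficients standing in for what the experiment actually measures:

```python
# Minimal Jones-calculus sketch: in a magnetic field, a graphene stack's
# reflection matrix acquires off-diagonal terms r_xy that rotate the
# polarization of reflected light. The coefficients below are invented
# placeholders; different stackings would give different r_xy.

def kerr_angles(r_xx, r_xy):
    """Small-angle Kerr rotation and ellipticity (radians)."""
    chi = r_xy / r_xx
    return chi.real, chi.imag

rot_a, ell_a = kerr_angles(0.20 + 0.00j, 0.0020 + 0.0010j)  # stacking A
rot_b, ell_b = kerr_angles(0.20 + 0.00j, 0.0005 + 0.0020j)  # stacking B
print(f"stacking A: rotation {rot_a:.4f} rad, ellipticity {ell_a:.4f} rad")
print(f"stacking B: rotation {rot_b:.4f} rad, ellipticity {ell_b:.4f} rad")
```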
“By measuring the polarization of reflected light from graphene in a magnetic field and using new analysis techniques, we have developed an ultrasensitive fingerprinting tool that is capable of identifying and characterizing different graphene multilayers,” says John Cerne, a University at Buffalo associate professor of physics, who led the project.

Each layer is different
Cerne’s new research looks at graphene’s electronic properties, which change as sheets of the material are stacked on top of one another. The findings appeared in Scientific Reports.
So, why don’t all graphene layers affect the polarization of light the same way?
Cerne says the answer lies in the fact that different layers absorb and emit light in different ways.
The study shows that absorption and emission patterns change when a magnetic field is applied, which means that scientists can turn the polarization of light on and off either by applying a magnetic field to graphene layers or, more quickly, by applying a voltage that sends electrons flowing through the graphene.
“Applying a voltage would allow for fast modulation, which opens up the possibility for new optical devices using graphene for communications, imaging, and signal processing,” says the study’s first author, Chase T. Ellis, a former graduate research assistant at Buffalo and current post-doctoral fellow at the US Naval Research Laboratory.
Source: University at Buffalo
A genetic defect that profoundly affects speech in humans also disrupts the ability of songbirds to sing effective courtship tunes.
This defect in a gene called FoxP2 renders the brain circuitry insensitive to feel-good chemicals that serve as a reward for speaking the correct syllable or hitting the right note, a recent study shows.
The research, which was conducted in adult zebra finches, gives insight into how this genetic mutation impairs a network of nerve cells to cause the stuttering and stammering typical of people with FoxP2 mutations. It appears today in an early online edition of the journal Neuron.
“Our results integrate a lot of different observations that have accrued on the FoxP2 mutation and cast a different perspective on what this mutation is doing,” says Richard Mooney, professor of neurobiology at Duke University School of Medicine and a member of the Duke Institute for Brain Sciences.
“FoxP2 mutations do not simply result in a cognitive or learning deficit, but also produce an ongoing motor deficit. Individuals with these mutations can still learn and can still improve; it is just harder for them to reliably hit the right mark.”

Language gene?
About 15 years ago, researchers discovered a British family with many members suffering from severe speech and language deficits. Geneticists eventually pinned down the culprit, a gene called forkhead box P2, or FoxP2, which encodes a transcription factor and was mutated in all the affected individuals.
The discovery led many to believe FoxP2 was a “language gene” that granted humans the ability to speak. But further studies showed that the gene wasn’t unique to humans, and in fact was conserved among all vertebrates, including songbirds.
Though the gene is present in every cell, it is “active,” or turned on, mostly in brain cells, particularly ones residing in a region deep within the brain known as the basal ganglia. This region is dysfunctional in Tourette syndrome, known for its vocal tics and outbursts, and is also shrunk in individuals with FoxP2 mutations.
To explore the complex circuitry involved in these deficits, Mooney and former graduate student Malavika Murugan decided to replicate the human mutation in this region of the brain in songbirds.

Voice lessons
Zebra finches start learning how to sing 30 days after they hatch by listening to a male tutor and then practicing thousands of times a day until, 60 days later, they are able to make a very good copy of the tutor’s song.
As good as that copy is at day 90, the male finch’s song gets even more precise when he “directs” it at a female as part of courtship.
To investigate the role of FoxP2 in the generation of this directed song, Murugan introduced specifically targeted sequences of RNA to suppress FoxP2 activity in the basal ganglia of male zebra finches. The birds were placed in a glass cage that revealed a female sitting on the other side. Murugan then recorded sonograms of their songs to capture subtle vocal variations, indistinguishable to the human ear, that occurred when the males directed their songs at the female.
Murugan found that though the genetically manipulated males had already learned how to sing, their ability to hit the right note repeatedly in the presence of a female—a behavior critical to attracting a mate—was subpar.
This indicates that in songbirds, FoxP2 has an ongoing role in vocal control separate from a role in learning, a distinction that has not been possible to make in humans with FOXP2 mutations.
Having deduced the behavior associated with this genetic mutation, the researchers then identified underlying neural deficits by recording brain activity in birds with normal and altered FoxP2 genes.

‘Insensitive’ to dopamine
In one set of experiments, Murugan sent an electrical signal into the input side of the basal ganglia pathway and then used an electrode on the output side to measure how quickly the signal traveled from one side to the other. Surprisingly, the signal moved more quickly through the basal ganglia of FoxP2 mutant songbirds than it did in songbirds with the functional gene.
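A simplified version of that timing measurement can be written in a few lines: stimulate at a known sample, then take the latency as the first point where the downstream recording deviates from baseline. The threshold rule and names below are assumptions, not the lab’s actual criterion.

```python
# Sketch: propagation latency from stimulus to first evoked deflection.
import numpy as np

def propagation_latency(trace, fs, stim_sample, n_sd=4.0):
    """Latency in ms; NaN if no deflection exceeds n_sd baseline SDs."""
    baseline = trace[:stim_sample]
    threshold = n_sd * baseline.std()
    deviation = np.abs(trace[stim_sample:] - baseline.mean())
    crossings = np.nonzero(deviation > threshold)[0]
    return 1000.0 * crossings[0] / fs if crossings.size else float("nan")
```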
Murugan also found that dopamine, an important brain chemical involved in brain signaling and the reinforcement of learned behaviors like singing or playing sports, could influence how fast basal ganglia signals propagated in birds with normal but not mutant forms of FoxP2.
“This switch between undirected and directed song is actually dependent on the influx of this neurotransmitter called dopamine,” says Murugan, first author of the study. “So what we think is happening is that knocking down FoxP2 makes the male incapable of reducing song variability in the presence of a female.
“An adult male sees the female, there is an influx of dopamine, but because the system is insensitive, the dopamine has no effect and the adult male continues to sing a variable tune.” In juveniles, the inability to constrain variability and to respond to dopamine could also account for poor learning.

Songbirds and humans
Though the researchers are cautious not to draw too many parallels between their findings in birds and the deficits in humans, they think their study does highlight the value of songbirds in studying human behaviors and disease.
“Birds are one of the few non-human animals that learn to vocalize,” says Mooney. “They produce songs for courtship that they culturally transmit from one generation to the next. Their brains might be a thousandth the size of ours, but in this one dimension, vocal learning, they are our equal.”
The National Institutes of Health supported the research.
Source: Duke University
Children with Asperger’s, a high-functioning form of autism, and those with a condition known as “nonverbal learning disability” may have similar symptoms, but the underlying causes are very different, according to brain scans.
The finding, published in Child Neuropsychology, could ultimately help educators and clinicians better distinguish between—and treat—children with a nonverbal learning disability, or NVLD.
“Children with nonverbal learning disabilities and Asperger’s can look very similar, but they can have very different reasons for why they behave the way they do,” says Jodene Fine, assistant professor of school psychology in Michigan State University’s College of Education.
Understanding the biological differences in children with learning and behavioral challenges could help lead to more appropriate intervention strategies.
Children with nonverbal learning disability tend to have normal language skills but below-average math skills and difficulty solving visual puzzles. Because many of these kids also show difficulty understanding social cues, some experts have argued that NVLD is related to high-functioning autism, a view this latest study calls into question.
Fine and Kayla Musielak, a doctoral student in school psychology, studied about 150 children ages 8 to 18. Using MRI scans of the participants’ brains, the researchers found that the children diagnosed with NVLD had smaller spleniums than children with other learning disorders such as Asperger’s and ADHD, and children who have no learning disorders.
The splenium is part of the corpus callosum, a thick band of fibers in the brain that connects the left and right hemispheres and facilitates communication between the two sides. Interestingly, this posterior part of the corpus callosum serves the areas of the brain related to visual and spatial functioning.
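Statistically, the comparison reported here has the shape of a one-way analysis of variance on splenium size across diagnostic groups. A sketch with invented numbers (not study data):

```python
# Sketch: one-way ANOVA on splenium size across four groups.
# All values are invented placeholders, not measurements from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nvld      = rng.normal(170, 20, 30)  # hypothetical sizes, arbitrary units
aspergers = rng.normal(195, 20, 30)
adhd      = rng.normal(190, 20, 30)
controls  = rng.normal(200, 20, 30)

f_stat, p_value = stats.f_oneway(nvld, aspergers, adhd, controls)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")  # post-hoc tests would follow
```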
In a second part of the study, the participants’ brain activity was analyzed after they were shown videos in an MRI scanner that portrayed both positive and negative examples of social interaction. (A typical example of a positive event was a child opening a desired birthday present with a friend; a negative event included a child being teased by other children.)
The researchers found that the brains of children with nonverbal learning disability responded differently to the social interactions than the brains of children with high-functioning autism, or HFA, suggesting the neural pathways that underlie those behaviors may be different.
“So what we have is evidence of a structural difference in the brains of children with NVLD and HFA, as well as evidence of a functional difference in the way their brains behave when they are presented with stimuli,” Fine says.
While more research is needed to better understand how nonverbal learning disability fits into the family of learning disorders, Fine says her findings present “an interesting piece of the puzzle.”
“I would say at this point we still don’t have enough evidence to say NVLD is a distinct diagnosis, but I do think our research supports the idea that it might be.”
Source: Michigan State University
Women receiving hormone replacement therapy (HRT) should not ingest pure apigenin—found in celery, parsley, and apples—because the compound may lead to a higher incidence of cancerous tumors.
The finding is a reversal from an earlier recommendation in which researchers found that apigenin could reduce the incidence of tumor growth in women receiving HRT.
Hormone replacement therapy is often prescribed to reduce the effects of hot flashes and other menopausal symptoms in women.
HRT research and clinical trials have indicated a higher incidence of tumors, especially breast cancer, in post-menopausal women who take synthetic hormones; therefore, doctors have become more reluctant to prescribe the treatment.
Now, a new study published in the journal Nutrition and Cancer shows that when apigenin is ingested in the diet at the same concentration that subjects received by IV injection in previous studies, the benefits are reversed, leading to a higher incidence of cancerous tumors in subjects receiving progestin.
“Typically, hormone replacement therapies improve the lives of menopausal women and achieve very good results,” says Salman Hyder, professor in tumor angiogenesis and of biomedical sciences at the University of Missouri.

Tumor trigger
“However, research has proven that in women receiving therapies that involve a combination of the natural component estrogen and the synthetic progestin, a higher incidence of breast cancer tumors can occur.”
Many women normally have benign lesions in breast tissue. These lesions don’t typically form tumors until they receive the “trigger” that attracts blood vessels to the cells, essentially feeding the lesions and causing them to grow and expand. In this case, progestin is the trigger.
Hyder’s previous research focused on identifying natural supplements containing compounds that lessen the likelihood of tumor development and growth.
During the new study, laboratory mice were divided into four groups. Two groups were placed on a controlled diet; the other two were given a diet supplemented with apigenin. The mice that ingested apigenin through their diets were found to have a higher incidence of tumor growth.
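For readers curious about the statistics, a four-group incidence design like this one is often summarized with a chi-squared test on tumor counts. The counts below are invented for illustration, not taken from the study:

```python
# Sketch: chi-squared test of tumor incidence across four diet groups.
# Counts are invented placeholders.
from scipy.stats import chi2_contingency

#                tumor  no tumor
table = [[ 4, 21],   # control diet
         [ 5, 20],   # control diet + progestin
         [ 6, 19],   # apigenin diet
         [12, 13]]   # apigenin diet + progestin
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
```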
Healthy diet still important

“We know that apigenin is effective when injected directly into the bloodstream, so intravenous supplements may still be a possibility,” Hyder says.
“However, the mice that ingested apigenin began metabolizing it, which seemed to aggravate the situation, causing very aggressive growth of mammary tumors.
“Women should continue consuming a healthy diet,” Hyder says. “Fruits and vegetables most likely contain other protective compounds, and there is no data to suggest that these items are harmful.
“However, we do not recommend that women who are on hormone replacement therapy with a progestin component ingest pure apigenin as a supplement until further research proves otherwise.
“Until we know how apigenin is metabolized and how it interacts with progestin, we cannot recommend that women supplement their hormone replacement therapy with this compound.”
Source: University of Missouri