Adolescents who go to bed late during the school year are more prone to academic and emotional difficulties in the long run, compared to teens who turn in early.
Researchers analyzed longitudinal data from a nationally representative cohort of 2,700 US adolescents, 30 percent of whom reported bedtimes later than 11:30 p.m. on school days and later than 1:30 a.m. in the summer during their middle and high school years.
By the time they graduated from high school, the school-year night owls had lower GPAs and were more vulnerable to emotional problems than teens with earlier bedtimes, according to the study published in the Journal of Adolescent Health.
The results present a compelling argument for later middle and high school start times in the face of intense academic, social, and technological pressures, researchers say.
“Academic pressures, busy after-school schedules, and the desire to finally have free time at the end of the day to connect with friends on the phone or online make this problem even more challenging,” says Lauren Asarnow, lead author of the study and a graduate student in the University of California, Berkeley’s Golden Bear Sleep and Mood Research Clinic.
On a positive note, she says the findings underscore how a healthy sleep cycle promotes the academic and emotional success of adolescents.
“The good news is that sleep behavior is highly modifiable with the right support,” says Asarnow, citing UC Berkeley’s Teen Sleep Study, a treatment program designed to reset the biological clocks of adolescents who have trouble going to sleep and waking up.
This latest study used data from the National Longitudinal Study of Adolescent Health, which has tracked the influences and behaviors of adolescents since 1994.
Focusing on three time periods—the onset of puberty, a year later and young adulthood—researchers compared how the sleep habits of 2,700 teenagers aged 13-18 impacted their academic, social and emotional development. They looked at participants’ school transcripts and other education and health data.
While going to bed late in the summer did not appear to impact their academic achievement, including grades, researchers did find a correlation between later summer bedtimes and emotional problems in young adulthood.
Sleep cycle shifts at puberty
Surveys show that many teenagers do not get the recommended nine hours of sleep a night, and report having trouble staying awake at school. The human circadian rhythm, which regulates physiological and metabolic functions, typically shifts to a later sleep cycle at the onset of puberty. Researchers theorize that an “evening circadian preference” in adolescence arises from a confluence of biological factors as well as parental monitoring, academic and social pressures, and the use of electronic devices.
For example, bright lights associated with laptops, smart phones and other electronic devices have been found to suppress melatonin, a hormone that helps regulate the sleep cycle. The earlier Teen Sleep Study uses dim lighting and limits technology before bedtime, among other interventions, to help reverse this night-owl tendency.
“This very important study adds to the already clear evidence that youth who are night owls are at greater risk for adverse outcomes,” says psychologist Allison Harvey, senior author of the paper. “Helping teens go to bed earlier may be an important pathway for reducing risk.”
Source: UC Berkeley
Experts are watching an enormous iceberg that is separating from the Antarctic continent.
Roughly the size of Manhattan, the iceberg could threaten shipping lanes.
Professor Grant Bigg, from the University of Sheffield’s Department of Geography, is heading the project to monitor the movement and melting of the iceberg, which recently broke off from the Pine Island Glacier. The team is working to predict its likely path and any environmental impact.
“Its current movement does not raise environmental issues; however, a previous giant iceberg from this location eventually entered the South Atlantic, and if this happens it could potentially pose a hazard to ships,” says Bigg.
“If the iceberg stays around the Antarctic coast, it will melt slowly and will eventually add a lot of freshwater that stays in the coastal current, altering the density and affecting the speed of the current.
“Similarly, if it moves north it will melt faster but could alter the overturning rates of the current as it may create a cap of freshwater above the denser seawater.”
Bigg says the iceberg isn’t large enough to have a big impact on its own, but it could still have an effect. “If these events become more common, there will be a build-up of freshwater which could have lasting effects,” he adds.
The six-month project, funded by the Natural Environment Research Council (NERC), is co-led by Robert Marsh of the University of Southampton.
Their work is expected not only to provide the shipping industry with a timely warning of any consequences of the iceberg’s release, but also to test a technique that ice hazard warning services could use in the future.
Source: University of Sheffield
Studying how dogs, rather than rodents, respond to experimental treatments for spinal injuries can offer a more realistic picture of how humans might respond to the same treatments.
Nick Jeffery, professor of neurology and neurosurgery in the Iowa State University College of Veterinary Medicine, says the tightly controlled laboratory conditions for rodents bear little resemblance to the clinical reality of human spinal injuries.
But pet dogs that spontaneously suffer spinal injuries can offer a much closer match.
“Lab conditions aren’t always useful for a good understanding of human clinical injuries,” he adds. “But some of what we see in dogs is a step closer to how humans may respond to new treatments.”
Improvement but not magic
Jeffery has looked closely at two experimental treatment methods. The first involved culturing cells that connect a dog’s brain to its nose and transplanting those cells into the spinal cord.
The second, which he is currently studying, uses chondroitinase, an enzyme that can break down scar tissue when injected into the spinal cord of a paralyzed dog.
While the results of the cell studies fall short of miraculous—paralyzed dogs don’t suddenly start running around the park again—the treatments often result in a smoother and wider range of motion for injured dogs, he notes.
And a few individual cases have shown dramatic improvements.
“It’s an improvement, but it’s not magic,” Jeffery cautions.
Finding the right dogs
Jeffery looks for specific characteristics when deciding which dogs can be accepted to receive the experimental treatment. For starters, candidates must fit a certain weight range and suffer from a severe spinal cord injury near the middle of the back.
Beyond that, Jeffery will only take candidates he’s certain won’t benefit from conventional treatments and therapy, a requirement that greatly reduces the pool of eligible subjects.
It’s common among dogs for the discs between vertebrae to degenerate and rupture, he says. But the vast majority of those cases improve with traditional treatment. It’s the dogs that don’t get better that Jeffery wants to study.
A research grant allows Jeffery to administer the treatments free of charge to dog owners whose animals are accepted.
“I’m a believer in the idea that veterinarians should be working in important areas of medical science for the benefit of both pets and people,” he says.
Source: Iowa State University
A new solvent can dissolve semiconductors safely and at room temperature.
Once dissolved, the semiconductor solution can be applied as a thin film to substrates like glass and silicon. When heated, the solvent evaporates, leaving behind only a high-quality film of crystalline semiconductor, perfect for use in electronics.
“It’s inexpensive and easily scalable,” says Richard Brutchey, a chemistry professor at the University of Southern California (USC). “Our chemical understanding of the solvent system and how it works should allow us to expand it to the dissolution of a wide range of materials.”
While the technology already exists to “print” electronics using semiconductor “inks” at room temperature, the problem, until now, was that the only substance that could effectively dissolve semiconductors to form these inks was hydrazine, a highly toxic, explosive liquid used in rocket fuel.
Brutchey and David Webber of USC mixed two compounds to create the new solvent that effectively dissolves a class of semiconductors known as chalcogenides.
“When the two compounds work together, they do something quite remarkable,” says Brutchey.
They call the solvent an “alkahest,” after a hypothetical universal solvent that alchemists attempted to create to dissolve any and all substances. They’ve patented their alkahest, and recently presented their findings in the Journal of the American Chemical Society.
In the paper, they show how a mixture of 1,2-ethanedithiol (a colorless liquid that smells like rotten cabbage) and 1,2-ethylenediamine (a colorless liquid that smells like ammonia) is able to effectively dissolve a series of nine semiconductors made from combinations of arsenic, antimony, bismuth, sulfur, selenium and tellurium. Such semiconductors are often used in lasers, optics, and infrared detectors.
The National Science Foundation and USC funded the work.
Even among those attending the top-performing high schools in California, nearly half of Latino students choose to attend community college after graduation, a new analysis shows.
The findings suggest that these young people are far more likely to attend community college than their peers from any other ethnic group.
Among graduates of public high schools that ranked in the top 10 percent statewide, 46 percent of Latinos enrolled at a community college, as compared to 27 percent of whites, 23 percent of African-Americans, and 19 percent of Asians.
“These findings display highly stratified patterns of college-going in California,” says lead author Lindsey Malcom-Piqueux, a senior fellow with the Center for Urban Education at the University of Southern California and an assistant professor at the George Washington University.
“They show that it’s not just preparation per se that’s driving students’ college decision making. There are a lot of other factors, from issues of cost and accessibility to state colleges limiting enrollment due to budget cuts.”
Not a special interest issue
The report is one of four released by the Center for Urban Education and the Tomás Rivera Policy Institute that examine how Latinos are faring in the state’s higher education system and within Hispanic-serving institutions that enroll student populations that are 25 percent or more Latino.
Statewide, Latinos represent nearly half of the state’s college-aged population, according to the US Census Bureau.
“This is not a ‘special interest’ issue; it has very real consequences and implications for the economy of the state and the country,” says Malcom-Piqueux. “We as Californians need to pay attention to this particular issue and understand that when we invest in education and college access to four-year institutions, it’s really an investment in the future of our state.”
Chief findings of the reports include:
- Latinos continue to experience inequities in transferring to four-year institutions. While the group represented more than 43 percent of the full-time enrollment at California’s Hispanic-serving community colleges, only 33 percent of students who transferred from these schools to the California State University system were Latino. Similarly, they represented just 21 percent of students who transferred from these community colleges to the UC system.
- While Latinos represent 45 percent of California’s college-aged population, they earned just 31 percent of STEM (science, technology, engineering and math) bachelor’s degrees.
- In California’s Hispanic-serving community colleges, Latino and white students were found to earn an associate degree or certificate, transfer to a four-year institution or achieve transfer-prepared status at roughly the same rates. Sixty-five percent of first-time Latino students and 69 percent of white students successfully completed one of these milestones.
“It is in the best interest of all Californians that more Latinos earn a bachelor’s degree, that more of those who meet the admissions requirements for the University of California actually enroll, and that a larger share of the thousands of Latinos in community colleges transfer to four-year colleges,” says Estela Mara Bensimon, co-director of the Center for Urban Education.
“California’s system of higher education, especially Hispanic-Serving Institutions, will greatly influence whether California becomes a divided state with a separate and unequal Latino majority or the 21st-century model for Latino inclusiveness,” she says.
“The persistence of inequity in higher education participation and attainment will reduce the proportion of college-educated adults, which in turn will have detrimental effects on the state’s economy, workforce preparation, and the quality of life of aging baby boomers, as well as on aspirations to be a society that provides equal opportunities regardless of race or socioeconomic status.”
The entire process that occurs when your brain makes quick decisions based on color or motion may take place in an area just behind your forehead.
In this brain region, known as the prefrontal cortex, researchers have found that color and motion signals converge in a specific circuit of neurons.
In a study published in the journal Nature, they hypothesize that these neurons act together to make two snap judgments: whether color or motion is the most relevant sensory input in the current context, and what action to take as a result.
Surprising discovery
Until now, neuroscientists have believed that decisions of this sort involved two steps: One group of neurons performed a gating function to ascertain whether motion or color was most relevant to the situation, and a second group of neurons considered only the sensory input relevant to making a decision under the circumstances.
But in a study that combined brain recordings from trained monkeys with a sophisticated computer model based on that biological data, Stanford University neuroscientist William Newsome and three co-authors discovered that the entire decision-making process may occur in a localized region of the prefrontal cortex.
“We were quite surprised,” says Newsome, a professor of neurobiology at Stanford School of Medicine.
He and Valerio Mante, a former Stanford neurobiologist now at the University of Zurich and the Swiss Federal Institute of Technology, began the experiment expecting to find that the irrelevant signal, whether color or motion, would be gated out of the circuit long before the decision-making neurons went into action.
“What we saw instead was this complicated mix of signals that we could measure, but whose meaning and underlying mechanism we couldn’t understand,” Newsome says. “These signals held information about the color and motion of the stimulus, which stimulus dimension was most relevant, and the decision that the monkeys made.
“But the signals were profoundly mixed up at the single neuron level. We decided there was a lot more we needed to learn about these neurons, and that the key to unlocking the secret might lie in a population-level analysis of the circuit activity.”
Software model of neurons
To solve this brain puzzle, the neurobiologists began a cross-disciplinary collaboration with Krishna Shenoy, professor of electrical engineering at Stanford, and David Sussillo, a postdoctoral scholar in Shenoy’s lab.
Sussillo created a software model to simulate how these neurons worked. The idea was to build a model sophisticated enough to mimic the decision-making process, but easier to study than taking repeated electrical readings from a brain.
The general model architecture they used is called a recurrent neural network: a set of software modules designed to accept inputs and perform tasks similar to how biological neurons operate. The scientists designed this artificial neural network using computational techniques that enabled the software model to make itself more proficient at decision-making over time.
“We challenged the artificial system to solve a problem analogous to the one given to the monkeys,” Sussillo explains. “But we didn’t tell the neural network how to solve the problem.”
As a result, once the artificial network learned to solve the task, the scientists could study the model to develop inferences about how the biological neurons might be working.
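The logic of the task and the context-gated circuit can be sketched in a few lines of Python. This is a hand-wired toy of our own, not the authors' trained model: one circuit receives both noisy sensory streams on every trial, and the context cue alone determines which stream its internal state integrates.

```python
import numpy as np

# Hand-wired sketch (an illustration, not the study's trained network) of a
# single recurrent circuit doing context-dependent evidence integration.
# Both sensory streams always reach the same population of units; the
# context cue sets gating weights that decide which stream is integrated.

rng = np.random.default_rng(0)

def run_trial(motion_coh, color_coh, context, steps=100, noise=0.1):
    """context is 'motion' or 'color'; returns +1 (right/red) or -1 (left/green)."""
    h = 0.0  # accumulated evidence: position along a one-dimensional state
    # context-dependent selection weights: gate one stream in, the other out
    w_motion, w_color = (1.0, 0.0) if context == "motion" else (0.0, 1.0)
    for _ in range(steps):
        motion_in = motion_coh + noise * rng.standard_normal()
        color_in = color_coh + noise * rng.standard_normal()
        h += w_motion * motion_in + w_color * color_in  # integrate relevant stream
    return 1 if h > 0 else -1

# The same ambiguous stimulus yields opposite answers under the two contexts:
stim = dict(motion_coh=+0.05, color_coh=-0.05)  # weak rightward motion, weakly green
print(run_trial(**stim, context="motion"))  # tends toward +1: reports motion
print(run_trial(**stim, context="color"))   # tends toward -1: reports color
```

In the real study the gating was not wired in by hand; the network discovered an equivalent mechanism on its own during training, which is what made the model worth dissecting.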
The entire process was grounded in the biological experiments.
The monkeys decide
The neuroscientists trained two macaque monkeys to view a random-dot visual display that had two different features—motion and color. For any given presentation, the dots could move to the right or left, and the color could be red or green.
The monkeys were taught to use sideways glances to answer two different questions depending on the currently instructed “rule” or context. Were there more red or green dots (ignore the motion)? Or, were the dots moving to the left or right (ignore the color)?
Eye-tracking instruments recorded the glances, or saccades, that the monkeys used to register their responses. Their answers were correlated with recordings of neuronal activity taken directly from an area in the prefrontal cortex known to control saccadic eye movements.
The neuroscientists collected 1,402 such experimental measurements, each taken as the monkeys answered one question or the other.
The idea was to obtain brain recordings from the moment the monkeys saw the visual cue that established the context (either the red/green or the left/right question) through the decision the animal made about color or direction of motion.
It was the puzzling mish-mash of signals in the brain recordings from these experiments that prompted the scientists to build the recurrent neural network as a way to rerun the experiment, in a simulated way, time and time again.
‘Multitasking like crazy’
As the four researchers became confident that their software simulations accurately mirrored the actual biological behavior, they studied the model to learn exactly how it solved the task. This allowed them to form a hypothesis about what was occurring in that patch of neurons in the prefrontal cortex where perception and decision occurred.
“The idea is really very simple,” Sussillo explains. Their hypothesis revolves around two mathematical concepts: a line attractor and a selection vector.
The entire group of neurons being studied received sensory data about both the color and the motion of the dots.
The line attractor is a mathematical representation for the amount of information that this group of neurons was getting about either of the relevant inputs, color or motion.
The selection vector represented how the model responded when the experimenters flashed one of the two questions: Red or green, left or right?
What the model showed was that when the question pertained to color, the selection vector directed the artificial neurons to accept color information while ignoring the irrelevant motion information. The color evidence then accumulated along the line attractor. After a split second, these neurons registered a decision, choosing the red or green answer based on the data they were supplied.
If the question was about motion, the selection vector directed motion information to the line attractor and the artificial neurons chose left or right.
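The two concepts reduce to a little linear algebra. In the toy construction below (our own, not the paper's fitted model), population activity is a vector, evidence piles up along a fixed line-attractor direction, and the context picks a selection vector that projects away the irrelevant input entirely:

```python
import numpy as np

# Toy numerical illustration of a line attractor plus a selection vector.
# The axes and numbers here are invented for clarity, not taken from the study.

N = 4
attractor = np.array([1.0, 1.0, -1.0, -1.0]) / 2.0  # direction along which evidence accumulates
motion_axis = np.array([1.0, -1.0, 0.0, 0.0])       # how motion input hits the population
color_axis = np.array([0.0, 0.0, 1.0, -1.0])        # how color input hits the population

def accumulate(inputs, selection):
    """Project each input onto the selection vector; feed the result onto the line."""
    h = np.zeros(N)
    for x in inputs:
        h += (selection @ x) * attractor
    return attractor @ h  # position along the line = the decision variable

# Context 'color': choose a selection vector orthogonal to the motion axis,
# so motion input still reaches the circuit but contributes nothing.
select_color = color_axis.copy()
inputs = [0.3 * motion_axis + 0.1 * color_axis for _ in range(10)]  # strong motion, weak color
dv = accumulate(inputs, select_color)
print(dv)  # driven only by the weak color evidence; the strong motion is projected out
```

The key property, which the trained network exhibited, is that gating happens through geometry: nothing filters the motion signal at the input, yet it has zero effect on the decision variable because the selection vector is orthogonal to it.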
“The amazing part is that a single neuronal circuit is doing all of this,” Sussillo says. “If our model is correct, then almost all neurons in this biological circuit appear to be contributing to almost all parts of the information selection and decision-making mechanism.”
Newsome put it like this: “We think that all of these neurons are interested in everything that’s going on, but they’re interested to different degrees. They’re multitasking like crazy.”
The Howard Hughes Medical Institute, the Air Force Research Laboratory, a Pioneer Award from the National Institutes of Health, and the Defense Advanced Research Projects Agency supported the work.
Source: Stanford University
A new model for solar cell construction may ultimately make them less expensive, easier to manufacture, and more efficient at harvesting energy from the sun.
For solar panels, wringing every drop of energy from as many photons as possible is imperative. This goal has sent researchers on a quest to boost the energy-absorption efficiency of photovoltaic devices, but existing techniques are running up against limits set by the laws of physics.
As reported in the journal Nature, existing solar cells all work in the same fundamental way: they absorb light, which excites electrons and causes them to flow in a certain direction. This flow of electrons is electric current.
But to establish a consistent direction of their movement, or polarity, solar cells need to be made of two materials. Once an excited electron crosses over the interface from the material that absorbs the light to the material that will conduct the current, it can’t cross back, giving it a direction.
“There’s a small category of materials, however, that when you shine light on them, the electron takes off in one particular direction without having to cross from one material to another,” says Andrew M. Rappe, professor of chemistry and of materials science and engineering at the University of Pennsylvania.
“We call this the ‘bulk’ photovoltaic effect, rather than the ‘interface’ effect that happens in existing solar cells. This phenomenon has been known since the 1970s, but we don’t make solar cells this way because they have only been demonstrated with ultraviolet light, and most of the energy from the sun is in the visible and infrared spectrum.”
Photon ‘coins’
Finding a material that exhibits the bulk photovoltaic effect for visible light would greatly simplify solar cell construction. Moreover, it would be a way around an inefficiency intrinsic to interfacial solar cells, known as the Shockley-Queisser limit, where some of the energy from photons is lost as electrons wait to make the jump from one material to the other.
“Think of photons coming from the sun as coins raining down on you, with the different frequencies of light being like pennies, nickels, dimes, and so on. A quality of your light-absorbing material called its ‘bandgap’ determines the denominations you can catch,” Rappe says.
“The Shockley-Queisser limit says that whatever you catch is only as valuable as the lowest denomination your bandgap allows. If you pick a material with a bandgap that can catch dimes, you can catch dimes, quarters and silver dollars, but they’ll all only be worth the energy equivalent of 10 cents when you catch them.
“If you set your limit too high, you might get more value per photon but catch fewer photons overall and come out worse than if you picked a lower denomination,” he says.
“Setting your bandgap to catch only silver dollars is like only being able to catch UV light. Setting it to catch quarters is like moving down into the visible spectrum. Your yield is better even though you’re losing most of the energy from the UV you do get.”
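Rappe's coin analogy reduces to simple arithmetic. In this back-of-envelope sketch (the photon counts are made up for illustration, not real solar-spectrum data), each captured photon is worth exactly the bandgap energy, and only photons at or above the bandgap are captured:

```python
# Illustrative photon "coins": energy in eV mapped to a hypothetical count.
# These numbers are invented to mimic the shape of sunlight (many low-energy
# photons, few high-energy ones); they are not measured solar data.
photons = {0.5: 400, 1.1: 300, 1.8: 200, 3.1: 100}

def harvested(bandgap, photons):
    """Total energy collected: bandgap per photon, counting only photons above it."""
    return bandgap * sum(n for e, n in photons.items() if e >= bandgap)

for gap in sorted(photons):
    print(gap, harvested(gap, photons))
```

With these illustrative numbers, the mid-range bandgap collects the most total energy, mirroring the argument that catching "quarters" beats holding out for "silver dollars": a high gap wastes most photons, while a low gap devalues every photon it catches.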
As no known material exhibited the bulk photovoltaic effect for visible light, the research team set out to design one and to predict and measure its properties.
New family of materials
Starting more than five years ago, the team began theoretical work, plotting the properties of hypothetical new compounds that would have a mix of these traits. Each compound began with a “parent” material that would impart the final material with the polar aspect of the bulk photovoltaic effect.
To the parent, a material that would lower the compound’s bandgap would be added in different percentages. These two materials would be ground into fine powders, mixed together and then heated in an oven until they reacted together. The resulting crystal would ideally have the structure of the parent but with elements from the second material in key locations, enabling it to absorb visible light.
“The design challenge,” says Peter K. Davies, chair of the department of materials science and engineering, “was to identify materials that could retain their polar properties while simultaneously absorbing visible light. The theoretical calculations pointed to new families of materials where this often mutually exclusive combination of properties could in fact be stabilized.”
The structure they targeted is known as a perovskite crystal. Most light-absorbing materials have a symmetrical crystal structure, meaning their atoms are arranged in repeating patterns up, down, left, right, front, and back. This quality makes those materials non-polar; all directions “look” the same from the perspective of an electron, so there is no overall direction for them to flow.
A perovskite crystal has the same cubic lattice of metal atoms, but inside of each cube is an octahedron of oxygen atoms, and inside each octahedron is another kind of metal atom. The relationship between these two metallic elements can make them move off center, giving directionality to the structure and making it polar.
The ‘good’ crystal structure
“All of the good polar, or ferroelectric, materials have this crystal structure,” Rappe says. “It seems very complicated, but it happens all of the time in nature when you have a material with two metals and oxygen. It’s not something we had to architect ourselves.”
After several failed attempts to physically produce the specific perovskite crystals they had theorized, the researchers succeeded with a combination of potassium niobate, the parent, polar material, and barium nickel niobate, which contributes to the final product’s bandgap.
The researchers used X-ray crystallography and Raman scattering spectroscopy to ensure they had produced the crystal structure and symmetry they intended. They also investigated its switchable polarity and bandgap, showing that they could indeed produce a bulk photovoltaic effect with visible light, opening the possibility of breaking the Shockley-Queisser limit.
Moreover, the ability to tune the final product’s bandgap via the percentage of barium nickel niobate adds another potential advantage over interfacial solar cells.
“The parent’s bandgap is in the UV range,” says Jonathan E. Spanier, professor of materials science and engineering at Drexel University. “But adding just 10 percent of the barium nickel niobate moves the bandgap into the visible range and close to the desired value for efficient solar energy conversion.
“So that’s a viable material to begin with, and the bandgap also proceeds to vary through the visible range as we add more, which is another very useful trait.”
Stacked solar cells
Another way to get around the inefficiency imposed by the Shockley-Queisser limit in interfacial solar cells is to effectively stack several solar cells with different bandgaps on top of one another.
These multi-junction solar cells have a top layer with a high bandgap, which catches the most valuable photons and lets the less valuable ones pass through. Successive layers have lower and lower bandgaps, getting the most energy out of each photon, but adding to the overall complexity and cost of the solar cell.
“The family of materials we’ve made with the bulk photovoltaic effect goes through the entire solar spectrum,” Rappe says. “So we could grow one material but gently change the composition as we’re growing, resulting in a single material that performs like a multi-junction solar cell.”
“This family of materials,” Spanier says, “is all the more remarkable because it is composed of inexpensive, non-toxic, and earth-abundant elements, unlike the compound semiconductor materials currently used in efficient thin-film solar cell technology.”
The research was supported by the Energy Commercialization Institute of Ben Franklin Technology Partners, the Department of Energy’s Office of Basic Sciences, the Army Research Office, the American Society for Engineering Education, the Office of Naval Research and the National Science Foundation.
Source: University of Pennsylvania
Choosing the right healthcare policy—a daunting task for most people—can be even more difficult for those unfamiliar with insurance terminology, researchers say.
“Selecting the best health-insurance option can be confusing, even for people who have gone through the process for many years,” says Mary Politi, an assistant professor of surgery at Washington University School of Medicine in St. Louis and the study’s lead author. “We need to do a better job communicating information about health insurance to help people make the choices that work best for them.”
The study, one of the first to examine how well people who never have had health insurance understand key insurance terms and details, appears online in Medical Care Research and Review.
In October, US citizens began enrolling for healthcare coverage expanded under the Affordable Care Act. The plans take effect as early as Jan. 1; open enrollment continues until March 31.
Findings from the study suggest that healthcare navigators, workers hired under the federal law to help people sign up for health insurance, will play an important role. The navigators could simplify details, use visuals, and provide context for unfamiliar terms to help people better understand their health insurance choices, the study’s authors say.
Researchers examined how well people who have been without health insurance understand such key terms as coinsurance, deductible, out-of-pocket maximum, prior authorization, and formulary. (The latter is a list of medications that are approved under a health insurance policy.) Those terms were among the most difficult for study participants, 51 uninsured Missourians from rural, urban, and suburban parts of the state.
The study also found that:
- People who have been without health insurance but have had experience with auto insurance were more familiar with deductibles.
- Those who have had health insurance understood more terms than those who have never had it.
- Even individuals who have had previous experience with health insurance confused the meaning of similar terms, such as urgent care and emergency care or co-insurance and co-payment.
Based on their findings, the researchers are testing ways to improve communication about health insurance and the newly created state and federal health insurance exchanges. This effort is especially important for individuals with limited health literacy and math skills, given the complex information required to understand plan differences, Politi says.
Carbon dioxide isn’t just a metabolic waste product—it’s a biological signaling molecule, too, according to new research.
Researchers have shown that the body senses carbon dioxide directly through the protein Connexin 26, which acts as a receptor for the gas. Connexin 26 is better known for forming a direct channel of communication between cells; acting as a receptor for carbon dioxide is an unexpected second function.
This finding adds carbon dioxide to the list of gaseous signaling molecules—such as nitric oxide, carbon monoxide, and hydrogen sulphide—already known to be active in mammals.
“As Connexin 26 is present in many tissues and organs, for example the brain, skin, inner ear, liver, and the uterus during pregnancy, this discovery should herald a re-evaluation of the potential for carbon dioxide signaling in many different processes such as the control of blood flow, breathing, hearing, reproduction, and birth,” says Professor Nick Dale of the University of Warwick.
Carbon dioxide is the by-product of metabolism in all cells. Dissolved carbon dioxide can combine with water to increase acidity in the blood. As mammals produce carbon dioxide at a fast rate, it is vital that the body measures its levels so that breathing rates can be adjusted to exhale excess carbon dioxide and thus regulate blood pH within the narrow limits compatible with life.
Until now the body’s regulation of blood acid levels was thought to be triggered by measuring pH levels of the blood. However the new findings indicate that the body can sense carbon dioxide levels directly through Connexin 26.
“Carbon dioxide is the unavoidable by-product of our metabolic system—human beings and other mammals produce huge amounts of it every day,” says Dale.
“The exciting implication of our study is that carbon dioxide is much more than just a waste product: it can directly signal physiological information, and our work shows the mechanism by which this happens via Connexin 26.”
Connexin 26 comprises six identical subunits. Carbon dioxide forms a chemical bond to the side chain of a particular amino acid. This modified side chain can then form a bridge to a closely oriented amino acid in the adjacent subunit. A total of six carbon dioxide molecules can bind to make six bridges between subunits. These bridges force the Connexin 26 protein to alter its conformation, thereby signaling the presence and concentration of carbon dioxide.
The study appears in the journal eLife. Co-authors are Louise Meigh, Sophie Greenhalgh, and David Roper of the University of Warwick and Thomas Rodgers and Martin Cann of the University of Durham.
Source: University of Warwick
The pattern of scores on cognitive tests may help doctors determine if an older patient’s minor memory loss is benign or a stop on the road to Alzheimer’s dementia.
“If we are going to have any hope of helping patients with Alzheimer’s disease, we need to do it as early as possible,” says David J. Schretlen, professor of psychiatry and behavioral sciences at the Johns Hopkins University School of Medicine. “Once the brain deteriorates, there’s no coming back.”
A diagnosis of mild cognitive impairment markedly increases the risk that a patient will develop dementia eventually, even though that relatively minor initial decline does not seriously interfere with daily life, Schretlen says.
But physicians now have no reliable way to predict which people with mild cognitive impairment are likely to be in the 5 to 10 percent a year who progress to dementia.
Schretlen’s team analyzed records of 528 people 60 and older referred for cognitive testing as part of a dementia work-up. The results were compared with those of 135 healthy older adults who participated in a study of normal aging. Both groups had completed tests of memory, language, attention, processing speed, and drawing abilities, from which 13 scores were recorded.
Varying levels of dementia
Since anyone is naturally more skilled in some areas than others, the scores of healthy adults can be represented on a graph showing a symmetrical, bell-shaped range: Most of their scores were high, a few were a bit lower, and a few were even lower.
People with such symmetrical, evenly distributed scores were not likely to develop dementia, even if those scores were relatively low. But those with clearly lopsided test score distributions on the 13 tests were already experiencing varying levels of dementia, the researchers found.
“Departures from the normal bell-shaped pattern of variability on cognitive tests might determine which people with low scores develop dementia,” says Schretlen, leader of a study published in the journal Neuropsychology.
Asymmetrical bell curve
At the outset, Alzheimer’s disease subtly disrupts some mental abilities, leaving others intact. So, well before a person develops clear cognitive impairment, his or her performance declines slightly on a few measures. When shown on a graph, these changes cause a healthy symmetric, bell-shaped curve to shift and become asymmetrical.
Since these declines can be subtle, the researchers also increased the precision of cognitive testing by accounting for the effects of age, sex, race, and education on test performance.
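The asymmetry idea can be sketched numerically. The scores below are invented for illustration, not taken from the study; the point is only that a profile with a few sharp deficits produces a strongly negative skew, while an evenly scattered profile does not.

```python
# Hypothetical illustration of the 'lopsided scores' idea: a patient's 13
# cognitive test scores (expressed as z-scores) can be checked for asymmetry.
# All numbers here are invented for demonstration; they are not study data.

def skewness(scores):
    """Population skewness: 0 for a symmetric profile, strongly negative
    when a few scores trail far below the rest (a 'lopsided' distribution)."""
    n = len(scores)
    mean = sum(scores) / n
    sd = (sum((s - mean) ** 2 for s in scores) / n) ** 0.5
    return sum(((s - mean) / sd) ** 3 for s in scores) / n

# A healthy profile: scores scatter evenly around the person's own mean.
symmetric = [0.4, -0.3, 0.1, 0.6, -0.5, 0.2, -0.1,
             0.3, -0.4, 0.0, 0.5, -0.2, -0.6]

# A 'lopsided' profile: mostly normal scores plus a few sharp deficits,
# the pattern the researchers associate with early dementia.
lopsided = [0.4, 0.3, 0.1, 0.6, 0.2, 0.2, 0.1,
            0.3, 0.0, 0.5, -1.8, -2.2, -2.5]

print(round(skewness(symmetric), 2))  # near 0: evenly distributed
print(round(skewness(lopsided), 2))   # strongly negative: asymmetric
```

The actual model also norms each score for age, sex, race, and education before judging asymmetry; this sketch skips that adjustment for brevity.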
The challenge for doctors is that most normal, healthy people will produce a few low scores on cognitive testing. That makes it nearly impossible to know at the outset whether a patient who reports forgetfulness and produces one or two low scores has a benign form of mild cognitive impairment or is in the earliest stage of dementia. As a result, doctors often tell such patients to return for follow-up testing in a year or two.
But if future research confirms the approach, this new statistical model could help doctors get the prognosis right earlier in the disease, at the first visit, and start treating patients accordingly. Doctors could use the new model to reassure patients who are not at risk of dementia, while starting interventions for those who are, Schretlen says.
Time for Alzheimer’s counseling
Because there currently are no effective medical treatments for Alzheimer’s disease, those likely headed that way could be counseled to take the good time they have to organize their affairs and do things they have always wanted to do. They also could be fast-tracked into any clinical trials of medications designed to slow the progression of dementia.
The main reason it is difficult now to tell whether older people have benign mild cognitive impairment or early stage dementia is that they have not been routinely screened for cognitive impairment, Schretlen says.
A visit to a specialist comes only after someone has noticed symptoms, and then cognitive testing is interpreted without the benefit of a baseline assessment. What would solve this problem, he says, would be for everyone over the age of 55 to get routine neurocognitive testing every five years.
The Therapeutic Cognitive Neuroscience Fund; the Benjamin and Adith Miller Family Endowment on Aging, Alzheimer’s and Autism; the William and Mary Ann Wockenfuss Research Fund Endowment; and the National Institutes of Health supported the research.
Under an agreement with Psychological Assessment Resources Inc., Schretlen is entitled to a share of royalties on sales of a test and software used in the study. The terms of this arrangement are being managed by Johns Hopkins University in accordance with its conflict-of-interest policies.
Source: Johns Hopkins University
The post ‘Lopsided’ test scores may predict Alzheimer’s sooner appeared first on Futurity.
Zinc can “starve” Streptococcus pneumoniae microbes by preventing their uptake of an essential metal.
The finding opens the way for further work to design antibacterial agents in the fight against the deadly microbes, which are responsible for more than a million deaths a year. The bacteria can kill children, the elderly, and other vulnerable people by causing pneumonia, meningitis, and other serious infectious diseases.
Project leader Christopher McDevitt, from the University of Adelaide’s Research Centre for Infectious Diseases, says the study finds that zinc “jammed shut” a protein transporter in the bacteria so it could not take up manganese.
Manganese is an essential metal that Streptococcus pneumoniae needs to invade humans.
“It’s long been known that zinc plays an important role in the body’s ability to protect against bacterial infection, but this is the first time anyone has been able to show how zinc actually blocks an essential pathway, causing the bacteria to starve,” McDevitt says.
Professor Bostjan Kobe of the School of Chemistry and Molecular Biosciences at the University of Queensland says, “We can now see, at an atomic level of detail, how this transport protein is responsible for keeping the bacteria alive by scavenging one essential metal (manganese), but at the same time also makes the bacteria vulnerable to being killed by another metal (zinc).”
Professor Matt Cooper from the Institute for Molecular Bioscience (IMB) says antibiotic-resistant strains of Streptococcus pneumoniae emerged more than 30 years ago, with up to 30 percent of these bacterial infections now considered multi-drug resistant.
“The Centers for Disease Control classify multi-drug resistant Streptococcus pneumoniae as a serious threat, with more than one million cases per year in the US alone,” says Cooper.
“New treatments are urgently needed and our research has provided insights into how the uptake of metal ions affects the ability of Streptococcus pneumoniae to cause disease.”
Jammed shut
The study reveals that the bacterial transporter (PsaBCA) uses a “spring-hammer” mechanism that binds zinc and manganese in different ways because of their difference in size.
The smaller size of zinc means that when it binds to the transporter, the mechanism closes too tightly around the zinc, causing an essential spring in the protein to unwind too far, jamming it shut, and blocking the transporter from being able to take up manganese.
McDevitt says that without manganese, the immune system could easily clear the body of these bacteria.
“For the first time, we understand how these types of transporters function,” he says.
“With this new information we can start to design the next generation of antibacterial agents to target and block these essential transporters.”
The research, funded by the Australian Research Council and the National Health and Medical Research Council, appears in Nature Chemical Biology.
Source: University of Queensland
To offer great customer service, institutions should consider a potential hire’s personality and interpersonal skills, in addition to technical skills, new research suggests.
The paper, published in the Journal of Applied Social Psychology, found that individuals who are identified through tests as highly conscientious are more likely to be aware of how good interpersonal interactions positively affect customer service—and are more likely to behave this way.
Stephan Motowidlo, a psychology professor at Rice University and the study’s lead author, says that while technical knowledge of a position is an important factor in successful job performance, it is only one part of the performance equation.
“Performance in a professional service capacity is not just knowing about what the product is and how it works, but how to sell and talk about it,” Motowidlo says.
He notes that historically institutions have been very good at examining the technical side of individuals’ jobs through IQ tests. He says that recently there has been an interest in the nontechnical side—the “softer, interpersonal” side.
“Much like intelligence impacts knowledge acquisition—driving what you learn and how much you know—personality traits impact how interpersonal skills are learned and used,” Motowidlo says.
“People who know more about what kinds of actions are successful in dealing with interpersonal service encounters—such as listening carefully, engaging warmly, and countering questions effectively—handle them more effectively, and their understanding of successful customer service is shaped by underlying personality characteristics.”
The research was conducted in two parts. Part one included a group of 99 participants—undergraduates enrolled in a psychology course at a small, private Southwestern university. Part two included a group of approximately 80 participants—employees at a community service volunteer agency.
In both parts of the study, participants completed a questionnaire ranking 50 customer-service encounters as effective or ineffective. Both parts of the study revealed that people who were accurate in judging the effectiveness of customer-service activities behaved more effectively and displayed higher levels of conscientiousness.
Motowidlo says he hopes the study will encourage future research about how personality helps individuals acquire the knowledge they need to perform their jobs effectively.
Rice University funded the study.
Source: Rice University
The post Personality traits linked to better customer service appeared first on Futurity.
A father’s cocaine use may make his sons—but not his daughters—less sensitive to the drug and therefore more likely to resist addictive behaviors, research with rats shows.
A new study presented at Neuroscience 2013, the annual meeting of the Society for Neuroscience, suggests cocaine causes epigenetic changes—that is, alterations to DNA that do not involve changing the sequence—in sperm, and that this reprogrammed information is transmitted to the males of the next generation.
Last year, researchers found that cocaine abuse in a male rat rendered the next generation of animals resistant to the rewarding properties of the drug—those offspring were less likely to take cocaine.
By looking at molecular signaling pathways in progeny that had never experienced the drug, they found changes in brain-derived neurotrophic factor (BDNF), a molecule known to be important for the rewarding efficacy of cocaine.
In the current study, the authors focused on the physiology of neurons in the offspring of cocaine-experienced fathers, before and after those offspring took cocaine, and found that the neurons were less sensitive to the drug and the animals less likely to succumb to addictive behaviors.
Less-sensitive neurons
In short, not only are rat offspring of cocaine-abusing fathers less likely to take the drug of their own volition, they are less likely to become addicted to it when it is administered to them.
In male rats whose fathers used cocaine, the neurons in the nucleus accumbens were less sensitive to cocaine.
That is, repeated cocaine use in the sons of cocaine-experienced fathers did not cause remodeling of excitatory AMPA receptors, which is thought to be critical for the development of addiction and cocaine craving.
“This adds to the growing body of evidence that cocaine abuse in a father rat can affect how his sons may respond to the drug—and points to potential mechanisms that contribute to this phenomenon,” says Mathieu Wimmer, a postdoctoral researcher in the laboratory of R. Christopher Pierce, associate professor of neuroscience in psychiatry at the University of Pennsylvania.
“Further research is needed to better understand how these behavior changes are passed down from one animal generation to the next, and eventually if the same holds true for humans.”
The National Institutes of Health and the National Institute on Drug Abuse supported the research.
Source: University of Pennsylvania
A four-million-year-old skull uncovered in Tibet fleshes out the fossil record of big cats and challenges theories about how and where they evolved.
The skull from the new species Panthera blytheae, a relative of the snow leopard, was excavated and described by a team led by Jack Tseng, a PhD student at the University of Southern California at the time of the discovery and now a postdoctoral fellow at the American Museum of Natural History in New York.
“This find suggests that big cats have a deeper evolutionary origin than previously suspected,” Tseng says. The announcement was made in a paper published in the Proceedings of the Royal Society B: Biological Sciences.
DNA evidence suggests that the so-called “big cats”—the Pantherinae subfamily, including lions, jaguars, tigers, leopards, snow leopards, and clouded leopards—diverged from their nearest evolutionary cousins, Felinae, which includes cougars, lynxes, and domestic cats, about 6.37 million years ago. However, the oldest fossils of big cats previously found are tooth fragments uncovered at Laetoli in Tanzania, the famed hominin site excavated by Mary Leakey in the 1970s, dating to just 3.6 million years ago.
Using magnetostratigraphy—dating fossils based on the distinctive patterns of reversals in the Earth’s magnetic field, which are recorded in layers of rock—Tseng and his team were able to estimate the age of the skull at between 4.10 and 5.95 million years old.
Spread out from central Asia
The find not only challenges previous suppositions about the evolution of big cats, it also helps place that evolution in a geographical context. The discovery occurred in a region that overlaps the majority of current big cat habitats, and suggests that the group evolved in central Asia and spread outward.
In addition, recent estimates suggested that the genus Panthera (lions, tigers, leopards, jaguars, and snow leopards) did not split from genus Neofelis (clouded leopards) until 3.72 million years ago—which the new find disproves.
Tseng, his wife Juan Liu, and Gary Takeuchi of the Natural History Museum of Los Angeles County and the Page Museum at the La Brea Tar Pits discovered the skull in 2010 while scouting in the remote border region between Pakistan and China—an area that takes a bumpy seven-day car ride to reach from Beijing.
Liu spotted over one hundred bones, likely deposited by a river, eroding out of a cliff. There, below the antelope limbs and jaws, were the crushed but largely complete remains of the skull.
“It was just lodged in the middle of all that mess,” Tseng says.
For the past three years, Tseng and his team have used both anatomical and DNA data to determine that the skull does, in fact, represent a new species. They plan to return to the site where they found the skull in the summer to search for more specimens.
The National Basic Research Program of China, the Chinese Academy of Sciences, the National Science Foundation, the American Museum of Natural History, the Smithsonian Institution (National Museum of Natural History), and the National Geographic Society funded the research.
New research makes it possible to optimize phosphors—a key component in white LED lighting—allowing for brighter, more efficient lights.
“These guidelines should permit the discovery of new and improved phosphors in a rational rather than trial-and-error manner,” says Ram Seshadri, a professor in the materials department and in the department of chemistry and biochemistry at the University of California, Santa Barbara.
The results of this research, performed jointly with materials professor Steven DenBaars and postdoctoral associate researcher Jakoah Brgoch, appear in The Journal of Physical Chemistry C.
LED (light-emitting diode) lighting has been a major topic of research due to the many benefits it offers over traditional incandescent or fluorescent lighting. LEDs use less energy, emit less heat, last longer, and are less hazardous to the environment than traditional lighting.
Already utilized in devices such as street lighting and televisions, LED technology is becoming more popular as it becomes more versatile and brighter.
According to Seshadri, all of the recent advances in solid-state lighting have come from devices based on gallium nitride LEDs, a technology that is largely credited to UC Santa Barbara materials professor Shuji Nakamura, who invented the first high-brightness blue LED.
In solid-state white lighting technology, phosphors are applied to the LED chip in such a way that the photons from the blue gallium nitride LED pass through the phosphor, which converts and mixes the blue light into the green-yellow-orange range of light. When combined evenly with the blue, the green-yellow-orange light yields white light.
The notion of multiple colors creating white may seem counterintuitive. With reflective pigments, mixing blue and yellow yields green; however, with emissive light, mixing such complementary colors yields white.
Finding a good host
Until recently, the preparation of phosphor materials was more an art than a science, based on finding crystal structures that act as hosts to activator ions, which convert the higher-energy blue light to lower-energy yellow/orange light.
“So far, there has been no complete understanding of what makes some phosphors efficient and others not,” Seshadri says. “In the wrong hosts, some of the photons are wasted as heat, and an important question is: How do we select the right hosts?”
As LEDs become brighter, for example as they are used in vehicle front lights, they also tend to get warmer, and, inevitably, this impacts phosphor properties adversely.
“Very few phosphor materials retain their efficiency at elevated temperatures,” Brgoch says. “There is little understanding of how to choose the host structure for a given activator ion such that the phosphor is efficient, and such that the phosphor efficiency is retained at elevated temperatures.”
However, using calculations based on density functional theory, the researchers have determined that the rigidity of the crystalline host structure is a key factor in the efficiency of phosphors: The better phosphors possess a highly rigid structure.
Furthermore, indicators of structural rigidity can be computed using density functional theory, allowing materials to be screened before they are prepared and tested.
More and more efficient
This breakthrough puts efforts for high-efficiency, high-brightness, solid-state lighting on a fast track. Lower-efficiency incandescent and fluorescent bulbs—which use relatively more energy to produce light—could become antiquated fixtures of the past.
“Our target is to get to 90 percent efficiency, or 300 lumens per watt,” says DenBaars, who also is a professor of electrical and computer engineering and co-director of the Solid State Lighting & Energy Center.
Current incandescent light bulbs, by comparison, are at roughly 5 percent efficiency, and fluorescent lamps are a little more efficient at about 20 percent.
“We have already demonstrated up to 60 percent efficiency in lab demos,” DenBaars says.
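As a rough sanity check, the percentages and the 300 lumens-per-watt target quoted above are mutually consistent if one assumes a reference efficacy for ideal white light of about 333 lm/W (that is, 300 / 0.90). That reference value is an inference for illustration, not a number stated in the article.

```python
# Back-of-envelope conversion between wall-plug efficiency and luminous
# efficacy, anchored to DenBaars's figure of "90 percent, or 300 lumens
# per watt." The implied ~333 lm/W reference is an assumption, not a
# value given in the article.

REFERENCE_LM_PER_W = 300 / 0.90  # ~333 lm/W for ideal white light (assumed)

def luminous_efficacy(efficiency):
    """Convert an efficiency fraction (0-1) to lumens per watt."""
    return efficiency * REFERENCE_LM_PER_W

for name, eff in [("incandescent (~5%)", 0.05),
                  ("fluorescent (~20%)", 0.20),
                  ("lab-demo LED (60%)", 0.60),
                  ("target LED (90%)", 0.90)]:
    print(f"{name}: {luminous_efficacy(eff):.0f} lm/W")
```

Under this assumption, the 5 percent incandescent figure works out to under 20 lm/W, which is why even the 60 percent lab demos already represent an order-of-magnitude gain.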
Source: UC Santa Barbara
When schools offer healthy snacks for lunch or in vending machines, children’s diets improve.
“When healthful food options are offered, students will select them, eat them, and improve their diet,” says Katherine Alaimo, associate professor of food science and human nutrition at Michigan State University.
“Our study shows that schools can make the kinds of changes required by the forthcoming USDA guidelines, and these changes can have a positive impact on children’s nutrition.”
The US Department of Agriculture will ask schools to implement its “Smart Snacks” nutrition standards on July 1, 2014. The recommendations will set limits on calories, salt, sugar, and fat in foods and beverages, as well as promote snack foods with more whole grains, low-fat dairy, fruits and vegetables.
For the study published in Childhood Obesity, researchers tested standards similar to the USDA’s new requirements and demonstrated that Smart Snacks has the potential to improve students’ eating habits.
Schools can sway kids’ diets
For example, schools that added healthful snacks to lunchtime a la carte lines or vending programs boosted their students’ overall daily consumption of fruit by 26 percent, vegetables by 14 percent, and whole grains by 30 percent. Students also increased their consumption of fiber, calcium, and vitamins A and C.
Researchers also compared schools that adopted a variety of nutrition programs and policies. Some schools made only limited changes, while others implemented more comprehensive programs to assess and improve the school’s nutrition environment.
Changes schools made included raising nutrition standards for snacks and beverages, offering taste tests of healthful foods and beverages to students, marketing healthful foods in school, and removing advertisements of unhealthful foods.
When schools implemented three or more new nutrition practices or policies, students’ overall diets improved.
“Creating school environments where the healthy choice is the easy choice allows students to practice lessons learned in the classroom and form good habits at an early age, laying a foundation for a healthy future,” says Shannon Carney Oleksyk, contributing author and healthy living adviser for Blue Cross Blue Shield of Michigan.
Researchers say what made the study unique, in part, was that they measured students’ overall diets, not just what they ate in school.
The Robert Wood Johnson Foundation’s Healthy Eating Research program and Michigan State’s AgBioResearch supported the project.
Source: Michigan State University
Given the impact of technology and social media on the media landscape, we need a consistent definition of “journalist,” argue researchers.
Recent debates in the US Senate about federal shield laws, which are laws protecting journalists from being forced to reveal their sources by judges during trials, as well as recent newsworthy events such as Edward Snowden’s and Bradley Manning’s release of US government secrets, have created questions as to how a journalist should be legally defined in today’s society.
Edson Tandoc, Jr., a doctoral candidate at the University of Missouri School of Journalism, has compiled the following broad definition of a journalist, based on extensive research on how society currently describes the role: A journalist is someone employed to regularly engage in gathering, processing, and disseminating news and information to serve the public interest.
“New technology has increased access to mass communication for many people, but simply having the ability to communicate on a large scale does not make a person a journalist,” Tandoc says. “In this age of information overload, it is vital for people to understand which information is trustworthy and which information is unreliable. It is also important to protect those sources of trustworthy information.”
In an article published in the online edition of the New York University Journal of Legislation and Public Policy, Tandoc and co-author Jonathan Peters, a media lawyer and assistant professor at the University of Dayton, examine definitions from scholarly texts, legal documents, and membership criteria of professional organizations of journalists to understand how the concept is defined across these domains.
They find that in journalism industry definitions, a recurring theme was employment, or being compensated monetarily for journalistic work. In legal and scholarly definitions, the researchers found a focus on social roles, such as government watchdogs or consumer protectors.
“We believe the definition we compiled is broad enough to include many new, pioneering forms of journalism,” Tandoc says. “However, because the journalism industry references employment, this definition excludes many people who engage in new forms of communication—such as unpaid bloggers and citizen journalists who gather, process, and disseminate news and information on matters of public concern—just because they do not derive their primary source of livelihood from their journalistic activities.”
Tandoc says a new definition of who qualifies as a journalist should not only move away from employment, but medium as well.
“It appears that there is a move in the journalism industry to do away with tying the definition to a specific medium. This is definitely a reflection of the changing times, as journalists no longer work for a single medium,” Tandoc adds.
Source: University of Missouri
Some young adults in the UK are proving wrong the adage “you can’t go home again,” as unemployment and other factors cause them to give up their independence to go back to live with their parents.
“The idea of a generation of young adults ‘boomeranging’ back to the parental home has recently gained widespread currency in the British press,” says Juliet Stone, a researcher at the ESRC Centre for Population Change (CPC) at the University of Southampton. “Our research aims to clarify this and examine the factors that contribute to their decision to return home.”
For a new study published in the journal Demography, Stone and colleagues used the long-running British Household Panel Survey (BHPS) to examine how major changes in young adults’ lives contribute to their decision to return to the safety-net of the parental home.
The survey, which began in 1991 and was aimed at understanding social and economic change at the individual and household level, interviewed 5,000 young men and women in their 20s and 30s every year until 2008.
The results indicate that overall, the act of returning to the parental home is in fact relatively uncommon, with an average of only 2 percent of young adults returning during the 17 years to 2008. There has been little change in the likelihood of returning over time, apart from among women in their early 20s.
The researchers suggest this reflects the rising number of young women going to university, who then return home after completing their studies. Returning is also much more common when young adults are in their early 20s and remains a relatively rare event once they reach their 30s.
All roads lead to home?
However, returning home is prevalent for certain subgroups of young adults, even when they reach their early 30s. Specific findings indicate that:
- After completing full-time education, around half of the surveyed men and women in their early 20s return home.
- About one-third of men and women who experience a relationship break-up return home.
- Men are more likely to live in the parental home than women, although the gender gap is narrowing.
- The association between economic disadvantage and living in the parental home has strengthened, especially among men.
“The study shows that completing higher education is one of the strongest determinants of returning to the parental home,” Stone says.
“With the labor market becoming more unpredictable, there are no guarantees of employment for graduates, and where in past decades the expectation was that, upon completing their course, they would move straight into employment, this can no longer be relied upon in the same way.”
“Finishing full-time education continues to be the major reason for returning to the parental home—to the extent that this is now considered ‘normal’ for young adults in their early twenties,” says Professor Ann Berrington.
“This is particularly striking in the current British context of recession, increased university tuition fees, and rising student debt.”
Breaking up is hard to do
Although relationship break-ups have been identified as a major factor influencing young people’s decision to return, this may depend on the young person’s gender and whether or not they have dependent children.
The researchers speculate that after a break-up, mothers and fathers may find support from different sources, with young lone mothers being more able to rely on the welfare state, and young, single, non-resident fathers requiring more support from their own parent(s).
However, more generally, the recent trend of forming relationships later in life and the growing popularity of higher education have led women to now show a greater similarity to men in their destinations on leaving home and in their likelihood of returning to the parental home.
Source: University of Southampton
An approach called “coactive learning” lets humans give robots feedback to find the best way to do a job, report engineers.
“We give the robot a lot of flexibility in learning,” says Ashutosh Saxena, assistant professor of computer science at Cornell University. “We build on our previous work in teaching robots to plan their actions, then the user can give corrective feedback.”
Saxena’s research team will report their work at the Neural Information Processing Systems conference in Lake Tahoe, California, on December 5-8.
Modern industrial robots, like those on automobile assembly lines, have no brains, just memory. An operator programs the robot to move through the desired action—the robot can then repeat the exact same action every time a car goes by.
But off the assembly line, things get complicated: A personal robot working in a home has to handle tomatoes more gently than canned goods. If it needs to pick up and use a sharp kitchen knife, it should be smart enough to keep the blade away from humans.
Feedback for robots
Saxena’s team, led by PhD student Ashesh Jain, set out to teach a robot to work on a supermarket checkout line. They modified a Baxter robot from Rethink Robotics in Boston, which designed it for assembly line work. Baxter can be programmed by moving its arms through an action, but also offers a mode where a human can make adjustments while an action is in progress.
The Baxter’s arms have two elbows and a rotating wrist, so it’s not always obvious to a human operator how best to move the arms to accomplish a particular task. So the researchers, drawing on previous work, added programming that lets the robot plan its own motions. It displays three possible trajectories on a touch screen where the operator can select the one that looks best.
Then humans can give corrective feedback. As the robot executes its movements, the operator can intervene, guiding the arms to fine-tune the trajectory. The robot has what the researchers call a “zero-G” mode, where the robot’s arms hold their position against gravity but allow the operator to move them.
The first correction may not be the best one, but it may be slightly better. The learning algorithm the researchers provided allows the robot to learn incrementally, refining its trajectory a little more each time the human operator makes adjustments. Even with weak but incrementally correct feedback from the user, the robot arrives at an optimal movement.
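The incremental scheme described above can be illustrated with a small sketch. This is not the Cornell team's code; it is a generic preference-perceptron-style coactive learner, with hand-picked trajectory features (path length and heights above the counter) chosen purely for illustration. The robot proposes its best-scoring trajectory, the human nudges it into a slightly better one, and the weights shift toward the improvement:

```python
import numpy as np

def features(trajectory):
    """Summarize a trajectory (list of 3D waypoints) as a feature vector.
    These features are illustrative assumptions, not the system's own."""
    traj = np.asarray(trajectory, dtype=float)
    path_length = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    min_height = traj[:, 2].min()   # lowest point of the motion
    max_height = traj[:, 2].max()   # highest point (fragile items: keep low)
    return np.array([path_length, min_height, max_height])

class CoactiveLearner:
    """Scores candidate trajectories with a linear model and learns
    incrementally from the operator's corrections."""

    def __init__(self, n_features, lr=0.5):
        self.w = np.zeros(n_features)
        self.lr = lr

    def rank(self, candidates):
        # Propose the candidate with the highest current score.
        scores = [self.w @ features(t) for t in candidates]
        return candidates[int(np.argmax(scores))]

    def update(self, proposed, improved):
        # Even a weakly better correction moves the weights toward
        # the user's preference; repeated rounds refine the behavior.
        self.w += self.lr * (features(improved) - features(proposed))
```

A typical round: the learner proposes a high, sweeping arc; the operator guides the arm into a lower path; after `update`, the learner scores low trajectories higher for that object.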
The robot learns to associate a particular trajectory with each type of object. A quick flip over might be the fastest way to move a cereal box, but that wouldn’t work with a carton of eggs. Also, since eggs are fragile, the robot is taught that they shouldn’t be lifted far above the counter. Likewise, the robot learns that sharp objects shouldn’t be moved in a wide swing; they are held in close, away from people.
In tests with users who were not part of the research team, most users were able to train the robot successfully on a particular task with just five rounds of corrective feedback. The robots also were able to generalize what they learned, adjusting when the object, the environment, or both were changed.
The US Army Research Office, a Microsoft Faculty Fellowship, and the National Science Foundation supported the research.
Source: Cornell University
Scientists may have found a way to speed up the detection of bacterial infection in dialysis patients by using the patient’s immune system to identify the pathogen.
Dialysis patients with chronic kidney disease need fast and accurate diagnosis of infection so doctors can administer the correct antibiotic treatment to ensure a fair chance of recovery. Nicholas Topley and Matthias Eberl from Cardiff University’s School of Medicine have shown proof-of-concept that a patient’s unique immune response to infection can be used to accurately detect within hours which organism is causing infection.
Together with commercial partners, the group is using these new insights, which they call “immune-fingerprints,” to inform the development of a point-of-care test.
“Infection is the biggest obstacle for any dialysis patient as it can seriously hamper their treatment and their chances of leading a normal life,” says Topley. “Through my own experience as a transplant patient, my research in dialysis patients over the past 25 years, and in talking to patient groups, I observed that conventional tests just aren’t quick enough and are often inconclusive, which can be a fatal shortcoming.”
Topley says they decided that more needed to be done to give patients every chance for a successful recovery and began looking to the body’s own natural defenses for inspiration.
“The immune system is capable of rapid, sensitive, and specific detection of a broad spectrum of microbes, which has been optimized over millions of years of evolution,” explains Matthias Eberl from Cardiff’s Institute of Infection and Immunity.
“A patient’s early immune response is therefore likely to provide a far better insight into the true nature and severity of microbial infections than current tests, which are based on the microbiological identification of the potential pathogen—a concept introduced by Robert Koch more than a century ago.”
Distinct immune signature
To test this theory, the scientists performed a detailed immunological and microbiological analysis of samples obtained from peritoneal dialysis patients with acute infection/peritonitis (a condition in which the thin tissue that lines the inner wall of the abdomen becomes inflamed). Laboratory tests revealed that each bacterial infection leaves a distinct immune signature that robustly discriminates between different types of infection.
This is the first time that scientists have attempted—and succeeded—in combining soluble and cellular components to define responses to specific germs in an infected human, and in translating the idea of immune fingerprints into a potential diagnostic tool. The research findings are published in the Journal of the American Society of Nephrology.
The data provide proof-of-concept that using immune fingerprints to inform the design of point-of-care tests will help target antibiotic prescriptions and improve patient management. With the immune fingerprint test, doctors would be able to differentiate rapidly between serious and benign infections and be able to prescribe suitable and accurate treatments.
A Baxter Healthcare Renal Discoveries Extramural Grant, the National Institute for Social Care and Health Research, and the Welsh Government funded the research.
Source: Cardiff University